de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl
2016-01-01
Background: High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. Aim: To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. Method: A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health, and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. Results: The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. Conclusions: The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. PMID:27324717
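The two content-validity indices named in this record reduce to simple proportions and averages. As a rough illustration with hypothetical ratings (not the study's data), the Python sketch below computes each item's I-CVI as the proportion of experts rating it 3 or 4 on the 4-point relevance scale, and the S-CVI/Ave as the mean of those I-CVIs:

```python
# Hypothetical expert relevance ratings (rows = questionnaire items, columns = experts),
# on a 4-point scale where ratings of 3 and 4 count as "relevant".
ratings = [
    [4, 3, 4, 4, 3],
    [4, 4, 2, 3, 4],
    [3, 2, 2, 4, 3],
]

def item_cvi(item_ratings):
    """I-CVI: proportion of experts giving the item a rating of 3 or 4."""
    return sum(r >= 3 for r in item_ratings) / len(item_ratings)

i_cvis = [item_cvi(item) for item in ratings]
s_cvi_ave = sum(i_cvis) / len(i_cvis)  # S-CVI/Ave: mean of the item-level I-CVIs

print(i_cvis)      # [1.0, 0.8, 0.6]
print(s_cvi_ave)   # 0.8
```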
ERIC Educational Resources Information Center
Stevens, Christopher John; Dascombe, Ben James
2015-01-01
Sports performance testing is one of the most common and important measures used in sport science. Performance testing protocols must have high reliability to ensure any changes are not due to measurement error or inter-individual differences. High validity is also important to ensure test performance reflects true performance. Time-trial…
NASA Astrophysics Data System (ADS)
Francesconi, Benjamin; Neveu-VanMalle, Marion; Espesset, Aude; Alhammoud, Bahjat; Bouzinac, Catherine; Clerc, Sébastien; Gascon, Ferran
2017-09-01
Sentinel-2 is an Earth Observation mission developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. The mission is based on a constellation of two satellites: Sentinel-2A launched in June 2015 and Sentinel-2B launched in March 2017. It offers an unprecedented combination of systematic global coverage of land and coastal areas, a high revisit of five days at the equator and two days at mid-latitudes under the same viewing conditions, high spatial resolution, and a wide field of view for multispectral observations from 13 bands in the visible, near infrared and short wave infrared range of the electromagnetic spectrum. The mission performances are routinely and closely monitored by the S2 Mission Performance Centre (MPC), which includes a consortium of Expert Support Laboratories (ESL). This publication focuses on the Sentinel-2 Level-1 product quality validation activities performed by the MPC. It presents an up-to-date status of the Level-1 mission performances at the beginning of the constellation routine phase. Level-1 performance validations routinely performed cover Level-1 Radiometric Validation (Equalisation Validation, Absolute Radiometry Vicarious Validation, Absolute Radiometry Cross-Mission Validation, Multi-temporal Relative Radiometry Vicarious Validation and SNR Validation), and Level-1 Geometric Validation (Geolocation Uncertainty Validation, Multi-spectral Registration Uncertainty Validation and Multi-temporal Registration Uncertainty Validation). Overall, the Sentinel-2 mission is proving very successful in terms of product quality, thereby fulfilling the promises of the Copernicus program.
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl
2016-06-20
High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations.
Shin, Marlena H; Sullivan, Jennifer L; Rosen, Amy K; Solomon, Jeffrey L; Dunn, Edward J; Shimada, Stephanie L; Hayes, Jennifer; Rivard, Peter E
2014-12-01
Increasing use of Agency for Healthcare Research and Quality's Patient Safety Indicators (PSIs) for hospital performance measurement intensifies the need to critically assess their validity. Our study examined the extent to which variation in PSI composite score is related to differences in hospital organizational structures or processes (i.e., criterion validity). In site visits to three Veterans Health Administration hospitals with high and three with low PSI composite scores ("low performers" and "high performers," respectively), we interviewed a cross-section of hospital staff. We then coded interview transcripts for evidence in 13 safety-related domains and assessed variation across high and low performers. Evidence of leadership and coordination of work/communication (organizational process domains) was predominantly favorable for high performers only. Evidence in the other domains was either mixed, or there were insufficient data to rate the domains. While we found some evidence of criterion validity, the extent to which variation in PSI rates is related to differences in hospitals' organizational structures/processes needs further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.
This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.
Use of the color trails test as an embedded measure of performance validity.
Henry, George K; Algina, James
2013-01-01
One hundred personal injury litigants and disability claimants referred for a forensic neuropsychological evaluation were administered both portions of the Color Trails Test (CTT) as part of a more comprehensive battery of standardized tests. Subjects who failed two or more free-standing tests of cognitive performance validity formed the Failed Performance Validity (FPV) group, while subjects who passed all free-standing performance validity measures were assigned to the Passed Performance Validity (PPV) group. A cutscore of ≥45 seconds to complete Color Trails 1 (CT1) was associated with a classification accuracy of 78%, good sensitivity (66%) and high specificity (90%), while a cutscore of ≥84 seconds to complete Color Trails 2 (CT2) was associated with a classification accuracy of 82%, good sensitivity (74%) and high specificity (90%). A CT1 cutscore of ≥58 seconds and a CT2 cutscore of ≥100 seconds were each associated with 100% positive predictive power at base rates from 20% to 50%.
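For readers who want to reproduce the kind of classification statistics quoted above, the sketch below (hypothetical counts, not the study's raw data) shows how sensitivity, specificity, overall accuracy, and positive predictive power at a given base rate follow from a 2x2 table of cutscore decisions:

```python
# Hypothetical 2x2 table for a completion-time cutscore flagging invalid performance.
tp = 33  # failed-validity subjects at or above the cutscore (true positives)
fn = 17  # failed-validity subjects below the cutscore (false negatives)
tn = 45  # passed-validity subjects below the cutscore (true negatives)
fp = 5   # passed-validity subjects at or above the cutscore (false positives)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

def positive_predictive_power(sens, spec, base_rate):
    """Probability that a flagged case is truly invalid, via Bayes' rule."""
    return (sens * base_rate) / (sens * base_rate + (1 - spec) * (1 - base_rate))

print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2))
print([round(positive_predictive_power(sensitivity, specificity, br), 2)
       for br in (0.2, 0.3, 0.4, 0.5)])
```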
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
ERIC Educational Resources Information Center
Santelices, Maria Veronica; Taut, Sandy
2011-01-01
This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…
The Validation of a Case-Based, Cumulative Assessment and Progressions Examination
Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David
2016-01-01
Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
Majumdar, Subhabrata; Basak, Subhash C
2018-04-26
Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but a large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has a high value of p but a small n (i.e. n < p). Motivated by the evidence of inadequacies of external validation in estimating the true predictive capability of a statistical model in recent literature, this paper performs an extensive and comparative study of this method with several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built using the LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data, hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario. Results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data.
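As general background to the comparison described here, the sketch below contrasts leave-one-out, K-fold, and single-split external validation for a LASSO model on simulated n < p data (not the amine-mutagen set, and with an arbitrary fixed penalty rather than a tuned one); repeating the external split illustrates the split-to-split variability the authors criticize:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score, train_test_split

# Simulated small-n, large-p data: 95 compounds, 300 descriptors, 10 of them informative.
X, y = make_regression(n_samples=95, n_features=300, n_informative=10, noise=5.0, random_state=0)

model = Lasso(alpha=1.0, max_iter=10000)  # LASSO: variable selection and fitting in one step

# Leave-one-out (scored by squared error, since R^2 is undefined on single held-out points)
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()

# 5-fold cross-validation scored by held-out R^2
kfold_r2 = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0),
                           scoring="r2").mean()

# "External" validation: one random 75/25 split, repeated to expose its instability
external_r2 = []
for seed in range(20):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    external_r2.append(Lasso(alpha=1.0, max_iter=10000).fit(X_tr, y_tr).score(X_te, y_te))

print("LOO mean squared error:", round(loo_mse, 1))
print("5-fold R^2:", round(kfold_r2, 2))
print("external R^2 over 20 splits: mean %.2f, sd %.2f"
      % (np.mean(external_r2), np.std(external_r2)))
```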
Loeding, B L; Greenan, J P
1998-12-01
The study examined the validity and reliability of four assessments, with three instruments per domain. Domains included generalizable mathematics, communication, interpersonal relations, and reasoning skills. Participants were deaf, legally blind, or visually impaired students enrolled in vocational classes at residential secondary schools. The researchers estimated the internal consistency reliability, test-retest reliability, and construct validity correlations of three subinstruments: student self-ratings, teacher ratings, and performance assessments. The data suggest that these instruments are highly internally consistent measures of generalizable vocational skills. The four performance assessments had high-to-moderate test-retest reliability estimates and were generally considered to possess acceptable validity and reliability.
Validity of Various Methods for Determining Velocity, Force, and Power in the Back Squat.
Banyard, Harry G; Nosaka, Ken; Sato, Kimitake; Haff, G Gregory
2017-10-01
To examine the validity of 2 kinematic systems for assessing mean velocity (MV), peak velocity (PV), mean force (MF), peak force (PF), mean power (MP), and peak power (PP) during the full-depth free-weight back squat performed with maximal concentric effort. Ten strength-trained men (26.1 ± 3.0 y, 1.81 ± 0.07 m, 82.0 ± 10.6 kg) performed three 1-repetition-maximum (1RM) trials on 3 separate days, encompassing lifts performed at 6 relative intensities including 20%, 40%, 60%, 80%, 90%, and 100% of 1RM. Each repetition was simultaneously recorded by a PUSH band and commercial linear position transducer (LPT) (GymAware [GYM]) and compared with measurements collected by a laboratory-based testing device consisting of 4 LPTs and a force plate. Trials 2 and 3 were used for validity analyses. Combining all 120 repetitions indicated that the GYM was highly valid for assessing all criterion variables while the PUSH was only highly valid for estimations of PF (r = .94, CV = 5.4%, ES = 0.28, SEE = 135.5 N). At each relative intensity, the GYM was highly valid for assessing all criterion variables except for PP at 20% (ES = 0.81) and 40% (ES = 0.67) of 1RM. Moreover, the PUSH was only able to accurately estimate PF across all relative intensities (r = .92-.98, CV = 4.0-8.3%, ES = 0.04-0.26, SEE = 79.8-213.1 N). PUSH accuracy for determining MV, PV, MF, MP, and PP across all 6 relative intensities was questionable for the back squat, yet the GYM was highly valid at assessing all criterion variables, with some caution given to estimations of MP and PP performed at lighter loads.
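The validity statistics reported for each device (r, CV, ES, SEE) can be computed from paired criterion and practical-device measurements. The sketch below uses invented peak-force pairs and one common set of definitions (typical error expressed as a CV, effect size as the mean bias standardized to the criterion SD, SEE from a linear regression); the paper's exact formulas may differ:

```python
import numpy as np
from scipy import stats

# Invented paired peak-force readings (N): laboratory criterion vs. practical device.
criterion = np.array([1850., 2010., 1760., 1925., 2100., 1880., 1990., 2050., 1800., 1950.])
device = np.array([1830., 2045., 1790., 1900., 2150., 1860., 2020., 2080., 1770., 1985.])

r, _ = stats.pearsonr(criterion, device)            # validity correlation
diff = device - criterion
es = diff.mean() / criterion.std(ddof=1)            # standardized mean bias (effect size)
typical_error = diff.std(ddof=1) / np.sqrt(2)
cv_percent = 100 * typical_error / np.mean((criterion + device) / 2)

# Standard error of the estimate from regressing the criterion on the device scores
slope, intercept, _, _, _ = stats.linregress(device, criterion)
residuals = criterion - (slope * device + intercept)
see = np.sqrt(np.sum(residuals ** 2) / (len(criterion) - 2))

print(round(r, 3), round(cv_percent, 1), round(es, 2), round(see, 1))
```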
2012-03-01
such as FASCODE is accomplished. The assessment is limited by the correctness of the models used; validating the models is beyond the scope of this...comparisons with other models and validation against data sets (Snell et al. 2000). 2.3.2 Previous Research Several LADAR simulations have been produced...performance models would better capture the atmosphere physics and climatological effects on these systems. Also, further validation needs to be performed
Performance Validation Approach for the GTX Air-Breathing Launch Vehicle
NASA Technical Reports Server (NTRS)
Trefny, Charles J.; Roche, Joseph M.
2002-01-01
The primary objective of the GTX effort is to determine whether or not air-breathing propulsion can enable a launch vehicle to achieve orbit in a single stage. Structural weight, vehicle aerodynamics, and propulsion performance must be accurately known over the entire flight trajectory in order to make a credible assessment. Structural, aerodynamic, and propulsion parameters are strongly interdependent, which necessitates a system approach to design, evaluation, and optimization of a single-stage-to-orbit concept. The GTX reference vehicle serves this purpose, by allowing design, development, and validation of components and subsystems in a system context. The reference vehicle configuration (including propulsion) was carefully chosen so as to provide high potential for structural and volumetric efficiency, and to allow the high specific impulse of air-breathing propulsion cycles to be exploited. Minor evolution of the configuration has occurred as analytical and experimental results have become available. With this development process comes increasing validation of the weight and performance levels used in system performance determination. This paper presents an overview of the GTX reference vehicle and the approach to its performance validation. Subscale test rigs and numerical studies used to develop and validate component performance levels and unit structural weights are outlined. The sensitivity of the equivalent, effective specific impulse to key propulsion component efficiencies is presented. The role of flight demonstration in development and validation is discussed.
Sisic, Nedim; Jelicic, Mario; Pehar, Miran; Spasic, Miodrag; Sekulic, Damir
2016-01-01
In basketball, anthropometric status is an important factor when identifying and selecting talents, while agility is one of the most vital motor performances. The aim of this investigation was to evaluate the influence of anthropometric variables and power capacities on different preplanned agility performances. The participants were 92 high-level, junior-age basketball players (16-17 years of age; 187.6±8.72 cm in body height, 78.40±12.26 kg in body mass), randomly divided into a validation and a cross-validation subsample. The predictor set consisted of 16 anthropometric variables and three tests of power capacities (Sargent jump, broad jump and medicine-ball throw). The criteria were three tests of agility: a T-Shape Test, a Zig-Zag Test, and a test of running with a 180-degree turn (T180). Forward stepwise multiple regressions were calculated for the validation subsample and then cross-validated. Cross-validation included correlations between observed and predicted scores, dependent-samples t-tests between predicted and observed scores, and Bland-Altman graphics. Analysis of variance identified centres as being advanced in most of the anthropometric indices and in the medicine-ball throw (all at P<0.05), with no significant between-position differences for the other studied motor performances. Multiple regression models originally calculated for the validation subsample were then cross-validated, and confirmed for the Zig-Zag Test (R of 0.71 and 0.72 for the validation and cross-validation subsample, respectively). Anthropometrics were not strongly related to agility performance, but leg length was found to be negatively associated with performance in basketball-specific agility. Power capacities were confirmed to be an important factor in agility. The results highlighted the importance of sport-specific tests when studying pre-planned agility performance in basketball. An improvement in power capacities will probably result in an improvement in agility in basketball athletes, while anthropometric indices should be used to identify those athletes who can achieve superior agility performance.
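The cross-validation checks listed in this abstract (observed-versus-predicted correlation, a dependent-samples t-test, and Bland-Altman limits of agreement) are straightforward to compute once a regression fitted on the validation subsample is applied to the cross-validation subsample. The sketch below uses simulated predictors and agility times and an ordinary least-squares fit, omitting the forward stepwise selection step for brevity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n):
    """Simulated standardized predictors and an agility time loosely driven by two of them."""
    X = rng.normal(size=(n, 3))
    y = 6.0 - 0.25 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(scale=0.2, size=n)
    return np.column_stack([np.ones(n), X]), y   # prepend an intercept column

X_val, y_val = simulate(46)      # validation subsample
X_cross, y_cross = simulate(46)  # cross-validation subsample

beta, *_ = np.linalg.lstsq(X_val, y_val, rcond=None)  # fit on the validation subsample
y_pred = X_cross @ beta                               # predict the cross-validation subsample

r, _ = stats.pearsonr(y_cross, y_pred)   # observed vs. predicted correlation
t, p = stats.ttest_rel(y_cross, y_pred)  # dependent-samples t-test for systematic bias

diff = y_pred - y_cross                  # Bland-Altman: bias and 95% limits of agreement
bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)
print(round(r, 2), round(p, 3), round(bias, 3),
      (round(bias - half_width, 3), round(bias + half_width, 3)))
```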
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
Teachable, high-content analytics for live-cell, phase contrast movies.
Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J
2010-09-01
CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing it with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), in each case verified by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth, with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.
Select Methodology for Validating Advanced Satellite Measurement Systems
NASA Technical Reports Server (NTRS)
Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.
2008-01-01
Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.
Dantas, Jose Luiz; Pereira, Gleber; Nakamura, Fabio Yuzo
2015-09-01
The five-kilometer time trial (TT5km) has been used to assess aerobic endurance performance without further investigation of its validity. This study aimed to perform a preliminary validation of the TT5km to rank well-trained cyclists based on aerobic endurance fitness and assess changes in aerobic endurance performance. After the incremental test, 20 cyclists (age = 31.3 ± 7.9 years; body mass index = 22.7 ± 1.5 kg/m²; maximal aerobic power = 360.5 ± 49.5 W) performed the TT5km twice, collecting performance (time to complete, absolute and relative power output, average speed) and physiological responses (heart rate and electromyography activity). The validation criteria were pacing strategy, absolute and relative reliability, validity, and sensitivity. The sensitivity index was obtained from the ratio between the smallest worthwhile change and the typical error. The TT5km showed high absolute (coefficient of variation < 3%) and relative (intraclass coefficient correlation > 0.95) reliability of performance variables, whereas it presented low reliability of physiological responses. The TT5km performance variables were highly correlated with the aerobic endurance indices obtained from the incremental test (r > 0.70). These variables showed an adequate sensitivity index (> 1). The TT5km is a valid test to rank the aerobic endurance fitness of well-trained cyclists and to differentiate changes in aerobic endurance performance. Coaches can detect performance changes through absolute (± 17.7 W) or relative power output (± 0.3 W·kg⁻¹), the time to complete the test (± 13.4 s), or the average speed (± 1.0 km·h⁻¹). Furthermore, TT5km performance can also be used to rank the athletes according to their aerobic endurance fitness.
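The reliability and sensitivity statistics cited here (CV, ICC, and a sensitivity index formed as the ratio of the smallest worthwhile change to the typical error) can be illustrated with two repeated trials. The sketch below uses invented completion times; the ICC form (a two-way consistency ICC(3,1)) and the 0.2 x between-subject SD convention for the smallest worthwhile change are assumptions, since the abstract does not specify them:

```python
import numpy as np

# Invented TT5km completion times (s) for the same eight cyclists on two occasions.
trial1 = np.array([452., 470., 441., 488., 463., 455., 477., 449.])
trial2 = np.array([449., 474., 438., 492., 460., 452., 480., 446.])
data = np.column_stack([trial1, trial2])
n, k = data.shape

diff = trial2 - trial1
typical_error = diff.std(ddof=1) / np.sqrt(2)            # absolute reliability
cv_percent = 100 * typical_error / data.mean()

# Relative reliability: ICC(3,1) from a two-way ANOVA decomposition
grand = data.mean()
ss_total = np.sum((data - grand) ** 2)
ss_subjects = k * np.sum((data.mean(axis=1) - grand) ** 2)
ss_trials = n * np.sum((data.mean(axis=0) - grand) ** 2)
ms_subjects = ss_subjects / (n - 1)
ms_error = (ss_total - ss_subjects - ss_trials) / ((n - 1) * (k - 1))
icc = (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

swc = 0.2 * trial1.std(ddof=1)           # smallest worthwhile change (0.2 x between-subject SD)
sensitivity_index = swc / typical_error  # > 1 is read as adequate sensitivity

print(round(cv_percent, 2), round(icc, 3), round(sensitivity_index, 2))
```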
Development and validation of a high-fidelity phonomicrosurgical trainer.
Klein, Adam M; Gross, Jennifer
2017-04-01
To validate the use of a high-fidelity phonomicrosurgical trainer. A high-fidelity phonomicrosurgical trainer, based on a previously validated model by Contag et al., was designed with multilayered vocal folds that more closely mimic the consistency of true vocal folds, containing intracordal lesions to practice phonomicrosurgical removal. A training module was developed to simulate the true phonomicrosurgical experience. A validation study with novice and expert surgeons was conducted. Novices and experts were instructed to remove the lesion from the synthetic vocal folds, and novices were given four training trials. Performances were measured by the amount of time spent and tissue injury (microflap, superficial, deep) to the vocal fold. An independent Student t test and Fisher exact tests were used to compare subjects. A matched-paired t test and Wilcoxon signed rank tests were used to compare novice performance on the first and fourth trials and assess for improvement. Experts completed the excision with fewer total errors than novices (P = .004) and caused less injury to the microflap (P = .05) and superficial tissue (P = .003). Novices improved their performance with training, making fewer total errors (P = .002) and fewer superficial tissue injuries (P = .02) and spending less time on removal (P = .002) after several practice trials. This high-fidelity phonomicrosurgical trainer has been validated for novice surgeons. It can distinguish between experts and novices, and after training it helped to improve novice performance. Laryngoscope, 127:888-893, 2017.
A Self-Validation Method for High-Temperature Thermocouples Under Oxidizing Atmospheres
NASA Astrophysics Data System (ADS)
Mokdad, S.; Failleau, G.; Deuzé, T.; Briaudeau, S.; Kozlova, O.; Sadli, M.
2015-08-01
Thermocouples are prone to significant drift in use, particularly when they are exposed to high temperatures. Indeed, high-temperature exposure can affect the response of a thermocouple progressively by changing the structure of the thermoelements and inducing inhomogeneities. Moreover, an oxidizing atmosphere contributes to thermocouple drift by changing the chemical nature of the metallic wires by the effect of oxidation. In general, severe uncontrolled drift of thermocouples results from these combined influences. A periodic recalibration of the thermocouple can be performed, but sometimes it is not possible to remove the sensor from the process. Self-validation methods for thermocouples provide a solution to avoid this drawback, but there are currently no high-temperature contact thermometers with self-validation capability at temperatures up to . LNE-Cnam has developed fixed-point devices integrated into the thermocouples, consisting of machined alumina-based devices for operation under oxidizing atmospheres. These devices require small amounts of pure metals (typically less than 2 g). They are suitable for self-validation of high-temperature thermocouples up to . In this paper the construction and the characterization of these integrated fixed-point devices are described. The phase-transition plateaus of gold, nickel, and palladium, which enable coverage of the temperature range between and , are assessed with this self-validation technique. Results of measurements performed at LNE-Cnam with the integrated self-validation module at several levels of temperature will be presented. The performance of the devices is assessed and discussed in terms of robustness and metrological characteristics. Uncertainty budgets are also proposed and detailed.
DOT National Transportation Integrated Search
1995-01-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
2016-12-01
Demonstration and Validation of Two-Coat High-Performance Coating System for Steel Structures in Corrosive Environments. Final Report on Project F12-AR06 (ERDC/CERL TR-16-27, Construction Engineering Research Laboratory, December 2016). Abstract: Department of Defense (DoD) installations
NEXT Performance Curve Analysis and Validation
NASA Technical Reports Server (NTRS)
Saripalli, Pratik; Cardiff, Eric; Englander, Jacob
2016-01-01
Performance curves of the NEXT thruster are highly important in determining the thruster's ability to perform toward mission-specific goals. New performance curves are proposed and examined here. The Evolutionary Mission Trajectory Generator (EMTG) is used to verify variations in mission solutions based on both the available thruster curves and the new curves generated. Furthermore, variations in beginning-of-life (BOL) and end-of-life (EOL) curves are also examined. Mission design results shown here validate the use of EMTG and the new performance curves.
Experimental validation of prototype high voltage bushing
NASA Astrophysics Data System (ADS)
Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.
2017-08-01
The prototype high voltage bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, viz. ceramic and fiber-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary for 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and to calculate the stress precisely, quantitative analysis was performed using Scanning Electron Microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the Finite Element (FE) analysis. FE analysis of the PHVB is performed to find the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment is performed considering ITER-like vacuum and electrical parameters. The initial HV test is performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand is performed for one hour. A voltage withstand test at 60 kV DC (20% higher than the rated voltage) has also been performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB with experimental validation data is presented.
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Comparing current definitions of return to work: a measurement approach.
Steenstra, I A; Lee, H; de Vroome, E M M; Busse, J W; Hogg-Johnson, S J
2012-09-01
Return-to-work (RTW) status is an often used outcome in work and health research. In low back pain, work is regarded as a normal activity a worker should return to in order to fully recover. Comparing outcomes across studies and even jurisdictions using different definitions of RTW can be challenging for readers in general and when performing a systematic review in particular. In this study, the measurement properties of previously defined RTW outcomes were examined with data from two studies from two countries. Data on RTW in low back pain (LBP) from the Canadian Early Claimant Cohort (ECC); a workers' compensation based study, and the Dutch Amsterdam Sherbrooke Evaluation (ASE) study were analyzed. Correlations between outcomes, differences in predictive validity when using different outcomes and construct validity when comparing outcomes to a functional status outcome were analyzed. In the ECC all definitions were highly correlated and performed similarly in predictive validity. When compared to functional status, RTW definitions in the ECC study performed fair to good on all time points. In the ASE study all definitions were highly correlated and performed similarly in predictive validity. The RTW definitions, however, failed to compare or compared poorly with functional status. Only one definition compared fairly on one time point. Differently defined outcomes are highly correlated, give similar results in prediction, but seem to differ in construct validity when compared to functional status depending on societal context or possibly birth cohort. Comparison of studies using different RTW definitions appears valid as long as RTW status is not considered as a measure of functional status.
Erdodi, Laszlo A; Sagar, Sanya; Seke, Kristian; Zuccato, Brandon G; Schwartz, Eben S; Roth, Robert M
2018-06-01
This study was designed to develop performance validity indicators embedded within the Delis-Kaplan Executive Function System (D-KEFS) version of the Stroop task. Archival data from a mixed clinical sample of 132 patients (50% male; mean age = 43.4 years; mean education = 14.1 years) clinically referred for neuropsychological assessment were analyzed. Criterion measures included the Warrington Recognition Memory Test-Words and two composites based on several independent validity indicators. An age-corrected scaled score ≤6 on any of the 4 trials reliably differentiated psychometrically defined credible and noncredible response sets with high specificity (.87-.94) and variable sensitivity (.34-.71). An inverted Stroop effect was less sensitive (.14-.29), but comparably specific (.85-.90) to invalid performance. Aggregating the newly developed D-KEFS Stroop validity indicators further improved classification accuracy. Failing the validity cutoffs was unrelated to self-reported depression or anxiety. However, it was associated with elevated somatic symptom report. In addition to processing speed and executive function, the D-KEFS version of the Stroop task can function as a measure of performance validity. A multivariate approach to performance validity assessment is generally superior to univariate models.
USDA-ARS?s Scientific Manuscript database
A single laboratory validation has been performed on a practical ultra high-performance liquid chromatography (UHPLC), diode array detection (DAD), and tandem mass spectrometry (MS) method for determination of yohimbine in yohimbe barks and related dietary supplements. Good separation was achieved u...
ERIC Educational Resources Information Center
Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James
2016-01-01
High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…
Development of self and peer performance assessment on iodometric titration experiment
NASA Astrophysics Data System (ADS)
Nahadi; Siswaningsih, W.; Kusumaningtyas, H.
2018-05-01
This study aims to describe the process of developing a reliable and valid assessment to measure students' performance on iodometric titration and the effect of self and peer assessment on students' performance. The self- and peer-assessment instrument provides valuable feedback for improving student performance. The developed assessment contains a rubric and tasks for facilitating self and peer assessment. The participants were 24 second-grade students at a vocational high school in Bandung. The participants were divided into two groups. The first 12 students were involved in the validity test of the developed assessment, while the remaining 12 students participated in the reliability test. The content validity was evaluated based on expert judgment. The content validity results based on expert judgment show that the developed performance assessment instrument is categorized as valid on each task, with reliability classified as very good. Analysis of the impact of the self and peer assessment implementation showed that the peer instrument supported the self assessment.
The interplay between academic performance and quality of life among preclinical students.
Shareef, Mohammad Abrar; AlAmodi, Abdulhadi A; Al-Khateeb, Abdulrahman A; Abudan, Zainab; Alkhani, Mohammed A; Zebian, Sanderlla I; Qannita, Ahmed S; Tabrizi, Mariam J
2015-10-31
The high academic performance of medical students greatly influences their professional competence in their long-term careers. Meanwhile, medical students also seek a good quality of life that can help them sustain their medical careers. This study examines the validity and reliability of the tool among preclinical students and tests the influence of their scholastic performance, along with gender and academic year, on their quality of life. A cross-sectional study was conducted by distributing the World Health Organization Quality of Life (WHOQOL-BREF) survey among medical students of years one to three at Alfaisal University. For validity, item discriminant validity (IDV) and confirmatory factor analysis were measured, and for reliability, Cronbach's α and internal item consistency (IIC) were examined. The association of GPA, gender and academic year with all major domains was assessed using Pearson's correlation, independent-samples t-test and one-way ANOVA, respectively. A total of 335 preclinical students responded to this questionnaire. The construct demonstrated adequate validity and good reliability. The high academic performance of students positively correlated with physical (r = 0.23, p < 0.001) and psychological health (r = 0.29, p < 0.001), social relations (r = 0.11, p = 0.03) and environment (r = 0.23, p < 0.001). Male students scored higher than their female peers in physical and psychological health. This study has identified a direct relationship between the academic performance of preclinical students and their quality of life. The WHOQOL-BREF is a valid and reliable tool among preclinical students, and the positive association of high academic performance with greater QOL suggests that academic achievers attain higher satisfaction while poor achievers need special attention for the improvement of their quality of life.
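Cronbach's α, the reliability statistic used above, is simple to compute from item-level responses. The sketch below uses invented 1-5 Likert answers for one domain (not the study's data) and adds corrected item-total correlations as a basic internal-item-consistency check:

```python
import numpy as np

# Invented 1-5 Likert responses (rows = respondents, columns = items of one QOL domain).
items = np.array([
    [4, 3, 4, 5, 4, 3, 4],
    [3, 3, 3, 4, 3, 2, 3],
    [5, 4, 5, 5, 4, 4, 5],
    [2, 2, 3, 3, 2, 2, 2],
    [4, 4, 4, 4, 5, 4, 4],
    [3, 2, 3, 3, 3, 3, 2],
])

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))

# Corrected item-total correlation: each item against the sum of the remaining items.
for j in range(k):
    rest = np.delete(items, j, axis=1).sum(axis=1)
    print("item", j + 1, round(np.corrcoef(items[:, j], rest)[0, 1], 2))

print("Cronbach's alpha:", round(alpha, 2))
```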
DOT National Transportation Integrated Search
2008-01-01
Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...
Reliable and valid assessment of point-of-care ultrasonography.
Todsen, Tobias; Tolsgaard, Martin Grønnebæk; Olsen, Beth Härstedt; Henriksen, Birthe Merete; Hillingsø, Jens Georg; Konge, Lars; Jensen, Morten Lind; Ringsted, Charlotte
2015-02-01
To explore the reliability and validity of the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale for point-of-care ultrasonography (POC US) performance. POC US is increasingly used by clinicians and is an essential part of the management of acute surgical conditions. However, the quality of performance is highly operator-dependent. Therefore, reliable and valid assessment of trainees' ultrasonography competence is needed to ensure patient safety. Twenty-four physicians, representing novices, intermediates, and experts in POC US, scanned 4 different surgical patient cases in a controlled set-up. All ultrasound examinations were video-recorded and assessed by 2 blinded radiologists using OSAUS. Reliability was examined using generalizability theory. Construct validity was examined by comparing performance scores between the groups and by correlating physicians' OSAUS scores with diagnostic accuracy. The generalizability coefficient was high (0.81) and a D-study demonstrated that 1 assessor and 5 cases would result in similar reliability. The construct validity of the OSAUS scale was supported by a significant difference in the mean scores between the novice group (17.0; SD 8.4) and the intermediate group (30.0; SD 10.1), P = 0.007, as well as between the intermediate group and the expert group (72.9; SD 4.4), P = 0.04, and by a high correlation between OSAUS scores and diagnostic accuracy (Spearman ρ correlation coefficient = 0.76; P < 0.001). This study demonstrates high reliability as well as evidence of construct validity of the OSAUS scale for assessment of POC US competence. Hence, the OSAUS scale may be suitable for both in-training as well as end-of-training assessment.
Bush, Hillary H; Eisenhower, Abbey; Briggs-Gowan, Margaret; Carter, Alice S
2015-01-01
Rooted in the theory of attention put forth by Mirsky, Anthony, Duncan, Ahearn, and Kellam (1991), the Structured Attention Module (SAM) is a developmentally sensitive, computer-based performance task designed specifically to assess sustained selective attention among 3- to 6-year-old children. The current study addressed the feasibility and validity of the SAM among 64 economically disadvantaged preschool-age children (mean age = 58 months; 55% female); a population known to be at risk for attention problems and adverse math performance outcomes. Feasibility was demonstrated by high completion rates and strong associations between SAM performance and age. Principal Factor Analysis with rotation produced robust support for a three-factor model (Accuracy, Speed, and Endurance) of SAM performance, which largely corresponded with existing theorized models of selective and sustained attention. Construct validity was evidenced by positive correlations between SAM Composite scores and all three SAM factors and IQ, and between SAM Accuracy and sequential memory. Value-added predictive validity was not confirmed through main effects of SAM on math performance above and beyond age and IQ; however, significant interactions by child sex were observed: Accuracy and Endurance both interacted with child sex to predict math performance. In both cases, the SAM factors predicted math performance more strongly for girls than for boys. There were no overall sex differences in SAM performance. In sum, the current findings suggest that interindividual variation in sustained selective attention, and potentially other aspects of attention and executive function, among young, high-risk children can be captured validly with developmentally sensitive measures.
NASA Astrophysics Data System (ADS)
Nahadi, Firman, Harry; Yulina, Erlis
2016-02-01
The purpose of this study was to develop a performance assessment instrument for assessing the psychomotor competence of high school students on salt hydrolysis concepts. The design used in this study was Research & Development (R&D), which consists of three phases: development, testing and application of the instrument. Subjects in this study were 93 high school students in class XI science. In the development phase, seven validators validated the 17-task instrument. In the test phase, 19 students were divided into three groups, tested at different times, to conduct the performance test in the salt hydrolysis lab work while observed by six raters. The first, second, and third groups respectively consisted of five, six, and eight students. In the application phase, two raters observed the performance of 74 students in the salt hydrolysis lab work over several sessions. The results showed that 16 of the 17 tasks of the developed performance assessment instrument can be stated to be valid, with CVR values of 1.00 and 0.714, while the remaining task was not valid, with a CVR value of 0.429, below the critical value (0.622). In the test phase, the reliability values of the instrument obtained were 0.951 for the five-student group, 0.806 for the six-student group and 0.743 for the eight-student group. From the interviews, teachers strongly agree with the performance instrument developed. They stated that the instrument was feasible to use with a maximum of six students in a single observation.
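The CVR values quoted above follow Lawshe's content validity ratio, CVR = (n_e − N/2)/(N/2), where n_e is the number of the N validators rating a task essential (or, here, valid). A minimal sketch, reproducing the 1.00, 0.714 and 0.429 values for a seven-member panel and comparing them against the 0.622 critical value cited in the study:

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio for a panel of n_experts raters."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

N = 7                  # seven validators, as in the abstract
CRITICAL_CVR = 0.622   # critical value used in the study for N = 7

for n_e in (7, 6, 5):
    value = cvr(n_e, N)
    print(n_e, round(value, 3), value >= CRITICAL_CVR)
# 7 -> 1.0 (valid), 6 -> 0.714 (valid), 5 -> 0.429 (below the critical value)
```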
Sattler, Tine; Sekulic, Damir; Spasic, Miodrag; Osmankac, Nedzad; Vicente João, Paulo; Dervisevic, Edvin; Hadzic, Vedran
2016-01-01
Previous investigations noted the potential importance of isokinetic strength in rapid muscular performances, such as jumping. This study aimed to identify the influence of isokinetic knee strength on specific jumping performance in volleyball. The secondary aim of the study was to evaluate the reliability and validity of two volleyball-specific jumping tests. The sample comprised 67 female (21.96±3.79 years; 68.26±8.52 kg; 174.43±6.85 cm) and 99 male (23.62±5.27 years; 84.83±10.37 kg; 189.01±7.21 cm) high-level volleyball players who competed in the 1st and 2nd National Division. Subjects were randomly divided into validation (N.=55 and 33 for males and females, respectively) and cross-validation subsamples (N.=54 and 34 for males and females, respectively). The set of predictors included isokinetic tests evaluating the eccentric and concentric strength capacities of the knee extensors and flexors for the dominant and non-dominant leg. The main outcome measure for the isokinetic testing was peak torque (PT), which was later normalized for body mass and expressed as PT/kg. Block-jump and spike-jump performances were measured over three trials and observed as criteria. Forward stepwise multiple regressions were calculated for the validation subsamples and then cross-validated. Cross-validation included correlations and t-test differences between observed and predicted scores, and Bland-Altman graphics. Jumping tests were found to be reliable (spike jump: ICC of 0.79 and 0.86; block jump: ICC of 0.86 and 0.90; for males and females, respectively), and their validity was confirmed by significant t-test differences between 1st vs. 2nd division players. Isokinetic variables were found to be significant predictors of jumping performance in females, but not among males. In females, the isokinetic knee measures were shown to be stronger and more valid predictors of the block jump (42% and 64% of the explained variance for the validation and cross-validation subsample, respectively) than of the spike jump (39% and 34% of the explained variance for the validation and cross-validation subsample, respectively). Differences between the prediction models calculated for males and females are mostly explained by gender-specific biomechanics of jumping. The study defined the importance of isokinetic knee strength in volleyball jumping performance in female athletes. Further studies should evaluate the association between isokinetic ankle strength and volleyball-specific jumping performances. The results reinforce the need for cross-validation of prediction models in sport and exercise sciences.
Performance validation of the ANSER control laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1995-01-01
The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.
NASA Astrophysics Data System (ADS)
Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.
2018-04-01
This Research and Development (R&D) study aims to produce a guided-discovery-learning-based module on the topic of acid-base and to determine its validity and practicality in learning. Module development used the Four-D (4-D) model (define, design, develop and disseminate). This research was performed up to the development stage. The research instruments were validity and practicality questionnaires. The module was validated by five experts (three chemistry lecturers of Universitas Negeri Padang and two chemistry teachers of SMAN 9 Padang). The practicality test was done by two chemistry teachers and 30 students of SMAN 9 Padang. Cohen's kappa was used to analyze validity and practicality. The average moment kappa was 0.86 for validity, and those for practicality were 0.85 by teachers and 0.76 by students, all revealing the high category. It can be concluded that validity and practicality were established for high school chemistry learning.
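The abstract reports moment-kappa values derived from the validity and practicality questionnaires without spelling out the conversion, so as general background the sketch below computes the classical Cohen's kappa for chance-corrected agreement between two hypothetical raters; values in the 0.76-0.86 range are the ones the study labels as high:

```python
from collections import Counter

# Hypothetical judgments ("valid" / "revise") from two raters on ten module components.
rater_a = ["valid", "valid", "revise", "valid", "valid", "valid", "revise", "valid", "valid", "valid"]
rater_b = ["valid", "valid", "revise", "valid", "revise", "valid", "revise", "valid", "valid", "valid"]

n = len(rater_a)
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

counts_a, counts_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_expected = sum(counts_a[c] * counts_b[c] for c in categories) / n ** 2

kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))  # about 0.74 for these made-up ratings
```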
NASA Technical Reports Server (NTRS)
Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles
2006-01-01
SIM Planetquest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risks and costs, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents the description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.
Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development
Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei
2014-01-01
Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: compatibility with LC/MS (free of detergents, etc.); high protein integrity (minimal level of protein degradation and non-biological PTMs); compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling; and lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, with primary use for sample preparation method development and optimization; and pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring.
Armistead-Jehle, Patrick; Cole, Wesley R; Stegman, Robert L
2018-02-01
The study was designed to replicate and extend previous findings demonstrating high rates of invalid neuropsychological testing in military service members (SMs) with a history of mild traumatic brain injury (mTBI) assessed in the context of a medical evaluation board (MEB). Two hundred thirty-one active duty SMs (61 of whom were undergoing an MEB) underwent neuropsychological assessment. Performance validity (Word Memory Test) and symptom validity (MMPI-2-RF) test data were compared across those evaluated within disability (MEB) and clinical contexts. As in previous studies, significantly more individuals in an MEB context failed performance (MEB = 57%, non-MEB = 31%) and symptom validity testing (MEB = 57%, non-MEB = 22%), and performance validity testing had a notable effect on cognitive test scores. Performance and symptom validity test failure rates did not vary as a function of the reason for disability evaluation when divided into behavioral versus physical health conditions. These data are consistent with past studies, and extend those studies by including symptom validity testing and investigating the effect of reason for MEB. This and previous studies demonstrate that more than 50% of SMs seen in the context of an MEB will fail performance validity tests and over-report on symptom validity measures. These results emphasize the importance of using both performance and symptom validity testing when evaluating SMs with a history of mTBI, especially if they are being seen for disability evaluations, in order to ensure the accuracy of cognitive and psychological test data. Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Towards Virtual FLS: Development of a Peg Transfer Simulator
Arikatla, Venkata S; Ahn, Woojin; Sankaranarayanan, Ganesh; De, Suvranu
2014-01-01
Background Peg transfer is one of five tasks in the Fundamentals of Laparoscopic Surgery (FLS) program. We report the development and validation of a Virtual Basic Laparoscopic Skill Trainer-Peg Transfer (VBLaST-PT©) simulator for automatic real-time scoring and objective quantification of performance. Methods We introduced new techniques to allow bi-manual manipulation of pegs and automatic scoring/evaluation while maintaining high quality of simulation. We performed a preliminary face and construct validation study with 22 subjects divided into two groups: experts (PGY 4–5, fellows and practicing surgeons) and novices (PGY 1–3). Results Face validation showed high scores for all aspects of the simulation. A two-tailed Mann-Whitney U-test showed significant differences between the two groups on completion time (p=0.003), FLS score (p=0.002) and the VBLaST-PT© score (p=0.006). Conclusions VBLaST-PT© is a high-quality virtual simulator that showed both face and construct validity. PMID:24030904
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
NASA Astrophysics Data System (ADS)
Saha, Gouranga Chandra
Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed so that they are doable by the students for whom they are written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed through an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performance at the item level was very poor for some items. Possible reasons for the poor performance are discussed and suggestions on how to remediate the deficiencies are made. Empirical evidence for the validity and reliability of the instrument is presented from both the classical and the modern validity criteria points of view. Limitations of the study are identified. Finally, implications of the study and directions for further research are discussed.
Validity and Reliability of Accelerometers in Patients With COPD: A SYSTEMATIC REVIEW.
Gore, Shweta; Blackwood, Jennifer; Guyette, Mary; Alsalaheen, Bara
2018-05-01
Reduced physical activity is associated with poor prognosis in chronic obstructive pulmonary disease (COPD). Accelerometers have greatly improved quantification of physical activity by providing information on step counts, body positions, energy expenditure, and magnitude of force. The purpose of this systematic review was to compare the validity and reliability of accelerometers used in patients with COPD. An electronic database search of MEDLINE and CINAHL was performed. Study quality was assessed with the Strengthening the Reporting of Observational Studies in Epidemiology checklist while methodological quality was assessed using the modified Quality Appraisal Tool for Reliability Studies. The search yielded 5392 studies; 25 met inclusion criteria. The SenseWear Pro armband reported high criterion validity under controlled conditions (r = 0.75-0.93) and high reliability (ICC = 0.84-0.86) for step counts. The DynaPort MiniMod demonstrated highest concurrent validity for step count using both video and manual methods. Validity of the SenseWear Pro armband varied between studies especially in free-living conditions, slower walking speeds, and with addition of weights during gait. A high degree of variability was found in the outcomes used and statistical analyses performed between studies, indicating a need for further studies to measure reliability and validity of accelerometers in COPD. The SenseWear Pro armband is the most commonly used accelerometer in COPD, but measurement properties are limited by gait speed variability and assistive device use. DynaPort MiniMod and Stepwatch accelerometers demonstrated high validity in patients with COPD but lack reliability data.
Rotordynamic Instability Problems in High-Performance Turbomachinery
NASA Technical Reports Server (NTRS)
1984-01-01
Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery are discussed. Emphasis is placed on resolving problems in the experimental validation of the forces that influence rotordynamics. Programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.
Onboard Processing and Autonomous Operations on the IPEX Cubesat
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Flatley, Tom; Crum, Gary; Geist, Alessandro; Lin, Michael; Williams, Austin; Bellardo, John; Puig-Suari, Jordi;
2012-01-01
IPEX is a 1U CubeSat sponsored by the NASA Earth Science Technology Office (ESTO), the goals of which are to: (1) flight validate high-performance flight computing; (2) flight validate onboard instrument data processing and product generation software; (3) flight validate autonomous operations for instrument processing; and (4) enhance NASA outreach and university ties.
Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S
2015-08-01
We developed a novel procedure-specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of the simulation (face validity) and its usefulness for training (content validity). Concurrent validity was evaluated by correlating performance on the virtual reality renorrhaphy task with performance in a live porcine robotic partial nephrectomy (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance on the procedure-specific virtual reality task correlated highly with performance in a porcine model (concurrent validity). Future efforts will integrate procedure-specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Validating the Assessment for Measuring Indonesian Secondary School Students Performance in Ecology
NASA Astrophysics Data System (ADS)
Rachmatullah, A.; Roshayanti, F.; Ha, M.
2017-09-01
The aims of this study were to validate the American Association for the Advancement of Science (AAAS) ecology assessment and to examine the performance of Indonesian secondary school students on it. A total of 611 Indonesian secondary school students (218 middle school students and 393 high school students) participated in the study. Forty-five AAAS assessment items on the topic of Interdependence in Ecosystems were divided into two versions, with each version sharing 21 common items. A linking-item method was used to combine the two versions, and Rasch analyses were then used to validate the instrument. An independent-samples t-test was also run to compare the performance of Indonesian and American students based on mean item difficulty. We found that, of the 45 items, three were identified as misfitting. We also found that both Indonesian middle and high school students performed significantly worse than American students, with very large and medium effect sizes, respectively. We discuss our findings with regard to validation issues and the connection to Indonesian students' science literacy.
Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S
2008-01-01
This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.
ERIC Educational Resources Information Center
Schubert, T. F., Jr.; Kim, E. M.
2009-01-01
The use of Miller's Theorem in the determination of the high-frequency cutoff frequency of transistor amplifiers was recently challenged by a paper published in this TRANSACTIONS. Unfortunately, that paper provided no simulation or experimental results to bring credence to the challenge or to validate the alternate method of determination…
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui
2013-01-01
This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…
Predictive Variables of Half-Marathon Performance for Male Runners.
Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A; García-López, Juan
2017-06-01
The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long distance male runners, considered from different approaches. Furthermore, they improved the prediction performance of previous studies, which makes them a highly practical application in the field of training and performance.
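The two-phase establish-then-validate approach described above can be illustrated with a minimal regression sketch in Python: fit a multiple linear regression for finish time on one sample, then correlate its predictions with observed times in an independent sample. The predictors, coefficients and data below are placeholders, not the study's actual equations.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def simulate(n):
    # Hypothetical predictors: [VO2max, anaerobic-threshold speed, step length]
    X = rng.normal([55.0, 14.0, 1.30], [5.0, 1.5, 0.10], size=(n, 3))
    minutes = 160 - 0.8 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 3, n)
    return X, minutes

X1, time1 = simulate(48)                      # "Phase 1": establish the equation
model = LinearRegression().fit(X1, time1)

X2, time2 = simulate(30)                      # "Phase 2": independent validation
r, _ = pearsonr(model.predict(X2), time2)
print(f"R^2 in Phase 1 = {model.score(X1, time1):.3f}, r in Phase 2 = {r:.2f}")
```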
A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle
Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive, so inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series are also presented, along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.
Yang, Xing-Xin; Zhang, Xiao-Xia; Chang, Rui-Miao; Wang, Yan-Wei; Li, Xiao-Ni
2011-01-01
A simple and reliable high performance liquid chromatography (HPLC) method has been developed for the simultaneous quantification of five major bioactive components in ‘Shu-Jin-Zhi-Tong’ capsules (SJZTC), for the purposes of quality control of this commonly prescribed traditional Chinese medicine. Under the optimum conditions, excellent separation was achieved, and the assay was fully validated in terms of linearity, precision, repeatability, stability and accuracy. The validated method was applied successfully to the determination of the five compounds in SJZTC samples from different production batches. The HPLC method can be used as a valid analytical method to evaluate the intrinsic quality of SJZTC. PMID:29403711
Marwah, Ashok; Marwah, Padma; Lardy, Henry
2005-09-25
17alpha-Methyltestosterone (MT) is used to manipulate the gender of a variety of fish species. A high performance liquid chromatography (HPLC) internal standard method for the determination of 17alpha-methyltestosterone in fish feed, using 3beta-methoxy-17beta-hydroxyandrost-5-en-7-one as the internal standard (IS), has been developed. The method has been validated for the quantitation of MT in fish feed using 245 nm UV absorbance as the parent wavelength and 255 nm as a qualifier wavelength. The method was validated over the concentration range of 15.0-120 mg/kg of 17alpha-methyltestosterone in fish feed. The method was also found to be suitable for other feeds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake
Google is encouraging the development of advanced photovoltaic inverters with high power density by holding a public competition and offering a prize for the best-performing high-power-density inverter developed. NREL will perform performance testing and validation for all inverters entered into the competition and provide the results to Google.
Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT)
Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S.A.
2016-01-01
Abstract Objective: To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), test criterion validity and inter-rater reliability/agreement, and describe performance. Design: Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Setting: Study 2 included 10 cancer MDMs in England. Participants: Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. Intervention: None. Main Outcome Measures: Tool development, validity, reliability/agreement and variability in MDT performance. Results: Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6–7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). Conclusions: MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high, but this was inconsistent with the reliability coefficients and warrants further investigation. If further validated, MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. PMID:27084499
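The agreement statistics reported above (percentage of absolute agreement and agreement within one point) can be computed directly from paired ratings; the sketch below uses invented ratings for two observers and is not based on the study's data.

```python
import numpy as np

# Hypothetical ratings of 10 MDM domains by two observers on a 1-5 scale
clinical = np.array([4, 3, 5, 2, 4, 3, 4, 5, 3, 4])
non_clinical = np.array([4, 4, 5, 2, 3, 3, 4, 4, 3, 5])

absolute_agreement = np.mean(clinical == non_clinical) * 100
within_one_point = np.mean(np.abs(clinical - non_clinical) <= 1) * 100

print(f"absolute agreement: {absolute_agreement:.0f}%")
print(f"agreement within 1 point: {within_one_point:.0f}%")
```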
Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT).
Harris, Jenny; Taylor, Cath; Sevdalis, Nick; Jalil, Rozh; Green, James S A
2016-06-01
To develop a tool for independent observational assessment of cancer multidisciplinary team meetings (MDMs), test criterion validity and inter-rater reliability/agreement, and describe performance. Clinicians and experts in teamwork used a mixed-methods approach to develop and refine the tool. Study 1 observers rated pre-determined optimal/sub-optimal MDM film excerpts and Study 2 observers independently rated video-recordings of 10 MDMs. Study 2 included 10 cancer MDMs in England. Testing was undertaken by 13 health service staff and a clinical and non-clinical observer. There was no intervention. Outcome measures were tool development, validity, reliability/agreement and variability in MDT performance. Study 1: Observers were able to discriminate between optimal and sub-optimal MDM performance (P ≤ 0.05). Study 2: Inter-rater reliability was good for 3/10 domains. Percentage of absolute agreement was high (≥80%) for 4/10 domains and percentage agreement within 1 point was high for 9/10 domains. Four MDTs performed well (scored 3+ in at least 8/10 domains), 5 MDTs performed well in 6-7 domains and 1 MDT performed well in only 4 domains. Leadership and chairing of the meeting, the organization and administration of the meeting, and clinical decision-making processes all varied significantly between MDMs (P ≤ 0.01). MDT-MOT demonstrated good criterion validity. Agreement between clinical and non-clinical observers (within one point on the scale) was high, but this was inconsistent with the reliability coefficients and warrants further investigation. If further validated, MDT-MOT might provide a useful mechanism for the routine assessment of MDMs by the local workforce to drive improvements in MDT performance. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Frey, Alexander J; Wang, Qingqing; Busch, Christine; Feldman, Daniel; Bottalico, Lisa; Mesaros, Clementina A; Blair, Ian A; Vachani, Anil; Snyder, Nathaniel W
2016-12-01
A multiplexed quantitative method for the analysis of three major unconjugated steroids in human serum by stable isotope dilution liquid chromatography-high resolution mass spectrometry (LC-HRMS) was developed and validated on a Q Exactive Plus hybrid quadrupole/Orbitrap mass spectrometer. The quantification utilized isotope dilution and Girard P derivatization of the keto-groups of testosterone (T), androstenedione (AD) and dehydroepiandrosterone (DHEA) to improve ionization efficiency with electrospray ionization. Major isomeric compounds to T and DHEA, namely the inactive epimer of testosterone (epiT) and the metabolite of AD, 5α-androstanedione (5α-AD), were completely resolved on a biphenyl column within an 18 min method. Inter- and intra-day method validation using LC-HRMS with qualifying product ions was performed and acceptable analytical performance was achieved. The method was further validated by comparing steroid levels from 100 μL of serum from young versus older subjects. Since this approach provides high-dimensional HRMS data, untargeted analysis by age group was also performed. DHEA and T were detected among the top analytes most significantly different between the two groups after untargeted LC-HRMS analysis, as well as a number of other still unknown metabolites, indicating the potential for combined targeted/untargeted steroid analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara
2017-12-01
The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development of a visuospatial working memory test, including a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation design was performed including 252 individuals who were randomly assigned to either a control group (n = 48) or to 1 of 3 simulation groups who were requested to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed visuospatial working memory impairment of medium effect size compared with healthy individuals. Simulation groups committed significantly more errors and had shorter response times than patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates for the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test presented in this study therefore appears sensitive in indicating cognitive impairment in adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
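As a hedged illustration of the analysis strategy described above (a logistic-regression-derived validity index evaluated by ROC analysis with a specificity-first cut-off), the following Python sketch uses synthetic error counts and response times; the features, group sizes and threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Hypothetical data: rows = participants, columns = [error count, mean response time in ms]
genuine = np.column_stack([rng.poisson(6, 75), rng.normal(900, 120, 75)])    # patients with ADHD
feigned = np.column_stack([rng.poisson(14, 75), rng.normal(700, 120, 75)])   # instructed simulators
X = np.vstack([genuine, feigned])
y = np.array([0] * 75 + [1] * 75)            # 1 = noncredible performance

clf = LogisticRegression().fit(X, y)
index = clf.predict_proba(X)[:, 1]           # the derived "validity index"
print("AUC:", round(roc_auc_score(y, index), 2))

# Choose a cut-off that keeps specificity at or above 95%, then report sensitivity
fpr, tpr, thresholds = roc_curve(y, index)
keep = fpr <= 0.05
print("sensitivity at >=95% specificity:", round(tpr[keep].max(), 2))
```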
NASA Technical Reports Server (NTRS)
Morgan, Philip E.
2004-01-01
This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; apply lessons learned from the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.
Collins, Anne; Ross, Janine
2017-01-01
We performed a systematic review to identify all original publications describing the asymmetric inheritance of cellular organelles in normal animal eukaryotic cells and to critique the validity and imprecision of the evidence. Searches were performed in Embase, MEDLINE and PubMed up to November 2015. Screening of titles, abstracts and full papers was performed by two independent reviewers. Data extraction and validity assessment were performed by one reviewer and checked by a second reviewer. Study quality was assessed using the SYRCLE risk of bias tool for animal studies and by developing validity tools for the experimental model, organelle markers and imprecision. A narrative data synthesis was performed. We identified 31 studies (34 publications) of the asymmetric inheritance of organelles after mitotic or meiotic division. Studies of the asymmetric inheritance of centrosomes (n = 9), endosomes (n = 6), P granules (n = 4), the midbody (n = 3), mitochondria (n = 3), proteasomes (n = 2), spectrosomes (n = 2), cilia (n = 2) and endoplasmic reticulum (n = 2) were identified. Asymmetry was defined and quantified by variable methods. Assessment of the statistical reliability of the results indicated that only two studies (7%) were judged to be of low concern; the majority of studies (77%) were 'unclear' and five (16%) were judged to be of 'high concern', the main reason being a low number of technical repeats (<10). Assessment of model validity indicated that the majority of studies (61%) were judged to be valid, ten studies (32%) were unclear and two studies (7%) were judged to be of 'high concern'; both described 'stem cells' without providing experimental evidence to confirm this (pluripotency and self-renewal). Assessment of marker validity indicated that no studies were of low concern; most studies were unclear (96.5%), indicating that there were insufficient details to judge whether the markers were appropriate. One study was of high concern for marker validity owing to the contradictory results of two markers for the same organelle. For most studies the validity and imprecision of the results could not be confirmed. In particular, data were limited by a lack of reporting of interassay variability, sample size calculations, controls and functional validation of organelle markers. An evaluation of 16 systematic reviews containing cell assays found that only 50% reported adherence to PRISMA or ARRIVE reporting guidelines and 38% reported a formal risk of bias assessment. 44% of the reviews did not consider how relevant or valid the models were to the research question. 75% of reviews did not consider how valid the markers were. 69% of reviews did not consider the impact of the statistical reliability of the results. Future systematic reviews in basic or preclinical research should ensure rigorous reporting of the statistical reliability of the results in addition to the validity of the methods. Increased awareness of the importance of reporting guidelines and validation tools is needed in the scientific community. PMID:28562636
Predictive Variables of Half-Marathon Performance for Male Runners
Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A.; García-López, Juan
2017-01-01
The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long distance male runners, considered from different approaches. Furthermore, they improved the prediction performance of previous studies, which makes them a highly practical application in the field of training and performance. Key points: (1) The present study obtained four equations involving anthropometric, training, physiological and biomechanical variables to estimate half-marathon performance. (2) These equations were validated in a different population, demonstrating narrower prediction ranges than previous studies and also their consistency. (3) As a novelty, some biomechanical variables (i.e. step length and step rate at RCT, and maximal step length) have been related to half-marathon performance. PMID:28630571
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of the subsequent landslide susceptibility maps, with a particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model can be misleading when a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
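The contrast drawn above between single holdout validation and spatial cross-validation can be sketched with scikit-learn, using grouped cross-validation as a crude stand-in for spatially blocked resampling; the data, predictors and block structure below are synthetic assumptions, not the study's inventories or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold, cross_val_score, train_test_split

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))                  # e.g. slope, curvature, lithology score, ...
blocks = rng.integers(0, 10, size=n)         # sub-regions standing in for spatial blocks
# Outcome partly driven by the predictors and partly by a block-level (inventory) bias
y = ((X[:, 0] + 0.5 * X[:, 1] + 0.3 * blocks / 10 + rng.normal(0, 1, n)) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000)

# Random single holdout
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Spatially blocked cross-validation: test folds never share blocks with training folds
spatial_auc = cross_val_score(model, X, y, groups=blocks,
                              cv=GroupKFold(n_splits=5), scoring="roc_auc").mean()

print(f"holdout AUROC: {holdout_auc:.3f}, spatial CV AUROC: {spatial_auc:.3f}")
```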
Validation environment for AIPS/ALS: Implementation and results
NASA Technical Reports Server (NTRS)
Segall, Zary; Siewiorek, Daniel; Caplan, Eddie; Chung, Alan; Czeck, Edward; Vrsalovic, Dalibor
1990-01-01
This report presents the work performed in porting the Fault Injection-based Automated Testing (FIAT) and Programming and Instrumentation Environments (PIE) validation tools to the Advanced Information Processing System (AIPS) in the context of the Ada Language System (ALS) application, as well as an initial fault-free validation of the available AIPS system. The PIE components implemented on AIPS provide the monitoring mechanisms required for validation. These mechanisms represent a substantial portion of the FIAT system and are required for the implementation of the FIAT environment on AIPS. Using these components, an initial fault-free validation of the AIPS system was performed. The implementation of the FIAT/PIE system, configured for fault-free validation of the AIPS fault-tolerant computer system, is described. The PIE components were modified to support the Ada language. Special-purpose AIPS/Ada runtime monitoring and data collection was implemented. A number of initial Ada programs running on the PIE/AIPS system were implemented. The instrumentation of the Ada programs was accomplished automatically within the PIE programming environment. PIE's on-line graphical views show vividly and accurately the performance characteristics of the Ada programs, the AIPS kernel, and the application's interaction with the AIPS kernel. The data collection mechanisms were written in a high-level language, Ada, and provide a high degree of flexibility for implementation under various system conditions.
Simulation-based assessment to identify critical gaps in safe anesthesia resident performance.
Blum, Richard H; Boulet, John R; Cooper, Jeffrey B; Muret-Wagstaff, Sharon L
2014-01-01
Valid methods are needed to identify anesthesia resident performance gaps early in training. However, many assessment tools in medicine have not been properly validated. The authors designed and tested use of a behaviorally anchored scale, as part of a multiscenario simulation-based assessment system, to identify high- and low-performing residents with regard to domains of greatest concern to expert anesthesiology faculty. An expert faculty panel derived five key behavioral domains of interest by using a Delphi process: (1) Synthesizes information to formulate a clear anesthetic plan; (2) Implements a plan based on changing conditions; (3) Demonstrates effective interpersonal and communication skills with patients and staff; (4) Identifies ways to improve performance; and (5) Recognizes own limits. Seven simulation scenarios spanning pre- to postoperative encounters were used to assess the performances of 22 first-year residents and 8 fellows from two institutions. Two of 10 trained faculty raters blinded to trainee program and training level scored each performance independently by using a behaviorally anchored rating scale. Residents, fellows, facilitators, and raters completed surveys. Evidence supporting the reliability and validity of the assessment scores was obtained, including a high generalizability coefficient (ρ = 0.81) and expected performance differences between first-year resident and fellow participants. A majority of trainees, facilitators, and raters judged the assessment to be useful, realistic, and representative of critical skills required for safe practice. The study provides initial evidence to support the validity of a simulation-based performance assessment system for identifying critical gaps in safe anesthesia resident performance early in training.
Höner, Oliver; Votteler, Andreas; Schmid, Markus; Schultz, Florian; Roth, Klaus
2015-01-01
The utilisation of motor performance tests for talent identification in youth sports is discussed intensively in talent research. This article examines the reliability, differential stability and validity of the motor diagnostics conducted nationwide by the German football talent identification and development programme and provides reference values for a standardised interpretation of the diagnostics results. Highly selected players (the top 4% of their age groups, U12-U15) took part in the diagnostics at 17 measurement points between spring 2004 and spring 2012 (N = 68,158). The heterogeneous test battery measured speed abilities and football-specific technical skills (sprint, agility, dribbling, ball control, shooting, juggling). For all measurement points, the overall score and the speed tests showed high internal consistency, high test-retest reliability and satisfying differential stability. The diagnostics demonstrated satisfying factorial-related validity with plausible and stable loadings on the two empirical factors "speed" and "technical skills". The score, and the technical skills dribbling and juggling, differentiated the most among players of different performance levels and thus showed the highest criterion-related validity. Satisfactory psychometric properties for the diagnostics are an important prerequisite for a scientifically sound rating of players' actual motor performance and for the future examination of the prognostic validity for success in adulthood.
Implementation and application of an interactive user-friendly validation software for RADIANCE
NASA Astrophysics Data System (ADS)
Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.
2012-02-01
RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.
Validation of the breast evaluation questionnaire for breast hypertrophy and breast reduction.
Lewin, Richard; Elander, Anna; Lundberg, Jonas; Hansson, Emma; Thorarinsson, Andri; Claudelin, Malin; Bladh, Helena; Lidén, Mattias
2018-06-13
There is a lack of published, validated questionnaires for evaluating psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. The aim was to validate the breast evaluation questionnaire (BEQ), originally developed for the assessment of breast augmentation patients, for the assessment of psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. Design: Validation study. Subjects: Women with macromastia. Methods: The validation of the BEQ, adapted to breast reduction, was performed in several steps. Content validity, reliability, construct validity and responsiveness were assessed. The original version was adjusted according to the results for content validity, which resulted in item reduction and a modified BEQ (mBEQ) that was then assessed for reliability, construct validity and responsiveness. Internal and external validation was performed for the modified BEQ. Convergent validity was tested against the BREAST-Q (reduction module) and discriminant validity was tested against the SF-36. Known-groups validation revealed significant differences between the normal population and patients undergoing breast reduction surgery. The BEQ showed good reliability in test-retest analysis and high responsiveness. The modified BEQ may be a reliable, valid and responsive instrument for assessing women who undergo breast reduction.
Validity Evidence for ACT Compass® Placement Tests. ACT Research Report Series 2014 (2)
ERIC Educational Resources Information Center
Westrick, Paul A.; Allen, Jeff
2014-01-01
We examined the validity of using Compass® test scores and high school grade point average (GPA) for placing students in first-year college courses and for identifying students at risk of not succeeding. Consistent with other research, the combination of high school GPA and Compass scores performed better than either measure used alone. Results…
A diagnostic model for chronic hypersensitivity pneumonitis
Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R
2017-01-01
The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist’s diagnostic impression. Candidate models were developed and then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779
Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.
Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L
2015-07-01
Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless system for the objective assessment of surgical performance has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater, with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
Wu, Liqing; Takatsu, Akiko; Park, Sang-Ryoul; Yang, Bin; Yang, Huaxin; Kinumi, Tomoya; Wang, Jing; Bi, Jiaming; Wang, Yang
2015-04-01
This article concerns the development and co-validation of a porcine insulin (pINS) certified reference material (CRM) produced by the National Institute of Metrology, People's Republic of China. Each CRM unit contained about 15 mg of purified solid pINS. The moisture content, amount of ignition residue, molecular mass, and purity of the pINS were measured. Both high-performance liquid chromatography-isotope dilution mass spectrometry and a purity deduction method were used to determine the mass fraction of the pINS. Fifteen units were selected to study the between-bottle homogeneity, and no inhomogeneity was observed. A stability study concluded that the CRM was stable for at least 12 months at -20 °C. The certified value of the CRM was (0.892 ± 0.036) g/g. A co-validation of the CRM was performed among Chinese, Japanese, and Korean laboratories under the framework of the Asian Collaboration on Reference Materials. The co-validation results agreed well with the certified value of the CRM. Consequently, the pINS CRM may be used as a calibration material or as a validation standard for pharmaceutical purposes to improve the quality of pharmaceutical products.
Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z
2017-04-18
When developing a prediction model for survival data, it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination, since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and is recommended for routine reporting. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics of the validation data, such as the level of censoring and the distribution of the prognostic index derived in the validation setting, before choosing the performance measures.
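As a hedged sketch of two of the recommended quantities, the code below estimates Harrell's concordance on validation data and the calibration slope (the Cox coefficient of the externally derived prognostic index refitted in the validation set). It assumes the lifelines package is available and uses synthetic data; it is not the authors' implementation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)
n = 500
pi = rng.normal(size=n)                                  # prognostic index from the developed model
event_time = rng.exponential(scale=np.exp(-0.8 * pi))    # higher index -> shorter survival
censor_time = rng.exponential(scale=2.0, size=n)
duration = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

# Harrell's C: negate the index so that larger values correspond to longer predicted survival
c_harrell = concordance_index(duration, -pi, event)

# Calibration slope: Cox coefficient of the prognostic index refitted on the validation data
df = pd.DataFrame({"duration": duration, "event": event, "pi": pi})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
calibration_slope = cph.params_["pi"]

print(f"Harrell's C = {c_harrell:.3f}, calibration slope = {calibration_slope:.2f}")
```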
DOT National Transportation Integrated Search
2017-05-01
The main objective of this research is to develop and validate the behavior of a new class of environmentally friendly and cost-effective high-performance concrete (HPC), referred to herein as Eco-HPC. The proposed project aimed at developing two cla...
Expression signature as a biomarker for prenatal diagnosis of trisomy 21.
Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut
2013-01-01
A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcomes does not exist. Transcriptome analysis is a powerful tool to capture differentially expressed genes (DEG), which can be used as a biomarker, diagnostic and predictive tool for various conditions in the prenatal setting. In search of a biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with a normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies in the GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification in an independent sample of 16 cases with Ts21 and 32 controls. The classification of Ts21 status based on expression profiling was performed using a supervised machine learning algorithm and evaluated using a leave-one-out cross-validation approach. Global gene expression profiling revealed significant expression changes between normal and Ts21 samples, which, in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21 comprising 9 gene expression profiles. In addition to the biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on the larger sample set 2, validated using an RT-PCR experiment (AUC=0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may thus have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.
O'Connor, Peter; Nguyen, Jessica; Anglim, Jeromy
2017-01-01
In this study, we investigated the validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF; Petrides, 2009) in the context of task-induced stress. We used a total sample of 225 volunteers to investigate (a) the incremental validity of the TEIQue-SF over other predictors of coping with task-induced stress, and (b) the construct validity of the TEIQue-SF by examining the mechanisms via which scores from the TEIQue-SF predict coping outcomes. Results demonstrated that the TEIQue-SF possessed incremental validity over the Big Five personality traits in the prediction of emotion-focused coping. Results also provided support for the construct validity of the TEIQue-SF by demonstrating that this measure predicted adaptive coping via emotion-focused channels. Specifically, results showed that, following a task stressor, the TEIQue-SF predicted low negative affect and high task performance via high levels of emotion-focused coping. Consistent with the purported theoretical nature of the trait emotional intelligence (EI) construct, trait EI as assessed by the TEIQue-SF primarily enhances affect and performance in stressful situations by regulating negative emotions.
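The incremental-validity analysis described above can be illustrated by comparing the variance explained by the Big Five alone with that of a model that adds trait EI; the sketch below uses synthetic scores and a simple Delta R-squared, which is an assumption about the analysis rather than a reproduction of it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 225
big_five = rng.normal(size=(n, 5))                           # N, E, O, A, C scores
trait_ei = -0.4 * big_five[:, 0] + rng.normal(size=n)        # correlates with low neuroticism
coping = 0.3 * trait_ei + 0.2 * big_five[:, 1] + rng.normal(size=n)   # emotion-focused coping

r2_base = LinearRegression().fit(big_five, coping).score(big_five, coping)
X_full = np.column_stack([big_five, trait_ei])
r2_full = LinearRegression().fit(X_full, coping).score(X_full, coping)

print(f"Delta R^2 for trait EI over the Big Five: {r2_full - r2_base:.3f}")
```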
The teamwork in assertive community treatment (TACT) scale: development and validation.
Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F
2012-11-01
Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.
Yi, Ming; Zhao, Yongmei; Jia, Li; He, Mei; Kebebew, Electron; Stephens, Robert M.
2014-01-01
To apply exome-seq-derived variants in the clinical setting, there is an urgent need to identify the best variant caller(s) from a large collection of available options. We have used an Illumina exome-seq dataset as a benchmark, with two validation scenarios (family pedigree information and SNP array data for the same samples, permitting global high-throughput cross-validation) to evaluate the quality of SNP calls derived from several popular variant discovery tools from both the open-source and commercial communities using a set of designated quality metrics. To the best of our knowledge, this is the first large-scale performance comparison of exome-seq variant discovery tools using high-throughput validation with both Mendelian inheritance checking and SNP array data, which allows us to gain insight into the accuracy of SNP calling in an unprecedented way; previously reported comparison studies have only assessed concordance between these tools without directly assessing the quality of the derived SNPs. More importantly, the main purpose of our study was to establish a reusable procedure that applies high-throughput validation to compare the quality of SNP discovery tools, with a focus on exome-seq, which can be used to compare any forthcoming tool(s) of interest. PMID:24831545
Adderley, N J; Mallett, S; Marshall, T; Ghosh, S; Rayman, G; Bellary, S; Coleman, J; Akiboye, F; Toulis, K A; Nirantharakumar, K
2018-06-01
To temporally and externally validate our previously developed prediction model, which used data from University Hospitals Birmingham to identify inpatients with diabetes at high risk of adverse outcome (mortality or excessive length of stay), in order to demonstrate its applicability to other hospital populations within the UK. Temporal validation was performed using data from University Hospitals Birmingham and external validation was performed using data from both the Heart of England NHS Foundation Trust and Ipswich Hospital. All adult inpatients with diabetes were included. Variables included in the model were age, gender, ethnicity, admission type, intensive therapy unit admission, insulin therapy, albumin, sodium, potassium, haemoglobin, C-reactive protein, estimated GFR and neutrophil count. Adverse outcome was defined as excessive length of stay or death. Model discrimination in the temporal and external validation datasets was good. In temporal validation using data from University Hospitals Birmingham, the area under the curve was 0.797 (95% CI 0.785-0.810), sensitivity was 70% (95% CI 67-72) and specificity was 75% (95% CI 74-76). In external validation using data from Heart of England NHS Foundation Trust, the area under the curve was 0.758 (95% CI 0.747-0.768), sensitivity was 73% (95% CI 71-74) and specificity was 66% (95% CI 65-67). In external validation using data from Ipswich, the area under the curve was 0.736 (95% CI 0.711-0.761), sensitivity was 63% (95% CI 59-68) and specificity was 69% (95% CI 67-72). These results were similar to those for the internally validated model derived from University Hospitals Birmingham. The prediction model to identify patients with diabetes at high risk of developing an adverse event while in hospital performed well in temporal and external validation. The externally validated prediction model is a novel tool that can be used to improve care pathways for inpatients with diabetes. Further research to assess clinical utility is needed. © 2018 Diabetes UK.
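As an informal illustration of the validation metrics reported above (not the authors' code or data), the sketch below evaluates a pre-computed risk score on an external cohort, reporting the area under the curve together with sensitivity and specificity at an assumed risk threshold.

```python
# Hypothetical external-validation sketch: `risk` holds predicted risks from an
# already-fitted model, `outcome` the observed adverse outcomes in an external cohort;
# the 0.4 threshold and the simulated data are assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def external_validation(risk, outcome, threshold):
    auc = roc_auc_score(outcome, risk)
    tn, fp, fn, tp = confusion_matrix(outcome, risk >= threshold).ravel()
    return auc, tp / (tp + fn), tn / (tn + fp)   # AUC, sensitivity, specificity

rng = np.random.default_rng(1)
outcome = rng.integers(0, 2, size=500)                         # 1 = adverse outcome
risk = np.clip(0.3 * outcome + 0.7 * rng.random(500), 0, 1)    # toy predicted risks
auc, sens, spec = external_validation(risk, outcome, threshold=0.4)
print(f"AUC={auc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```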
PERFORMANCE OF OVID MEDLINE SEARCH FILTERS TO IDENTIFY HEALTH STATE UTILITY STUDIES.
Arber, Mick; Garcia, Sonia; Veale, Thomas; Edwards, Mary; Shaw, Alison; Glanville, Julie M
2017-01-01
This study was designed to assess the sensitivity of three Ovid MEDLINE search filters developed to identify studies reporting health state utility values (HSUVs), to improve the performance of the best performing filter, and to validate resulting search filters. Three quasi-gold standard sets (QGS1, QGS2, QGS3) of relevant studies were harvested from reviews of studies reporting HSUVs. The performance of three initial filters was assessed by measuring their relative recall of studies in QGS1. The best performing filter was then developed further using QGS2. This resulted in three final search filters (FSF1, FSF2, and FSF3), which were validated using QGS3. FSF1 (sensitivity maximizing) retrieved 132/139 records (sensitivity: 95 percent) in the QGS3 validation set. FSF1 had a number needed to read (NNR) of 842. FSF2 (balancing sensitivity and precision) retrieved 128/139 records (sensitivity: 92 percent) with a NNR of 502. FSF3 (precision maximizing) retrieved 123/139 records (sensitivity: 88 percent) with a NNR of 383. We have developed and validated a search filter (FSF1) to identify studies reporting HSUVs with high sensitivity (95 percent) and two other search filters (FSF2 and FSF3) with reasonably high sensitivity (92 percent and 88 percent) but greater precision, resulting in a lower NNR. These seem to be the first validated filters available for HSUVs. The availability of filters with a range of sensitivity and precision options enables researchers to choose the filter which is most appropriate to the resources available for their specific research.
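For readers unfamiliar with the filter metrics, the quick arithmetic below shows how sensitivity and number needed to read (NNR) relate to the counts reported for FSF1; the total number of retrieved records is back-calculated from the published NNR and is therefore only implied, not stated in the abstract.

```python
# Reported FSF1 figures as a worked example: sensitivity is the proportion of relevant
# validation-set records retrieved; NNR (number needed to read) is the reciprocal of
# precision. The total-retrieved figure below is implied, not published.
relevant_retrieved = 132
relevant_total = 139
nnr_reported = 842

sensitivity = relevant_retrieved / relevant_total            # ~0.95
total_retrieved = nnr_reported * relevant_retrieved          # implied total records screened
precision = relevant_retrieved / total_retrieved
print(f"sensitivity = {sensitivity:.2f}, NNR = {1 / precision:.0f}")
```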
Simard, Marc; Sirois, Caroline; Candas, Bernard
2018-05-01
To validate and compare the performance of an International Classification of Diseases, tenth revision (ICD-10) version of a combined comorbidity index merging conditions of the Charlson and Elixhauser measures against the individual measures in the prediction of 30-day mortality. To select a weight derivation method providing optimal performance across the ICD-9 and ICD-10 coding systems. Using 2 adult population-based cohorts of patients with hospital admissions coded in ICD-9 (2005, n=337,367) and ICD-10 (2011, n=348,820), we validated the combined comorbidity index by predicting 30-day mortality with logistic regression. To assess the performance of the Combined index and both individual measures, factors affecting index performance, such as population characteristics and weight derivation methods, were accounted for. We applied 3 scoring methods (Van Walraven, Schneeweiss, and Charlson) and determined which provides the best predictive values. The Combined index [c-statistic: 0.853 (95% confidence interval [CI], 0.848-0.856)] performed better than the original Charlson [0.841 (95% CI, 0.835-0.844)] or Elixhauser [0.841 (95% CI, 0.837-0.844)] measures on the ICD-10 cohort. All weight derivation methods provided similarly high discrimination for the Combined index (Van Walraven: 0.852, Schneeweiss: 0.851, Charlson: 0.849). Results were consistent across both coding systems. The Combined index remains valid with both ICD-9 and ICD-10 coding systems, and the 3 weight derivation methods evaluated provided consistently high performance across those coding systems.
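A minimal sketch of the general approach described above, under assumptions: invented condition weights and simulated data stand in for the published Van Walraven/Schneeweiss/Charlson values. Binary diagnosis flags are collapsed into a weighted score, which is then related to 30-day mortality by logistic regression and summarized with a c-statistic (equivalent to the ROC AUC).

```python
# Generic sketch, not the published index: sum placeholder condition weights over binary
# diagnosis flags to form a combined comorbidity score, then fit logistic regression
# for 30-day mortality and report the c-statistic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

weights = {"chf": 7, "metastatic_cancer": 12, "renal_disease": 5}   # placeholder weights
rng = np.random.default_rng(2)
flags = rng.integers(0, 2, size=(1000, len(weights)))               # 0/1 diagnosis flags
score = flags @ np.array(list(weights.values()))                    # weighted comorbidity score
death30 = rng.integers(0, 2, size=1000)                             # toy outcome

model = LogisticRegression().fit(score.reshape(-1, 1), death30)
c_stat = roc_auc_score(death30, model.predict_proba(score.reshape(-1, 1))[:, 1])
print("c-statistic:", round(c_stat, 3))
```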
USDA-ARS?s Scientific Manuscript database
An easy and reliable high-throughput analysis method was developed and validated for 192 diverse pesticides and 51 environmental contaminants (13 PCB congeners, 14 PAHs, 7 PBDE congeners, and 17 novel flame retardants) in cattle, swine, and poultry muscle. Sample preparation was based on the “quick,...
Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan
2017-07-01
The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA), an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425) and the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.
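For context, inter-rater agreement of the kind reported for the CTPA is commonly expressed as a two-way random-effects intraclass correlation; the sketch below implements the Shrout and Fleiss ICC(2,1) formula on made-up rating data and is not the authors' analysis code.

```python
# Generic two-way random-effects, absolute-agreement, single-rater ICC
# (Shrout & Fleiss ICC(2,1)) on made-up rating data.
import numpy as np

def icc_2_1(ratings):
    r = np.asarray(ratings, dtype=float)        # shape: (subjects, raters)
    n, k = r.shape
    grand = r.mean()
    ss_rows = k * ((r.mean(axis=1) - grand) ** 2).sum()    # between-subject
    ss_cols = n * ((r.mean(axis=0) - grand) ** 2).sum()    # between-rater
    ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

scores = [[18, 19], [25, 24], [30, 31], [12, 12], [22, 23]]   # toy totals from 2 raters
print("ICC(2,1):", round(icc_2_1(scores), 3))
```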
Discrete tyre model application for evaluation of vehicle limit handling performance
NASA Astrophysics Data System (ADS)
Siramdasu, Y.; Taheri, S.
2016-11-01
The goal of this study is twofold: first, to understand the transient and nonlinear effects of anti-lock braking systems (ABS), road undulations and driving dynamics on the lateral performance of the tyre and, second, to develop objective handling manoeuvres and respective metrics to characterise these effects on vehicle behaviour. For studying the transient and nonlinear handling performance of the vehicle, variations in the relaxation length of the tyre and in the tyre inertial properties play significant roles [Pacejka HB. Tire and vehicle dynamics. 3rd ed. Butterworth-Heinemann; 2012]. Accurately simulating these nonlinear effects during high-frequency vehicle dynamic manoeuvres requires a high-frequency dynamic tyre model (? Hz). A 6 DOF dynamic tyre model integrated with an enveloping model is developed and validated using fixed-axle high-speed oblique cleat experimental data. The commercially available vehicle dynamics software CarSim® is used for vehicle simulation. The vehicle model was validated by comparing simulation results with experimental sinusoidal steering tests. The validated tyre model is then integrated with the vehicle model and a commercial-grade rule-based ABS model to perform various objective simulations. Two test scenarios are considered: ABS braking in a turn on a smooth road, and accelerating in a turn on uneven and smooth roads. Both test cases reiterated that, while the tyre is operating in the nonlinear region of slip or slip angle, any road disturbance or high-frequency brake torque input variation can excite the inertial belt vibrations of the tyre. It is shown that these inertial vibrations can directly affect the developed performance metrics and potentially degrade the handling performance of the vehicle.
The Construction and Validation of an Abridged Version of the Autism-Spectrum Quotient (AQ-Short)
ERIC Educational Resources Information Center
Hoekstra, Rosa A.; Vinkhuyzen, Anna A. E.; Wheelwright, Sally; Bartels, Meike; Boomsma, Dorret I.; Baron-Cohen, Simon; Posthuma, Danielle; van der Sluis, Sophie
2011-01-01
This study reports on the development and validation of an abridged version of the 50-item Autism-Spectrum Quotient (AQ), a self-report measure of autistic traits. We aimed to reduce the number of items whilst retaining high validity and a meaningful factor structure. The item reduction procedure was performed on data from 1,263 Dutch students and…
USDA-ARS?s Scientific Manuscript database
This work describes the development and validation of a method for the simultaneous determination of 13 estrogens and progestogens in honey by high performance liquid chromatography-tandem mass spectrometry. The target compounds were preconcentrated by solid phase extraction. Pretreatment variables ...
Suhr, Julie A; Berry, David T R
2017-12-01
Invalid self-report and invalid performance occur with high base rates in attention deficit/hyperactivity disorder (ADHD; Harrison, 2006; Musso & Gouvier, 2014). Although much research has focused on the development and validation of symptom validity tests (SVTs) and performance validity tests (PVTs) for psychiatric and neurological presentations, less attention has been given to the use of SVTs and PVTs in ADHD evaluation. This introduction to the special section describes a series of studies examining the use of SVTs and PVTs in adult ADHD evaluation. We present the series of studies in the context of prior research on noncredible presentation and call for future research using improved research methods and with a focus on assessment issues specific to ADHD evaluation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Narducci, Robert; Orr, Stanley; Kreeger, Richard E.
2012-01-01
An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.
Evacuation performance evaluation tool.
Farra, Sharon; Miller, Elaine T; Gneuhs, Matthew; Timm, Nathan; Li, Gengxin; Simon, Ashley; Brady, Whittney
2016-01-01
Hospitals conduct evacuation exercises to improve performance during emergency events. An essential aspect of this process is the creation of reliable and valid evaluation tools. The objective of this article is to describe the development and implications of a disaster evacuation performance tool that measures one portion of the very complex process of evacuation. Through the application of the Delphi technique and DeVellis's framework, disaster and neonatal experts provided input in developing this performance evaluation tool. Following development, the content validity and reliability of the tool were assessed. The setting was a large pediatric hospital and medical center in the Midwest. The tool was pilot tested with an administrative, medical, and nursing leadership group and then implemented with a group of 68 healthcare workers during a disaster exercise of a neonatal intensive care unit (NICU). The tool demonstrated high content validity, with a scale validity index of 0.979 and an inter-rater reliability G coefficient (0.984, 95% CI: 0.948-0.9952). The Delphi process based on the conceptual framework of DeVellis yielded a psychometrically sound evacuation performance evaluation tool for a NICU.
aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data
Niedworok, Christian J.; Brown, Alexander P. Y.; Jorge Cardoso, M.; Osten, Pavel; Ourselin, Sebastien; Modat, Marc; Margrie, Troy W.
2016-01-01
The validation of automated image registration and segmentation is crucial for accurate and reliable mapping of brain connectivity and function in three-dimensional (3D) data sets. While validation standards are necessarily high and routinely met in the clinical arena, they have to date been lacking for high-resolution microscopy data sets obtained from the rodent brain. Here we present a tool for optimized automated mouse atlas propagation (aMAP) based on clinical registration software (NiftyReg) for anatomical segmentation of high-resolution 3D fluorescence images of the adult mouse brain. We empirically evaluate aMAP as a method for registration and subsequent segmentation by validating it against the performance of expert human raters. This study therefore establishes a benchmark standard for mapping the molecular function and cellular connectivity of the rodent brain. PMID:27384127
Turusheva, Anna; Frolova, Elena; Bert, Vaes; Hegendoerfer, Eralda; Degryse, Jean-Marie
2017-07-01
Prediction models help to make decisions about further management in clinical practice. This study aims to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validations. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over a 2.5-year follow-up were determined based on results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistics, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, internal validation and external validation were performed. Older adults were at higher risk for mortality [HR (95% CI)=4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When anemia combined with high C-reactive protein (CRP) and high B-type natriuretic peptide (BNP) was added, the HR (95% CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults aged 80+. The extended model had a better predictive capacity for cardiovascular mortality [HR (95% CI)=5.05 (2.23-11.44)] compared with the baseline model [HR (95% CI)=2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies are needed to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J
2018-01-01
The aim of this study was to develop and describe a wheelchair mobility performance test for wheelchair basketball and to assess its construct validity and reliability. To mimic the mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice to determine its reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test differed by classification, playing standard and sex. Intraclass correlation coefficients (ICC) were calculated to quantify the reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). The difference in performance time between low (≤2.5) and high (≥3.0) classification players was borderline non-significant, with a moderate ES (P = 0.06, ES = 0.58). Reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described development methodology is recommended for use in other sports to develop sport-specific tests.
Jans, Marielle P; Slootweg, Vera C; Boot, Cecile R; de Morton, Natalie A; van der Sluis, Geert; van Meeteren, Nico L
2011-11-01
To examine the reproducibility, construct validity, and unidimensionality of the Dutch translation of the de Morton Mobility Index (DEMMI), a performance-based measure of mobility for older patients. Cross-sectional study. Rehabilitation center (reproducibility study) and hospital (validity study). Patients (N=28; age >65y) after orthopedic surgery (reproducibility study) and patients (N=219; age >65y) waiting for total hip or total knee arthroplasty (validity study). The intraclass correlation coefficient for interrater reliability was high (.85; 95% confidence interval, .71-.93), and the minimal detectable change with 90% confidence was 7 points on the 100-point DEMMI scale. Rasch analysis identified that the Dutch translation of the DEMMI is a unidimensional measure of mobility in this population. DEMMI scores showed high correlations with scores on other performance-based measures of mobility (Timed Up and Go test, Spearman r=-.73; Chair Rise Time, r=-.69; walking test, r=.74). A lower correlation of .44 was identified with the self-report Western Ontario and McMaster Universities Osteoarthritis Index. The Dutch translation of the DEMMI is a reproducible and valid performance-based measure for assessing mobility in older patients with knee or hip osteoarthritis. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Balfanz, Robert; Legters, Nettie; West, Thomas C.; Weber, Lisa M.
2007-01-01
This article examines the extent to which adequate yearly progress (AYP) is a valid and reliable indicator of improvement in low-performing high schools. For a random subsample of 202 high schools, the authors investigate the school characteristics and the federal and state policy contexts that influence their AYP status. Logistic regression…
NASA Technical Reports Server (NTRS)
Foster, John V.; Ross, Holly M.; Ashley, Patrick A.
1993-01-01
Designers of the next-generation fighter and attack airplanes are faced with the requirements of good high-angle-of-attack maneuverability as well as efficient high speed cruise capability with low radar cross section (RCS) characteristics. As a result, they are challenged with the task of making critical design trades to achieve the desired levels of maneuverability and performance. This task has highlighted the need for comprehensive, flight-validated lateral-directional control power design guidelines for high angles of attack. A joint NASA/U.S. Navy study has been initiated to address this need and to investigate the complex flight dynamics characteristics and controls requirements for high-angle-of-attack lateral-directional maneuvering. A multi-year research program is underway which includes ground-based piloted simulation and flight validation. This paper will give a status update of this program that will include a program overview, description of test methodology and preliminary results.
Dahlke, Jeffrey A; Kostal, Jack W; Sackett, Paul R; Kuncel, Nathan R
2018-05-03
We explore potential explanations for validity degradation using a unique predictive validation data set containing up to four consecutive years of high school students' cognitive test scores and four complete years of those students' college grades. This data set permits analyses that disentangle the effects of predictor-score age and timing of criterion measurements on validity degradation. We investigate the extent to which validity degradation is explained by criterion dynamism versus the limited shelf-life of ability scores. We also explore whether validity degradation is attributable to fluctuations in criterion variability over time and/or GPA contamination from individual differences in course-taking patterns. Analyses of multiyear predictor data suggest that changes to the determinants of performance over time have much stronger effects on validity degradation than does the shelf-life of cognitive test scores. The age of predictor scores had only a modest relationship with criterion-related validity when the criterion measurement occasion was held constant. Practical implications and recommendations for future research are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian
2017-06-01
There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Validity and reliability of a novel measure of activity performance and participation.
Murgatroyd, Phil; Karimi, Leila
2016-01-01
To develop and evaluate an innovative clinician-rated measure, which produces global numerical ratings of activity performance and participation. Repeated measures study with 48 community-dwelling participants investigating clinical sensibility, comprehensiveness, practicality, inter-rater reliability, responsiveness, sensitivity and concurrent validity with Barthel Index. Important clinimetric characteristics including comprehensiveness and ease of use were rated >8/10 by clinicians. Inter-rater reliability was excellent on the summary scores (intraclass correlation of 0.95-0.98). There was good evidence that the new outcome measure distinguished between known high and low functional scoring groups, including both responsiveness to change and sensitivity at the same time point in numerous tests. Concurrent validity with the Barthel Index was fair to high (Spearman Rank Order Correlation 0.32-0.85, p > 0.05). The new measure's summary scores were nearly twice as responsive to change compared with the Barthel Index. Other more detailed data could also be generated by the new measure. The Activity Performance Measure is an innovative outcome instrument that showed good clinimetric qualities in this initial study. Some of the results were strong, given the sample size, and further trial and evaluation is appropriate. Implications for Rehabilitation The Activity Performance Measure is an innovative outcome measure covering activity performance and participation. In an initial evaluation, it showed good clinimetric qualities including responsiveness to change, sensitivity, practicality, clinical sensibility, item coverage, inter-rater reliability and concurrent validity with the Barthel Index. Further trial and evaluation is appropriate.
Validation of the Information/Communications Technology Literacy Test
2016-10-01
Incremental validity analyses were conducted for job knowledge/performance criteria by MOS, including the Systems Operator-Analyst (25B) and Nodal Network Systems Operator-Maintainer (25N) MOS. This report documents technical procedures and results of the research effort. Results suggest that the ICTL test has potential as a valid and highly efficient predictor of valued outcomes in Signal school MOS.
USDA-ARS?s Scientific Manuscript database
A high-throughput qualitative screening and identification method for 9 aminoglycosides of regulatory interest has been developed, validated, and implemented for bovine kidney, liver, and muscle tissues. The method involves extraction at previously validated conditions, cleanup using disposable pip...
Assessing Meritorious Teacher Performance: A Differential Validity Study.
ERIC Educational Resources Information Center
Ellett, Chad D; Capie, William
The Teacher Assessment and Development System (TADS) - Meritorious Teacher Program (MTP) FORM instrument is used in the Dade County Public Schools, Miami, Florida, to evaluate teachers. Its validity for decisions concerning merit pay for master teachers was examined in this study. Specifically, its ability to discriminate between high performing…
Evaluation of coarse scale land surface remote sensing albedo product over rugged terrain
NASA Astrophysics Data System (ADS)
Wen, J.; Xinwen, L.; You, D.; Dou, B.
2017-12-01
Satellite-derived land surface albedo is an essential climate variable that controls the Earth's energy budget and can be used in applications such as climate change, hydrology, and numerical weather prediction. The accuracy and uncertainty of surface albedo products should be evaluated against reliable reference truth data prior to such applications. Most previous studies have investigated validation methods for albedo over flat or homogeneous surfaces; the performance of albedo products over rugged terrain remains unknown because suitable validation methods are lacking. A multi-stage validation strategy is implemented here to provide a comprehensive albedo validation, involving high-resolution albedo processing, validation of the high-resolution albedo against in situ albedo, and a method to upscale the high-resolution albedo to the coarse scale. Among these, high-resolution albedo generation and the upscaling method are the core steps for coarse-scale albedo validation. In this paper, the high-resolution albedo is generated with the Angular Bin algorithm, and an albedo upscaling method for rugged terrain is developed to obtain the coarse-scale albedo truth. In situ albedo measurements from 40 mountain sites selected globally are used to validate the high-resolution albedo, which is then upscaled to the coarse scale using the upscaling method. Taking the MODIS and GLASS albedo products as examples, preliminary results show that the RMSEs of the MODIS and GLASS albedo products over rugged terrain are 0.047 and 0.057, respectively, while the RMSE of the high-resolution albedo is 0.036.
Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon
2017-06-01
The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to determine the effects of applying the new programme on nurses' acceptance. The study used a simulation-based learning pre- and post-design to evaluate the effects of implementing the programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were analysed using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated using a content validity index, and evaluator reliability was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the time-out for high-risk invasive procedures (P<.01). Furthermore, improvement was consistent regardless of clinical experience, workplace, or prior experience with simulation-based learning. The face validity of the programme scored over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.
ERIC Educational Resources Information Center
Islam, M. Mazharul; Al-Ghassani, Asma
2015-01-01
The objective of this study was to evaluate the performance of students of college of Science of Sultan Qaboos University (SQU) in Calculus I course, and examine the predictive validity of student's high school performance and gender for Calculus I success. The data for the study was extracted from students' database maintained by the Deanship of…
Flight Test 4 Preliminary Results: NASA Ames SSI
NASA Technical Reports Server (NTRS)
Isaacson, Doug; Gong, Chester; Reardon, Scott; Santiago, Confesor
2016-01-01
Realization of the expected proliferation of Unmanned Aircraft System (UAS) operations in the National Airspace System (NAS) depends on the development and validation of performance standards for UAS Detect and Avoid (DAA) systems. RTCA Special Committee 228 is charged with leading the development of draft Minimum Operational Performance Standards (MOPS) for UAS DAA systems. NASA, as a participating member of RTCA SC-228, is committed to supporting the development and validation of draft requirements as well as the safety substantiation and end-to-end assessment of DAA system performance. The Unmanned Aircraft System (UAS) Integration into the National Airspace System (NAS) Project conducted a flight test program, referred to as Flight Test 4, at Armstrong Flight Research Center from April to June 2016. Part of the test flights was dedicated to the NASA Ames-developed DAA system referred to as JADEM (Java Architecture for DAA Extensibility and Modeling). The encounter scenarios, which involved NASA's Ikhana UAS and a manned intruder aircraft, were designed to collect data on DAA system performance in real-world conditions and uncertainties with four different surveillance sensor systems. Flight Test 4 had four objectives: (1) validate DAA requirements in stressing cases that drive MOPS requirements, including a high-speed cooperative intruder, a low-speed non-cooperative intruder, a high vertical closure rate encounter, and a Mode C/S-only intruder (i.e., without ADS-B); (2) validate the TCAS/DAA alerting and guidance interoperability concept in the presence of realistic sensor, tracking and navigational errors and in multiple-intruder encounters against both cooperative and non-cooperative intruders; (3) validate Well Clear Recovery guidance in the presence of realistic sensor, tracking and navigational errors; and (4) validate DAA alerting and guidance requirements in the presence of realistic sensor, tracking and navigational errors. The results will be presented at RTCA Special Committee 228 in support of final verification and validation of the DAA MOPS.
Dietary Effects on Cognition and Pilots' Flight Performance.
Lindseth, Glenda N; Lindseth, Paul D; Jensen, Warren C; Petros, Thomas V; Helland, Brian D; Fossum, Debra L
2011-01-01
The purpose of this study was to investigate the effects of diet on the cognition and flight performance of 45 pilots. Based on a theory of self-care, this clinical study used a repeated-measures, counterbalanced crossover design. Pilots were randomly rotated through 4-day high-carbohydrate, high-protein, high-fat, and control diets. Cognitive flight performance was evaluated using a GAT-2 full-motion flight simulator. The Sternberg short-term memory test and Vandenberg's mental rotation test were used to validate cognitive flight test results. Pilots consuming a high-protein diet had significantly poorer (p < .05) overall flight performance scores than pilots consuming the high-fat and high-carbohydrate diets.
Löfmark, Anna; Mårtensson, Gunilla
2017-03-01
The aim of the present study was to establish the validity of the Assessment of Clinical Education (AssCE) tool, which is widely used in Sweden and some other Nordic countries for assessing nursing students' performance in clinical education. It is important that tools in use be subjected to regular audit and critical review. The validation process, performed in two stages, concluded with a high level of congruence. In the first stage, the Delphi technique was used to elaborate the AssCE tool with a group of 35 clinical nurse lecturers; consensus was reached after three rounds. In the second stage, a group of 46 clinical nurse lecturers representing 12 universities in Sweden and Norway audited the revised version of the AssCE in relation to learning outcomes from the final clinical course at their respective institutions. Validation of the revised AssCE was established, with high congruence between the factors in the AssCE and the examined learning outcomes. The revised AssCE tool appears to meet its objective as a validated assessment tool for use in clinical nursing education. Copyright © 2016 Elsevier Ltd. All rights reserved.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
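A toy illustration of the contrast the authors draw between total-runtime percent error and trace-level statistical comparison, using assumed synthetic event durations rather than the authors' toolset: the aggregate runtime error can be small while the distributions of per-event statistics clearly differ, which a distribution-level test exposes.

```python
# Synthetic contrast between coarse- and fine-grained validation: total-time percent
# error is nearly zero here, yet a Kolmogorov-Smirnov test on per-event durations
# flags the mismatch between "measured" and "simulated" traces.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
real_events = rng.lognormal(mean=1.000, sigma=0.5, size=5000)   # measured event durations
sim_events = rng.lognormal(mean=1.105, sigma=0.2, size=5000)    # simulated event durations

pct_error = abs(sim_events.sum() - real_events.sum()) / real_events.sum() * 100
stat, p = ks_2samp(real_events, sim_events)
print(f"total-time percent error: {pct_error:.1f}%, KS statistic: {stat:.3f} (p={p:.2g})")
```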
Coran, Silvia A; Mulas, Stefano; Mulinacci, Nadia
2012-01-13
A new HPTLC method was envisaged to determine rosmarinic acid (RA) in different matrices with the aim of testing the influence of optimizing the main HPTLC operative parameters in view of a more stringent validation process. HPTLC LiChrospher silica gel 60 F254s, 20 cm × 10 cm, plates with toluene:ethyl formate:formic acid (6:4:1, v/v) as the mobile phase were used. Densitometric determinations were performed in reflectance mode at 330 nm. The method was validated giving rise to a dependable and high throughput procedure well suited to routine applications. RA was quantified in the range of 132-660 ng with RSD of repeatability and intermediate precision not exceeding 2.0% and accuracy within the acceptance limits. The method was tested on several commercial preparations containing RA in different amounts. Copyright © 2011 Elsevier B.V. All rights reserved.
Demonstration of automated proximity and docking technologies
NASA Astrophysics Data System (ADS)
Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.
An automated docking capability (autodock) was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image-processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined, covering sensors, docking hardware, propulsion, and avionics. The Guidance, Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions, including configuration, resource, and redundancy management, are defined. The requirements for an autonomous spacecraft executive are defined; high-level decision making, mission planning, and mission contingency recovery are a part of this. The next step is to perform flight demonstrations. After the presentation, the following question was asked: How do you define validation? There are two components to the definition of validation: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against an analytical profile.
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Cousans, Fran; Patterson, Fiona; Edwards, Helena; Walker, Kim; McLachlan, John C; Good, David
2017-05-01
Although there is extensive evidence confirming the predictive validity of situational judgement tests (SJTs) in medical education, there remains a shortage of evidence for their predictive validity for performance of postgraduate trainees in their first role in clinical practice. Moreover, to date few researchers have empirically examined the complementary roles of academic and non-academic selection methods in predicting in-role performance. This is an important area of enquiry as despite it being common practice to use both types of methods within a selection system, there is currently no evidence that this approach translates into increased predictive validity of the selection system as a whole, over that achieved by the use of a single selection method. In this preliminary study, the majority of the range of scores achieved by successful applicants to the UK Foundation Programme provided a unique opportunity to address both of these areas of enquiry. Sampling targeted high (>80th percentile) and low (<20th percentile) scorers on the SJT. Supervisors rated 391 trainees' in-role performance, and incidence of remedial action was collected. SJT and academic performance scores correlated with supervisor ratings (r = .31 and .28, respectively). The relationship was stronger between the SJT and in-role performance for the low scoring group (r = .33, high scoring group r = .11), and between academic performance and in-role performance for the high scoring group (r = .29, low scoring group r = .11). Trainees with low SJT scores were almost five times more likely to receive remedial action. Results indicate that an SJT for entry into trainee physicians' first role in clinical practice has good predictive validity of supervisor-rated performance and incidence of remedial action. In addition, an SJT and a measure of academic performance appeared to be complementary to each other. These initial findings suggest that SJTs may be more predictive at the lower end of a scoring distribution, and academic attainment more predictive at the higher end.
Vincent, Mary Anne; Sheriff, Susan; Mellott, Susan
2015-02-01
High-fidelity simulation has become a growing educational modality among institutions of higher learning ever since the Institute of Medicine recommended that it be used to improve patient safety in 2000. However, there is limited research on the effect of high-fidelity simulation on psychomotor clinical performance improvement of undergraduate nursing students being evaluated by experts using reliable and valid appraisal instruments. The purpose of this integrative review and meta-analysis is to explore what researchers have established about the impact of high-fidelity simulation on improving the psychomotor clinical performance of undergraduate nursing students. Only eight of the 1120 references met inclusion criteria. A meta-analysis using Hedges' g to compute the effect size and direction of impact yielded a range of -0.26 to +3.39. A positive effect was shown in seven of eight studies; however, there were five different research designs and six unique appraisal instruments used among these studies. More research is necessary to determine if high-fidelity simulation improves psychomotor clinical performance in undergraduate nursing students. Nursing programs from multiple sites having a standardized curriculum and using the same appraisal instruments with established reliability and validity are ideal for this work.
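For reference, Hedges' g used in the meta-analysis above is Cohen's d (mean difference over pooled standard deviation) multiplied by a small-sample bias correction; the short sketch below uses made-up group summaries, not data from the review.

```python
# Hedges' g = Cohen's d times a small-sample correction factor; illustration values only.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample bias correction
    return d * correction

# hypothetical simulation group vs. control group on a clinical performance score
print(round(hedges_g(m1=78.0, sd1=8.0, n1=30, m2=72.0, sd2=9.0, n2=30), 2))   # ~0.70
```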
Casartelli, Nicola; Müller, Roland; Maffiuletti, Nicola A
2010-11-01
The aim of the present study was to verify the validity and reliability of the Myotest accelerometric system (Myotest SA, Sion, Switzerland) for the assessment of vertical jump height. Forty-four male basketball players (age range: 9-25 years) performed a series of squat, countermovement and repeated jumps during 2 identical test sessions separated by 2-15 days. Flight height was simultaneously quantified with the Myotest system and validated photoelectric cells (Optojump). Two calculation methods were used to estimate the jump height from Myotest recordings: flight time (Myotest-T) and vertical takeoff velocity (Myotest-V). Concurrent validity was investigated by comparing Myotest-T and Myotest-V to the criterion method (Optojump), and test-retest reliability was also examined. As regards validity, Myotest-T overestimated jumping height compared to Optojump (p < 0.001) with a systematic bias of approximately 7 cm, even though random errors were low (2.7 cm) and intraclass correlation coefficients (ICCs) were high (>0.98), that is, excellent validity. Myotest-V overestimated jumping height compared to Optojump (p < 0.001), with high random errors (>12 cm), high limits of agreement ratios (>36%), and low ICCs (<0.75), that is, poor validity. As regards reliability, Myotest-T showed high ICCs (range: 0.92-0.96), whereas Myotest-V showed low ICCs (range: 0.56-0.89), and high random errors (>9 cm). In conclusion, Myotest-T is a valid and reliable method for the assessment of vertical jump height, and its use is legitimate for field-based evaluations, whereas Myotest-V is neither valid nor reliable.
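The two estimation routes rest on standard projectile-motion relations: height from flight time, h = g t^2 / 8, and height from vertical takeoff velocity, h = v^2 / (2g). A minimal numerical sketch with illustrative values:

```python
# Standard projectile-motion relations underlying flight-time and takeoff-velocity
# estimates of jump height (illustrative numbers only).
G = 9.81  # m/s^2

def height_from_flight_time(t_flight_s):
    return G * t_flight_s**2 / 8

def height_from_takeoff_velocity(v0_ms):
    return v0_ms**2 / (2 * G)

print(round(height_from_flight_time(0.55), 3))       # ~0.371 m for a 0.55 s flight time
print(round(height_from_takeoff_velocity(2.70), 3))  # ~0.372 m for a 2.7 m/s takeoff velocity
```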
Robert, Christelle; Brasseur, Pierre-Yves; Dubois, Michel; Delahaut, Philippe; Gillard, Nathalie
2016-08-01
A new multi-residue method for the analysis of veterinary drugs, namely amoxicillin, chlortetracycline, colistins A and B, doxycycline, fenbendazole, flubendazole, ivermectin, lincomycin, oxytetracycline, sulfadiazine, tiamulin, tilmicosin and trimethoprim, was developed and validated for feed. After acidic extraction, the samples were centrifuged, purified by SPE and analysed by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry. Quantitative validation was done in accordance with the guidelines laid down in European Commission Decision 2002/657/CE. Matrix-matched calibration with internal standards was used to reduce matrix effects. The target level was set at the authorised carryover level (1%) and validation levels were set at 0.5%, 1% and 1.5%. Method performance was evaluated using the following parameters: linearity (0.986 < R(2) < 0.999), precision (repeatability < 12.4% and reproducibility < 14.0%), accuracy (89% < recovery < 107%), sensitivity, decision limit (CCα), detection capability (CCβ), selectivity and expanded measurement uncertainty (k = 2). This method has been used successfully for three years for the routine monitoring of antibiotic residues in feed, during which period 20% of samples were found to exceed the 1% authorised carryover limit and were deemed non-compliant.
Nadkarni, Lindsay D; Roskind, Cindy G; Auerbach, Marc A; Calhoun, Aaron W; Adler, Mark D; Kessler, David O
2018-04-01
The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.
Aldous, Jeffrey W F; Akubat, Ibrahim; Chrismas, Bryna C R; Watkins, Samuel L; Mauger, Alexis R; Midgley, Adrian W; Abt, Grant; Taylor, Lee
2014-07-01
This study investigated the reliability and validity of a novel nonmotorised treadmill (NMT)-based soccer simulation using a novel activity category called a "variable run" to quantify fatigue during high-speed running. Twelve male University soccer players completed 3 familiarization sessions and 1 peak speed assessment before completing the intermittent soccer performance test (iSPT) twice. The 2 iSPTs were separated by 6-10 days. The total distance, sprint distance, and high-speed running distance (HSD) were 8,968 ± 430 m, 980 ± 75 m and 2,122 ± 140 m, respectively. No significant difference (p > 0.05) was found between repeated trials of the iSPT for all physiological and performance variables. Reliability measures between iSPT1 and iSPT2 showed good agreement (coefficient of variation: <4.6%; intraclass correlation coefficient: >0.80). Furthermore, the variable run phase showed HSD significantly decreased (p ≤ 0.05) in the last 15 minutes (89 ± 6 m) compared with the first 15 minutes (85 ± 7 m), quantifying decrements in high-speed exercise compared with the previous literature. This study validates the iSPT as a NMT-based soccer simulation compared with the previous match-play data and is a reliable tool for assessing and monitoring physiological and performance variables in soccer players. The iSPT could be used in a number of ways including player rehabilitation, understanding the efficacy of nutritional interventions, and also the quantification of environmentally mediated decrements on soccer-specific performance.
Validity Evidence for Games as Assessment Environments. CRESST Report 773
ERIC Educational Resources Information Center
Delacruz, Girlie C.; Chung, Gregory K. W. K.; Baker, Eva L.
2010-01-01
This study provides empirical evidence of a highly specific use of games in education--the assessment of the learner. Linear regressions were used to examine the predictive and convergent validity of a math game as assessment of mathematical understanding. Results indicate that prior knowledge significantly predicts game performance. Results also…
Participation in Occupational Performance: Reliability and Validity of the Activity Card Sort.
ERIC Educational Resources Information Center
Katz, Noomi; Karpin, Hanah; Lak, Arit; Furman, Tania; Hartman-Maeir, Adina
2003-01-01
A study assessed the reliability and validity of the Activity Card Sort (ACS) within different adult groups (n=263): healthy adults, healthy older adults, Alzheimer's caregivers, multiple sclerosis patients, and stroke survivors. Found that the ACS had high internal consistency for daily living and social-cultural activities and a lower…
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Wu, Bing-Hong
2012-01-01
This study explored the reliability and validity of teacher assessment under a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students perform portfolio creation, inspection, self- and peer-assessment using the Web-based…
Rothenhöfer, Martin; Scherübl, Rosmarie; Bernhardt, Günther; Heilmann, Jörg; Buschauer, Armin
2012-07-27
Purified oligomers of hyalobiuronic acid are indispensable tools to elucidate the physiological and pathophysiological role of hyaluronan degradation by various hyaluronidase isoenzymes. Therefore, we established and validated a novel sensitive, convenient, rapid, and cost-effective high performance thin layer chromatography (HPTLC) method for the qualitative and quantitative analysis of small saturated hyaluronan oligosaccharides consisting of 2-4 hyalobiuronic acid moieties. The use of amino-modified silica as stationary phase allows a simple reagent-free in situ derivatization by heating, resulting in a very low limit of detection (7-19 pmol per band, depending on the analyzed saturated oligosaccharide). This derivatization procedure enabled, for the first time, densitometric quantification of the analytes by HPTLC. The validated method showed a quantification limit of 37-71 pmol per band and proved superior to conventional detection of hyaluronan oligosaccharides. The analytes were identified by hyphenation of normal phase planar chromatography to mass spectrometry (TLC-MS) using electrospray ionization. As an alternative to sequential techniques such as high performance liquid chromatography (HPLC) and capillary electrophoresis (CE), the validated HPTLC quantification method can easily be automated and is applicable to the analysis of multiple samples in parallel. Copyright © 2012 Elsevier B.V. All rights reserved.
Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías
2016-07-01
An ultra high performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel to determine the contribution of analytical method error in the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R-program was used as a commercial software alternative, a fast efficient tool for data analysis. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions by the Monte Carlo simulation method. Second, a pre-study validation procedure was performed in accordance with the International Conference on Harmonization guidelines. The validated method was applied for the determination of uniformity of dosage units and the reasons for variability (inhomogeneity and the analytical method error) were analyzed based on the overall uncertainty. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A high power ion thruster for deep space missions
NASA Astrophysics Data System (ADS)
Polk, James E.; Goebel, Dan M.; Snyder, John S.; Schneider, Analyn C.; Johnson, Lee K.; Sengupta, Anita
2012-07-01
The Nuclear Electric Xenon Ion System ion thruster was developed for potential outer planet robotic missions using nuclear electric propulsion (NEP). This engine was designed to operate at power levels ranging from 13 to 28 kW at specific impulses of 6000-8500 s and for burn times of up to 10 years. State-of-the-art performance and life assessment tools were used to design the thruster, which featured 57-cm-diameter carbon-carbon composite grids operating at voltages of 3.5-6.5 kV. Preliminary validation of the thruster performance was accomplished with a laboratory model thruster, while in parallel, a flight-like development model (DM) thruster was completed and two DM thrusters fabricated. The first thruster completed full performance testing and a 2000-h wear test. The second successfully completed vibration tests at the full protoflight levels defined for this NEP program and then passed performance validation testing. The thruster design, performance, and the experimental validation of the design tools are discussed in this paper.
Friend, Margaret; Keplinger, Melanie
2017-01-01
Early language comprehension may be one of the most important predictors of developmental risk. The need for performance-based assessment is predicated on limitations identified in the exclusive use of parent report and on the need for a performance measure with which to assess the convergent validity of parent report of comprehension. Child performance data require the development of procedures to facilitate infant attention and compliance. Forty infants (20 at 1;4 and 20 at 1;8) acquiring English completed a standard picture book task and the same task was administered on a touch-sensitive screen. The computerized task significantly improved task attention, compliance and performance. Reliability was high, indicating that infants were not responding randomly. Convergent validity with parent report and 4-month stability was substantial. Preliminary data extending this approach to Mexican-Spanish are presented. Results are discussed in terms of the promise of this technique for clinical and research settings and the potential influences of cultural factors on performance. PMID:18300430
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. Further models should be developed with internal and external validation in low and middle income countries.
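The review pools two external-validation AUCs of the same developed model with a random-effects meta-analysis. A minimal sketch of one common approach (DerSimonian-Laird pooling, with each study's variance recovered from its reported 95% CI) is given below; the AUC values and intervals are placeholders, not figures from the included studies.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of AUC estimates,
# with each study's variance recovered from its reported 95% CI.
# The AUCs and CIs below are placeholders, not data from this review.
import numpy as np

aucs = np.array([0.77, 0.79])                     # hypothetical validation AUCs
ci_low = np.array([0.69, 0.72])
ci_high = np.array([0.85, 0.86])

var = ((ci_high - ci_low) / (2 * 1.96)) ** 2      # SE from CI width, then squared
w = 1 / var                                       # fixed-effect weights
y_fixed = np.sum(w * aucs) / np.sum(w)
q = np.sum(w * (aucs - y_fixed) ** 2)             # Cochran's Q
k = len(aucs)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1 / (var + tau2)                         # random-effects weights
pooled = np.sum(w_star * aucs) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled AUC = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```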
Dietary Effects on Cognition and Pilots’ Flight Performance
Lindseth, Glenda N.; Lindseth, Paul D.; Jensen, Warren C.; Petros, Thomas V.; Helland, Brian D.; Fossum, Debra L.
2017-01-01
The purpose of this study was to investigate the effects of diet on cognition and flight performance of 45 pilots. Based on a theory of self-care, this clinical study used a repeated-measure, counterbalanced crossover design. Pilots were randomly rotated through 4-day high-carbohydrate, high-protein, high-fat, and control diets. Cognitive flight performance was evaluated using a GAT-2 full-motion flight simulator. The Sternberg short-term memory test and Vandenberg’s mental rotation test were used to validate cognitive flight test results. Pilots consuming a high-protein diet had significantly poorer (p < .05) overall flight performance scores than pilots consuming high-fat and high-carbohydrate diets. PMID:29353985
NASA Astrophysics Data System (ADS)
Serevina, V.; Muliyati, D.
2018-05-01
This research aims to develop a valid and reliable scientific-approach-based performance assessment instrument for assessing student performance in the basic physics laboratory on Simple Harmonic Motion (SHM). The study uses the ADDIE model, consisting of the stages Analyze, Design, Development, Implementation, and Evaluation. The performance assessment developed can be used to measure students' skills in observing, questioning, conducting experiments, associating, and communicating experimental results, i.e. the '5M' stages of the scientific approach. Each assessment item in the instrument was validated by an instrument expert, with all items judged eligible for use (100% eligibility). The instrument was then evaluated for construction, material, and language quality by a panel of lecturers, with results of 85% (very good) for construction, 87.5% (very good) for material, and 83% (very good) for language. The small-group trial yielded an instrument reliability of 0.878 (high category, r-table = 0.707), and the large-group trial yielded a reliability of 0.889 (high category, r-table = 0.320). The instrument was declared valid and reliable at the 5% significance level. Based on these results, it can be concluded that the scientific-approach-based student performance assessment instrument is valid and reliable for assessing student skills in SHM experimental activities.
Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros
2015-11-01
A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed for the purification process of oleuropein with fast centrifugal partition chromatography, replacing high-performance liquid-chromatography, in the stage of the determination of the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficients, KC = CS/CM. The total concentrations of the target compound in the stationary phase (CS) and in the mobile phase (CM) were calculated by the area measured in the high-performance thin-layer chromatogram. The estimated Kc was also used for the calculation of the retention volume, VR, with a chromatographic retention equation. The obtained data were successfully applied for the purification of oleuropein and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography could be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used for the estimation of its content in each extract with a simple, sensitive and accurate high-performance thin-layer chromatography method. The best fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial and the quantification was achieved by UV detection at λ 240 nm. The method was validated giving rise to an efficient and high-throughput procedure, with the relative standard deviation % of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection. The amount of oleuropein for OLE_A, OLE_AA, and an aqueous extract of olive leaves was estimated to be 35.5% ± 2.7, 51.5% ± 1.4, and 12.5% ± 0.12, respectively. Statistical analysis proved that the method is repeatable and selective, and can be effectively applied for the estimation of oleuropein in olive leaves' extracts, and could potentially replace high-performance liquid chromatography methodologies developed so far. Thus, the phytochemical investigation of oleuropein could be based on high-performance thin-layer chromatography coupled with separation processes, such as fast centrifugal partition chromatography, showing efficacy and credibility. Georg Thieme Verlag KG Stuttgart · New York.
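As an illustration of the quantification step described above, the sketch below fits a polynomial calibration curve over the 1.0-6.0 µg/track range and back-calculates an unknown from its peak area. The calibration areas and the unknown are invented values, not the published data.

```python
# Minimal sketch: polynomial calibration (amount per track vs. densitometric
# peak area) and back-calculation of an unknown. Calibration data are invented.
import numpy as np

amount = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])             # µg/track standards
area = np.array([1150., 2210., 3180., 4060., 4850., 5560.])   # hypothetical peak areas

coef = np.polyfit(amount, area, deg=2)                         # second-order fit
pred = np.polyval(coef, amount)
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")

# Back-calculate an unknown by inverting the fitted curve on a dense grid
unknown_area = 3600.0
grid = np.linspace(amount.min(), amount.max(), 10001)
estimate = grid[np.argmin(np.abs(np.polyval(coef, grid) - unknown_area))]
print(f"estimated amount ≈ {estimate:.2f} µg/track")
```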
ERIC Educational Resources Information Center
Westrick, Paul A.; Le, Huy; Robbins, Steven B.; Radunzel, Justine M. R.; Schmidt, Frank L.
2015-01-01
This meta-analysis examines the strength of the relationships of ACT® Composite scores, high school grades, and socioeconomic status (SES) with academic performance and persistence into the 2nd and 3rd years at 4-year colleges and universities. Based upon a sample of 189,612 students at 50 institutions, ACT Composite scores and high school grade…
Vieira, Gisele de Lacerda Chaves; Pagano, Adriana Silvino; Reis, Ilka Afonso; Rodrigues, Júlia Santos Nunes; Torres, Heloísa de Carvalho
2018-01-01
Objective: to perform the translation, adaptation and validation of the Diabetes Attitudes Scale - third version instrument into Brazilian Portuguese. Methods: methodological study carried out in six stages: initial translation, synthesis of the initial translation, back-translation, evaluation of the translated version by the Committee of Judges (27 Linguists and 29 health professionals), pre-test and validation. The pre-test and validation (test-retest) steps included 22 and 120 health professionals, respectively. The Content Validity Index, the analyses of internal consistency and reproducibility were performed using the R statistical program. Results: in the content validation, the instrument presented good acceptance among the Judges with a mean Content Validity Index of 0.94. The scale presented acceptable internal consistency (Cronbach’s alpha = 0.60), while the correlation of the total score at the test and retest moments was considered high (Polychoric Correlation Coefficient = 0.86). The Intra-class Correlation Coefficient, for the total score, presented a value of 0.65. Conclusion: the Brazilian version of the instrument (Escala de Atitudes dos Profissionais em relação ao Diabetes Mellitus) was considered valid and reliable for application by health professionals in Brazil. PMID:29319739
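Two of the statistics reported above, the Content Validity Index and Cronbach's alpha, have simple closed forms. The sketch below shows how they could be computed from judge ratings and an item-response matrix; the simulated data and the helper names (item_cvi, cronbach_alpha) are illustrative assumptions, not the study's R code.

```python
# Minimal sketch: item-level Content Validity Index and Cronbach's alpha,
# computed on simulated ratings/responses (not the DAS-3 validation data).
import numpy as np

def item_cvi(ratings, relevant_min=3):
    """I-CVI: proportion of judges rating the item as relevant (>= relevant_min on a 1-4 scale)."""
    ratings = np.asarray(ratings)
    return float(np.mean(ratings >= relevant_min))

def cronbach_alpha(x):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix (sample variances)."""
    x = np.asarray(x, dtype=float)
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
judge_ratings = rng.integers(2, 5, size=56)          # 56 judges, 1-4 relevance scale (simulated)
latent = rng.normal(0, 1, (120, 1))                  # 120 respondents
items = latent + rng.normal(0, 1.5, (120, 14))       # 14 Likert-like items (simulated)

print(f"I-CVI = {item_cvi(judge_ratings):.2f}")
print(f"alpha = {cronbach_alpha(items):.2f}")
```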
Hay, Ashley; Migliacci, Jocelyn; Zanoni, Daniella Karassawa; Patel, Snehal; Yu, Changhong; Kattan, Michael W; Ganly, Ian
2018-05-01
The purpose of this study was to investigate the performance of the Memorial Sloan Kettering Cancer Center salivary carcinoma nomograms predicting overall survival, cancer-specific survival, and recurrence with an external validation dataset. The validation dataset comprised 123 patients treated between 2010 and 2015 at our institution. They were evaluated by assessing discrimination (concordance index [C-index]) and calibration (plotting predicted vs actual probabilities for quintiles). The validation cohort (n = 123) showed some differences to the original cohort (n = 301). The validation cohort had less high-grade cancers (P = .006), less lymphovascular invasion (LVI; P < .001) and shorter follow-up of 19 months versus 45.6 months. Validation showed a C-index of 0.833 (95% confidence interval [CI] 0.758-0.908), 0.807 (95% CI 0.717-0.898), and 0.844 (95% CI 0.768-0.920) for overall survival, cancer-specific survival, and recurrence, respectively. The 3 salivary gland nomograms performed well using a contemporary validation dataset, despite limitations related to sample size, follow-up, and differences in clinical and pathology characteristics between the original and validation cohorts. © 2018 Wiley Periodicals, Inc.
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, Johb C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. A collaboration in this project includes work by NASA research engineers, whereas CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
Morssinkhof, M L A; Wang, O; James, L; van der Heide, H J L; Winson, I G
2013-09-01
Many existing scoring systems assess ankle function, but there is no evidence that any of them has been validated in a group of patients with a higher demand on their ankle function. Problems include ceiling effects, an inability to detect change, and the absence of a sports subscale. The aim of this study was to create a validated self-administered scoring system for ankle injuries in the higher performing athlete. First, 26 patients were interviewed to solicit opinions needed to create the final score, which is modified from the Foot and Ankle Outcome Score (FAOS). Second, SAFAS was validated in a group of 25 athletes with and 14 athletes without ankle injury. It is a self-administered region-specific sports foot and ankle score that contains four subscales assessing the levels of symptoms, pain, daily living and sports. The Spearman correlation coefficients between SAFAS and the Foot and Ankle Ability Measure (FAAM) ranged from 0.78 to 0.88. Content validity is established by key informant interviews, expert opinions and a high satisfaction rate of 75%. Cronbach's alpha indicated good internal consistency of each subscale ranging from 0.77 to 0.92. SAFAS has shown good evidence for being a valid instrument for assessing sports-related ankle injuries in high-performing athletes. Copyright © 2013 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
The development and testing of a skin tear risk assessment tool.
Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A
2017-02-01
The aim of the present study is to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity and therefore have inadequate predictive validity. Secondary analysis of the combined data from this and the previous case control study identified an alternative better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
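Predictive validity of a risk tool is typically summarized by ROC analysis plus sensitivity and specificity at a chosen cut-off. The sketch below illustrates this on simulated data and shows how a low cut-off yields high sensitivity at the cost of specificity, the pattern reported above; the event rate, score distribution, and cut-off are assumptions.

```python
# Minimal sketch: ROC analysis of a screening/risk score with simulated data,
# reporting AUC plus sensitivity and specificity at one illustrative cut-off.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1466
develops_tear = rng.random(n) < 0.05                       # assumed event rate
risk_score = rng.normal(0, 1, n) + 0.8 * develops_tear     # weakly informative score

auc = roc_auc_score(develops_tear, risk_score)

cutoff = -0.5                                              # deliberately low cut-off
flagged = risk_score >= cutoff
sens = np.mean(flagged[develops_tear])                     # true positive rate
spec = np.mean(~flagged[~develops_tear])                   # true negative rate
print(f"AUC = {auc:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```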
Guise, Brian J; Thompson, Matthew D; Greve, Kevin W; Bianchini, Kevin J; West, Laura
2014-03-01
The current study assessed performance validity on the Stroop Color and Word Test (Stroop) in mild traumatic brain injury (TBI) using criterion-groups validation. The sample consisted of 77 patients with a reported history of mild TBI. Data from 42 moderate-severe TBI and 75 non-head-injured patients with other clinical diagnoses were also examined. TBI patients were categorized on the basis of Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND). Classification accuracy is reported for three indicators (Word, Color, and Color-Word residual raw scores) from the Stroop across a range of injury severities. With false-positive rates set at approximately 5%, sensitivity was as high as 29%. The clinical implications of these findings are discussed. © 2012 The British Psychological Society.
An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction
NASA Technical Reports Server (NTRS)
1982-01-01
Research in human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit; the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. An organized data base and validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without such a data base and validated methodology for analyzing human performance, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and prediction of performance.
Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat
2013-01-01
Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied on the patient cohort and then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. The ACC plans to continue its partnership to further improve and develop valuable clinical models. PMID:24224068
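A minimal sketch of the derivation-validation workflow described above (fit a logistic model on one split, then assess discrimination with the C-statistic and calibration on a held-out split) is shown below. The simulated features, coefficients, and event rate are assumptions and do not represent the ACC models.

```python
# Minimal sketch: derive a logistic readmission model on one split, then check
# discrimination (C-statistic) and calibration on a held-out split.
# Features and event rate are simulated, not Advocate/Cerner data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)
n = 20000
X = rng.normal(size=(n, 6))                          # e.g., age, prior admits, LOS, ...
logit = -2.2 + X @ np.array([0.5, 0.4, 0.3, 0.2, 0.1, 0.0])
y = rng.random(n) < 1 / (1 + np.exp(-logit))         # 30-day readmission indicator

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]

print(f"C-statistic (validation) = {roc_auc_score(y_val, p_val):.2f}")
obs, pred = calibration_curve(y_val, p_val, n_bins=10, strategy="quantile")
for o, p in zip(obs, pred):
    print(f"predicted {p:.2f}  observed {o:.2f}")
```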
Hidau, Mahendra Kumar; Kolluru, Srikanth; Palakurthi, Srinath
2018-02-01
A sensitive and selective RP-HPLC method has been developed and validated for the quantification of a highly potent poly ADP ribose polymerase inhibitor talazoparib (TZP) in rat plasma. Chromatographic separation was performed with isocratic elution method. Absorbance for TZP was measured with a UV detector (SPD-20A UV-vis) at a λ max of 227 nm. Protein precipitation was used to extract the drug from plasma samples using methanol-acetonitrile (65:35) as the precipitating solvent. The method proved to be sensitive and reproducible over a 100-2000 ng/mL linearity range with a lower limit of quantification (LLQC) of 100 ng/mL. TZP recovery was found to be >85%. Following analytical method development and validation, it was successfully employed to determine the plasma protein binding of TZP. TZP has a high level of protein binding in rat plasma (95.76 ± 0.38%) as determined by dialysis method. Copyright © 2017 John Wiley & Sons, Ltd.
A diagnostic model for chronic hypersensitivity pneumonitis.
Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R
2016-10-01
The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist's diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Anaperta, M.; Helendra, H.; Zulva, R.
2018-04-01
This study aims to describe the validity of a character-values-oriented physics module using process skills for dynamic electricity material in high school (SMA/MA) and vocational school (SMK) physics. The research is development research. Module development follows the model proposed by Plomp, which consists of (1) a preliminary research phase, (2) a prototyping phase, and (3) an assessment phase. This study covers the preliminary investigation and design phases. Data on validity were collected through observation and questionnaires. In the preliminary investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. In the design phase, the module was designed and realized for SMA/MA and SMK subjects on dynamic electricity material. This was followed by formative evaluation, including self-evaluation and prototyping (expert reviews, one-to-one, and small-group evaluation), at which stage the validity assessment was performed. The research data were obtained through the module validation sheet, which resulted in a module declared valid.
A Tissue Systems Pathology Assay for High-Risk Barrett's Esophagus.
Critchley-Thorne, Rebecca J; Duits, Lucas C; Prichard, Jeffrey W; Davison, Jon M; Jobe, Blair A; Campbell, Bruce B; Zhang, Yi; Repa, Kathleen A; Reese, Lia M; Li, Jinhong; Diehl, David L; Jhala, Nirag C; Ginsberg, Gregory; DeMarshall, Maureen; Foxwell, Tyler; Zaidi, Ali H; Lansing Taylor, D; Rustgi, Anil K; Bergman, Jacques J G H M; Falk, Gary W
2016-06-01
Better methods are needed to predict risk of progression for Barrett's esophagus. We aimed to determine whether a tissue systems pathology approach could predict progression in patients with nondysplastic Barrett's esophagus, indefinite for dysplasia, or low-grade dysplasia. We performed a nested case-control study to develop and validate a test that predicts progression of Barrett's esophagus to high-grade dysplasia (HGD) or esophageal adenocarcinoma (EAC), based upon quantification of epithelial and stromal variables in baseline biopsies. Data were collected from Barrett's esophagus patients at four institutions. Patients who progressed to HGD or EAC in ≥1 year (n = 79) were matched with patients who did not progress (n = 287). Biopsies were assigned randomly to training or validation sets. Immunofluorescence analyses were performed for 14 biomarkers and quantitative biomarker and morphometric features were analyzed. Prognostic features were selected in the training set and combined into classifiers. The top-performing classifier was assessed in the validation set. A 3-tier, 15-feature classifier was selected in the training set and tested in the validation set. The classifier stratified patients into low-, intermediate-, and high-risk classes [HR, 9.42; 95% confidence interval, 4.6-19.24 (high-risk vs. low-risk); P < 0.0001]. It also provided independent prognostic information that outperformed predictions based on pathology analysis, segment length, age, sex, or p53 overexpression. We developed a tissue systems pathology test that better predicts risk of progression in Barrett's esophagus than clinicopathologic variables. The test has the potential to improve upon histologic analysis as an objective method to risk stratify Barrett's esophagus patients. Cancer Epidemiol Biomarkers Prev; 25(6); 958-68. ©2016 AACR. ©2016 American Association for Cancer Research.
Validation results of satellite mock-up capturing experiment using nets
NASA Astrophysics Data System (ADS)
Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil
2017-05-01
The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine accurately the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly configured according to the parabolic flight scenario, and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
ERIC Educational Resources Information Center
Woodburn, Jim; Sutcliffe, Nick
1996-01-01
The Objective Structured Clinical Examination (OSCE), initially developed for undergraduate medical education, has been adapted for assessment of clinical skills in podiatry students. A 12-month pilot study found the test had relatively low levels of reliability, high construct and criterion validity, and good stability of performance over time.…
Benchmark radar targets for the validation of computational electromagnetics programs
NASA Technical Reports Server (NTRS)
Woo, Alex C.; Wang, Helen T. G.; Schuh, Michael J.; Sanders, Michael L.
1993-01-01
Results are presented of a set of computational electromagnetics validation measurements referring to three-dimensional perfectly conducting smooth targets, performed for the Electromagnetic Code Consortium. Plots are presented for both the low- and high-frequency measurements of the NASA almond, an ogive, a double ogive, a cone-sphere, and a cone-sphere with a gap.
An FMRI-compatible Symbol Search task.
Liebel, Spencer W; Clark, Uraina S; Xu, Xiaomeng; Riskin-Jones, Hannah H; Hawkshead, Brittany E; Schwarz, Nicolette F; Labbe, Donald; Jerskey, Beth A; Sweet, Lawrence H
2015-03-01
Our objective was to determine whether a Symbol Search paradigm developed for functional magnetic resonance imaging (FMRI) is a reliable and valid measure of cognitive processing speed (CPS) in healthy older adults. As all older adults are expected to experience cognitive declines due to aging, and CPS is one of the domains most affected by age, establishing a reliable and valid measure of CPS that can be administered inside an MR scanner may prove invaluable in future clinical and research settings. We evaluated the reliability and construct validity of a newly developed FMRI Symbol Search task by comparing participants' performance in and outside of the scanner and to the widely used and standardized Symbol Search subtest of the Wechsler Adult Intelligence Scale (WAIS). A brief battery of neuropsychological measures was also administered to assess the convergent and discriminant validity of the FMRI Symbol Search task. The FMRI Symbol Search task demonstrated high test-retest reliability when compared to performance on the same task administered out of the scanner (r=.791; p<.001). The criterion validity of the new task was supported, as it exhibited a strong positive correlation with the WAIS Symbol Search (r=.717; p<.001). Predicted convergent and discriminant validity patterns of the FMRI Symbol Search task were also observed. The FMRI Symbol Search task is a reliable and valid measure of CPS in healthy older adults and exhibits expected sensitivity to the effects of age on CPS performance.
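Test-retest, criterion, and discriminant validity in a design like this reduce to correlations between paired score vectors. The short sketch below illustrates the comparisons on simulated scores; the sample size, noise levels, and the unrelated "vocabulary" measure are assumptions, not the study's data.

```python
# Minimal sketch: test-retest, criterion, and discriminant validity expressed as
# Pearson correlations between paired scores. All vectors are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 28
true_cps = rng.normal(50, 10, n)                     # latent processing speed
in_scanner = true_cps + rng.normal(0, 4, n)          # FMRI Symbol Search score
out_scanner = true_cps + rng.normal(0, 4, n)         # same task outside the scanner
wais_ss = true_cps + rng.normal(0, 5, n)             # standardized criterion measure
vocabulary = rng.normal(0, 1, n)                     # unrelated (discriminant) measure

for label, other in [("test-retest", out_scanner),
                     ("criterion", wais_ss),
                     ("discriminant", vocabulary)]:
    r, p = pearsonr(in_scanner, other)
    print(f"{label:>12}: r = {r:+.2f}, p = {p:.3f}")
```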
Validation of a school-based amblyopia screening protocol in a kindergarten population.
Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L
2016-08-04
To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, ocular alignment test, ocular motility assessment, and stereoacuity with TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year. A population of 100 children were selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 was obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
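The reported likelihood ratios and predictive values follow directly from the 2x2 screening table. The sketch below uses hypothetical counts chosen only to be consistent with the reported percentages (they are not taken from the paper) to show the arithmetic.

```python
# Minimal sketch: screening-test metrics from a 2x2 table. The counts below are
# hypothetical values consistent with the reported percentages, not study data.
tp, fn = 25, 3        # children with amblyopia: screen-positive / screen-negative
fp, tn = 5, 67        # children without amblyopia

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

print(f"sensitivity {sens:.1%}  specificity {spec:.1%}")
print(f"PPV {ppv:.1%}  NPV {npv:.1%}")
print(f"LR+ {lr_pos:.2f}  LR- {lr_neg:.2f}")
```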
ERIC Educational Resources Information Center
Casem, Remalyn Q.
2016-01-01
This study aimed to determine the effects of flipped instruction on the performance and attitude of high school students in Mathematics. The study made use of the true experimental design, specifically the pretest-posttest control group design. There were two instruments used to gather data, the pretest-posttest which was subjected to validity and…
ERIC Educational Resources Information Center
Frederickson, Edward W.; And Others
The development and evaluation of prototype hands-on equipment, job sample performance tests for a high skilled technical Military Occupational Specialty (MOS) are described. An electronic maintenance MOS (26C20) was used as the research vehicle. The results led to the conclusion that valid and reliable performance tests could be constructed, but…
Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc
2015-01-01
To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of Oncentra Brachy (OcB) treatment planning system using an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against TG-43 methodology. Level 2 uses real-patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed overlapping between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
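The clinical indices compared above (V100, V150, V200, D90, D50, D2cc) can be read directly off a dose-volume histogram. The sketch below computes them from a flat array of voxel doses; the synthetic dose distribution, prescription dose, and voxel volume are assumptions, not data from the ACE/MC comparison.

```python
# Minimal sketch: DVH-style indices from a flat array of voxel doses.
# Dose values, prescription, and voxel volume are invented.
import numpy as np

rng = np.random.default_rng(5)
prescription = 7.0                                    # Gy, assumed prescribed dose
dose = rng.gamma(shape=4.0, scale=2.0, size=200_000)  # Gy per voxel (synthetic)
voxel_cc = 0.001                                      # cm^3 per voxel (assumed)

def v_x(dose, pct):
    """Percent of the volume receiving at least pct% of the prescription."""
    return 100.0 * np.mean(dose >= prescription * pct / 100.0)

def d_x(dose, pct):
    """Minimum dose received by the hottest pct% of the volume."""
    return np.percentile(dose, 100.0 - pct)

def d_cc(dose, cc, voxel_cc):
    """Minimum dose within the hottest `cc` cm^3."""
    n_vox = max(int(round(cc / voxel_cc)), 1)
    return np.sort(dose)[-n_vox]

print(f"V100 = {v_x(dose, 100):.1f}%  V150 = {v_x(dose, 150):.1f}%  V200 = {v_x(dose, 200):.1f}%")
print(f"D90 = {d_x(dose, 90):.2f} Gy  D50 = {d_x(dose, 50):.2f} Gy  D2cc = {d_cc(dose, 2.0, voxel_cc):.2f} Gy")
```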
Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A; Campos, Michael A; Cahalin, Lawrence P
2018-01-01
The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Test-retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test-retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. The TIRE measures of MIP, SMIP and ID have excellent test-retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP.
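The SMIP is described above as the integral of maximal inspiratory pressure over inspiratory duration. The sketch below derives MIP, ID, and SMIP from a synthetic pressure-time trace using the trapezoid rule; the curve shape, duration, and units are assumptions, not PrO2 output.

```python
# Minimal sketch: MIP, ID, and SMIP from a synthetic inspiratory pressure-time
# trace. SMIP is the pressure-time integral (trapezoid rule).
import numpy as np

t = np.linspace(0.0, 8.0, 801)                        # s, assumed inspiratory duration
pressure = 90.0 * (t / 1.5) * np.exp(1 - t / 1.5)     # cmH2O, synthetic effort curve

mip = float(pressure.max())                           # maximal inspiratory pressure
inspiratory_duration = float(t[-1])                   # ID
smip = float(np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(t)))  # integral

print(f"MIP  = {mip:.1f} cmH2O")
print(f"ID   = {inspiratory_duration:.1f} s")
print(f"SMIP = {smip:.0f} cmH2O*s")
```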
Sirimanna, Pramudith; Gladman, Marc A
2017-10-01
Proficiency-based virtual reality (VR) training curricula improve intraoperative performance, but have not been developed for laparoscopic appendicectomy (LA). This study aimed to develop an evidence-based training curriculum for LA. A total of 10 experienced (>50 LAs), eight intermediate (10-30 LAs) and 20 inexperienced (<10 LAs) operators performed guided and unguided LA tasks on a high-fidelity VR simulator using internationally relevant techniques. The ability to differentiate levels of experience (construct validity) was measured using simulator-derived metrics. Learning curves were analysed. Proficiency benchmarks were defined by the performance of the experienced group. Intermediate and experienced participants completed a questionnaire to evaluate the realism (face validity) and relevance (content validity). Of 18 surgeons, 16 (89%) considered the VR model to be visually realistic and 17 (95%) believed that it was representative of actual practice. All 'guided' modules demonstrated construct validity (P < 0.05), with learning curves that plateaued between sessions 6 and 9 (P < 0.01). When comparing inexperienced to intermediates to experienced, the 'unguided' LA module demonstrated construct validity for economy of motion (5.00 versus 7.17 versus 7.84, respectively; P < 0.01) and task time (864.5 s versus 477.2 s versus 352.1 s, respectively, P < 0.01). Construct validity was also confirmed for number of movements, path length and idle time. Validated modules were used for curriculum construction, with proficiency benchmarks used as performance goals. A VR LA model was realistic and representative of actual practice and was validated as a training and assessment tool. Consequently, the first evidence-based internationally applicable training curriculum for LA was constructed, which facilitates skill acquisition to proficiency. © 2017 Royal Australasian College of Surgeons.
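Construct validity here means a simulator metric separates experience groups. One common way to test this (not necessarily the analysis used in the study) is a nonparametric comparison across the three groups, sketched below on simulated economy-of-motion values.

```python
# Minimal sketch: comparing a simulator metric across experience groups with a
# Kruskal-Wallis test. Group sizes match the abstract; the values are simulated.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(6)
inexperienced = rng.normal(5.0, 1.0, 20)
intermediate = rng.normal(7.2, 1.0, 8)
experienced = rng.normal(7.8, 1.0, 10)

h, p = kruskal(inexperienced, intermediate, experienced)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
print("construct validity supported" if p < 0.05 else "no group difference detected")
```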
Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James
2017-08-01
We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring-enhancement on MRI as unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariate- and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.
The analytical validation of the Oncotype DX Recurrence Score assay
Baehner, Frederick L
2016-01-01
In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
Bruner, L H; Carr, G J; Harbell, J W; Curren, R D
2002-06-01
An approach commonly used to measure new toxicity test method (NTM) performance in validation studies is to divide toxicity results into positive and negative classifications, and then identify true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. After this step is completed, the contingent probability statistics (CPS), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) are calculated. Although these statistics are widely used and often the only statistics used to assess the performance of toxicity test methods, there is little specific guidance in the validation literature on what values for these statistics indicate adequate performance. The purpose of this study was to begin developing data-based answers to this question by characterizing the CPS obtained from an NTM whose data have a completely random association with a reference test method (RTM). Determining the CPS of this worst-case scenario is useful because it provides a lower baseline from which the performance of an NTM can be judged in future validation studies. It also provides an indication of relationships in the CPS that help identify random or near-random relationships in the data. The results from this study of randomly associated tests show that the values obtained for the statistics vary significantly depending on the cut-offs chosen, that high values can be obtained for individual statistics, and that the different measures cannot be considered independently when evaluating the performance of an NTM. When the association between results of an NTM and RTM is random, the sum of the complementary pairs of statistics (sensitivity + specificity, NPV + PPV) is approximately 1, and the prevalence (i.e., the proportion of toxic chemicals in the population of chemicals) and PPV are equal. Given that combinations of high sensitivity-low specificity or low sensitivity-high specificity (i.e., the sum of the sensitivity and specificity equal to approximately 1) indicate lack of predictive capacity, an NTM having these performance characteristics should be considered no better for predicting toxicity than by chance alone.
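The paper's central claim, that a randomly associated NTM yields sensitivity + specificity of about 1 and a PPV equal to the prevalence, is easy to reproduce by simulation. The sketch below does so with assumed prevalence and positive-call rates.

```python
# Minimal sketch: contingent probability statistics when a new test method's
# calls are random with respect to the reference method. Under randomness,
# sensitivity + specificity ~= 1 and PPV ~= prevalence.
import numpy as np

rng = np.random.default_rng(7)
n, prevalence, positive_rate = 100_000, 0.3, 0.4
reference_toxic = rng.random(n) < prevalence        # RTM classification
new_test_positive = rng.random(n) < positive_rate   # NTM calls, independent of RTM

tp = np.sum(new_test_positive & reference_toxic)
tn = np.sum(~new_test_positive & ~reference_toxic)
fp = np.sum(new_test_positive & ~reference_toxic)
fn = np.sum(~new_test_positive & reference_toxic)

sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
print(f"sensitivity + specificity = {sens + spec:.2f} (~1 under random association)")
print(f"PPV = {ppv:.2f} vs prevalence = {prevalence:.2f}")
print(f"NPV + PPV = {npv + ppv:.2f} (~1 under random association)")
```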
Experimental Validation of a Closed Brayton Cycle System Transient Simulation
NASA Technical Reports Server (NTRS)
Johnson, Paul K.; Hervol, David S.
2006-01-01
The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, Ohio was used to validate the results of a computational code known as Closed Cycle System Simulation (CCSS). Conversion system thermal transient behavior was the focus of this validation. The BPCU was operated at various steady state points and then subjected to transient changes involving shaft rotational speed and thermal energy input. These conditions were then duplicated in CCSS. Validation of the CCSS BPCU model provides confidence in developing future Brayton power system performance predictions, and helps to guide high power Brayton technology development.
The reliability and validity of fatigue measures during multiple-sprint work: an issue revisited.
Glaister, Mark; Howatson, Glyn; Pattison, John R; McInnes, Gill
2008-09-01
The ability to repeatedly produce a high-power output or sprint speed is a key fitness component of most field and court sports. The aim of this study was to evaluate the validity and reliability of eight different approaches to quantify this parameter in tests of multiple-sprint performance. Ten physically active men completed two trials of each of two multiple-sprint running protocols with contrasting recovery periods. Protocol 1 consisted of 12 x 30-m sprints repeated every 35 seconds; protocol 2 consisted of 12 x 30-m sprints repeated every 65 seconds. All testing was performed in an indoor sports facility, and sprint times were recorded using twin-beam photocells. All but one of the formulae showed good construct validity, as evidenced by similar within-protocol fatigue scores. However, the assumptions on which many of the formulae were based, combined with poor or inconsistent test-retest reliability (coefficient of variation range: 0.8-145.7%; intraclass correlation coefficient range: 0.09-0.75), suggested many problems regarding logical validity. In line with previous research, the results support the percentage decrement calculation as the most valid and reliable method of quantifying fatigue in tests of multiple-sprint performance.
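The percentage decrement calculation endorsed above is commonly written, for sprint times, as Sdec(%) = 100 x (total time / (best time x number of sprints) - 1). A minimal sketch with invented sprint times is shown below; the exact formula variant used in the study may differ.

```python
# Minimal sketch: percentage decrement score for repeated-sprint times, using
# the commonly cited form S_dec(%) = 100 * (sum of times / (best time * n) - 1).
# Sprint times below are invented.
import numpy as np

sprint_times = np.array([4.31, 4.35, 4.40, 4.38, 4.46, 4.52,
                         4.49, 4.55, 4.61, 4.58, 4.66, 4.70])   # s, 12 x 30 m

def percentage_decrement(times):
    times = np.asarray(times, dtype=float)
    ideal = times.min() * times.size          # every sprint at the fastest time
    return 100.0 * (times.sum() / ideal - 1.0)

print(f"S_dec = {percentage_decrement(sprint_times):.1f}%")
```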
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.
Femtosecond laser micro-inscription of optical coherence tomography resolution test artifacts.
Tomlins, Peter H; Smith, Graham N; Woolliams, Peter D; Rasakanthan, Janarthanan; Sugden, Kate
2011-04-25
Optical coherence tomography (OCT) systems are becoming more commonly used in biomedical imaging and, to enable continued uptake, a reliable method of characterizing their performance and validating their operation is required. This paper outlines the use of femtosecond laser subsurface micro-inscription techniques to fabricate an OCT test artifact for validating the resolution performance of a commercial OCT system. The key advantage of this approach is that, by utilizing nonlinear absorption, a three-dimensional grid of highly localized point and line defects can be written in clear fused silica substrates.
High-performance thin layer chromatography to assess pharmaceutical product quality.
Kaale, Eliangiringa; Manyanga, Vicky; Makori, Narsis; Jenkins, David; Michael Hope, Samuel; Layloff, Thomas
2014-06-01
To assess the sustainability, robustness and economic advantages of high-performance thin layer chromatography (HPTLC) for quality control of pharmaceutical products. We compared three laboratories where three lots of cotrimoxazole tablets were assessed using different techniques for quantifying the active ingredient. The average assay relative standard deviation for the three lots was 1.2 with a range of 0.65-2.0. High-performance thin layer chromatography assessments are yielding valid results suitable for assessing product quality. The local pharmaceutical manufacturer had evolved the capacity to produce very high quality products. © 2014 John Wiley & Sons Ltd.
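The assay relative standard deviation quoted above is a simple dispersion statistic; the snippet below is an illustrative sketch using hypothetical replicate assay values, not data from the study.

```python
# Relative standard deviation (RSD, %) from replicate assay results.
# The values below are invented, for illustration only.
def rsd_percent(values):
    n = len(values)
    mean = sum(values) / n
    sample_var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (sample_var ** 0.5) / mean * 100.0

assays = [99.2, 100.5, 98.7, 101.1, 99.8]  # hypothetical % of label claim
print(f"RSD = {rsd_percent(assays):.2f}%")
```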
Potyrailo, Radislav A; Chisholm, Bret J; Morris, William G; Cawse, James N; Flanagan, William P; Hassib, Lamyaa; Molaison, Chris A; Ezbiansky, Karin; Medford, George; Reitz, Hariklia
2003-01-01
Coupling of combinatorial chemistry methods with high-throughput (HT) performance testing and measurements of resulting properties has provided a powerful set of tools for the 10-fold accelerated discovery of new high-performance coating materials for automotive applications. Our approach replaces labor-intensive steps with automated systems for evaluation of adhesion of 8 x 6 arrays of coating elements that are discretely deposited on a single 9 x 12 cm plastic substrate. Performance of coatings is evaluated with respect to their resistance to adhesion loss, because this parameter is one of the primary considerations in end-use automotive applications. Our HT adhesion evaluation provides previously unavailable capabilities of high speed and reproducibility of testing by using a robotic automation, an expanded range of types of tested coatings by using the coating tagging strategy, and an improved quantitation by using high signal-to-noise automatic imaging. Upon testing, the coatings undergo changes that are impossible to quantitatively predict using existing knowledge. Using our HT methodology, we have developed several coatings leads. These HT screening results for the best coating compositions have been validated on the traditional scales of coating formulation and adhesion loss testing. These validation results have confirmed the superb performance of combinatorially developed coatings over conventional coatings on the traditional scale.
Zhang, Wei-Dong; Wang, Ying; Wang, Qing; Yang, Wan-Jun; Gu, Yi; Wang, Rong; Song, Xiao-Mei; Wang, Xiao-Juan
2012-08-01
A sensitive and reliable ultra-high performance liquid chromatography-electrospray ionization-tandem mass spectrometry method has been developed and partially validated to evaluate the quality of Semen Cassiae (Cassia obtusifolia L.) through simultaneous determination of 11 anthraquinones and two naphtho-γ-pyrone compounds. The analysis was achieved on a Poroshell 120 EC-C(18) column (100 mm × 2.1 mm, 2.7 μm; Agilent, Palo Alto, CA, USA) with gradient elution using a mobile phase that consisted of acetonitrile-water (30 mM ammonium acetate) at a flow rate of 0.4 mL/min. For quantitative analysis, all calibration curves showed excellent linear regression (r² > 0.99) within the testing range. The method was also validated with respect to precision and accuracy, and was successfully applied to quantify the 13 components in nine batches of Semen Cassiae samples from different areas. The performance of the developed method was compared with that of a conventional high-performance liquid chromatography method. The significant advantages of the former include high-speed chromatographic separation, four times faster than high-performance liquid chromatography with conventional columns, and a great enhancement in sensitivity. The developed method provides a new basis for the overall assessment of the quality of Semen Cassiae. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Development and validation of the Hospitality Axiological Scale for Humanization of Nursing Care
Galán González-Serna, José María; Ferreras-Mencia, Soledad; Arribas-Marín, Juan Manuel
2017-01-01
ABSTRACT Objective: to develop and validate a scale to evaluate nursing attitudes in relation to hospitality for the humanization of nursing care. Participants: the sample consisted of 499 nursing professionals and undergraduate students of the final two years of the Bachelor of Science in Nursing program. Method: the instrument has been developed and validated to evaluate the ethical values related to hospitality using a methodological approach. Subsequently, a model was developed to measure the dimensions forming the construct hospitality. Results: the Axiological Hospitality Scale showed a high internal consistency, with Cronbach’s Alpha=0.901. The validation of the measuring instrument was performed using factorial, exploratory and confirmatory analysis techniques with high goodness of fit measures. Conclusions: the developed instrument showed an adequate validity and a high internal consistency. Based on the consistency of its psychometric properties, it is possible to affirm that the scale provides a reliable measurement of the hospitality. It was also possible to determine the dimensions or sources that embrace it: respect, responsibility, quality and transpersonal care. PMID:28793127
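For readers unfamiliar with the internal-consistency statistic reported above, the following sketch computes Cronbach's alpha from a small, entirely hypothetical item-response matrix; it is not the study's analysis.

```python
# Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])
    items = list(zip(*responses))                      # one tuple of scores per item
    item_var_sum = sum(sample_variance(it) for it in items)
    total_var = sample_variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items)
data = [[4, 5, 4, 5], [3, 4, 4, 4], [5, 5, 5, 4], [2, 3, 3, 3], [4, 4, 5, 5]]
print(round(cronbach_alpha(data), 3))
```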
Kim, Hyo Seon; Chun, Jin Mi; Kwon, Bo-In; Lee, A-Reum; Kim, Ho Kyoung; Lee, A Yeong
2016-10-01
Ultra-performance convergence chromatography, which integrates the advantages of supercritical fluid chromatography and ultra high performance liquid chromatography technologies, is an environmentally friendly analytical method that uses dramatically reduced amounts of organic solvents. An ultra-performance convergence chromatography method was developed and validated for the quantification of decursinol angelate and decursin in Angelica gigas using a CSH Fluoro-Phenyl column (2.1 mm × 150 mm, 1.7 μm) with a run time of 4 min. The method had an improved resolution and a shorter analysis time in comparison to the conventional high-performance liquid chromatography method. This method was validated in terms of linearity, precision, and accuracy. The limits of detection were 0.005 and 0.004 μg/mL for decursinol angelate and decursin, respectively, while the limits of quantitation were 0.014 and 0.012 μg/mL, respectively. The two components showed good regression (correlation coefficient (r 2 ) > 0.999), excellent precision (RSD < 2.28%), and acceptable recoveries (99.75-102.62%). The proposed method can be used to efficiently separate, characterize, and quantify decursinol angelate and decursin in Angelica gigas and its related medicinal materials or preparations, with the advantages of a shorter analysis time, greater sensitivity, and better environmental compatibility. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
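The detection and quantitation limits reported above are typically derived from calibration data; the sketch below uses the common ICH-style formulas (LOD = 3.3·σ/S, LOQ = 10·σ/S) with made-up numbers, which may differ from the exact procedure used in the paper.

```python
# ICH-style estimates of detection and quantitation limits from a calibration line:
# LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S, where sigma is the standard
# deviation of the response (e.g., of the intercept) and S is the calibration slope.
def lod_loq(sigma, slope):
    return 3.3 * sigma / slope, 10.0 * sigma / slope

sigma = 12.0      # hypothetical SD of the blank/intercept response (peak area units)
slope = 8500.0    # hypothetical calibration slope (peak area per ug/mL)
lod, loq = lod_loq(sigma, slope)
print(f"LOD ~ {lod:.4f} ug/mL, LOQ ~ {loq:.4f} ug/mL")
```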
Validity of a smartphone protractor to measure sagittal parameters in adult spinal deformity.
Kunkle, William Aaron; Madden, Michael; Potts, Shannon; Fogelson, Jeremy; Hershman, Stuart
2017-10-01
Smartphones have become an integral tool in the daily life of health-care professionals (Franko 2011). Their ease of use and wide availability often make smartphones the first tool surgeons use to perform measurements. This technique has been validated for certain orthopedic pathologies (Shaw 2012; Quek 2014; Milanese 2014; Milani 2014), but never to assess sagittal parameters in adult spinal deformity (ASD). This study was designed to assess the validity, reproducibility, precision, and efficiency of using a smartphone protractor application to measure sagittal parameters commonly measured in ASD assessment and surgical planning. This study aimed to (1) determine the validity of smartphone protractor applications, (2) determine the intra- and interobserver reliability of smartphone protractor applications when used to measure sagittal parameters in ASD, (3) determine the efficiency of using a smartphone protractor application to measure sagittal parameters, and (4) elucidate whether a physician's level of experience impacts the reliability or validity of using a smartphone protractor application to measure sagittal parameters in ASD. An experimental validation study was carried out. Thirty standard 36″ standing lateral radiographs were examined. Three separate measurements were performed using a marker and protractor; then at a separate time point, three separate measurements were performed using a smartphone protractor application for all 30 radiographs. The first 10 radiographs were then re-measured two more times, for a total of three measurements from both the smartphone protractor and marker and protractor. The parameters included lumbar lordosis, pelvic incidence, and pelvic tilt. Three raters performed all measurements-a junior level orthopedic resident, a senior level orthopedic resident, and a fellowship-trained spinal deformity surgeon. All data, including the time to perform the measurements, were recorded, and statistical analysis was performed to determine intra- and interobserver reliability, as well as accuracy, efficiency, and precision. Statistical analysis using the intra- and interclass correlation coefficient was calculated using R (version 3.3.2, 2016) to determine the degree of intra- and interobserver reliability. High rates of intra- and interobserver reliability were observed between the junior resident, senior resident, and attending surgeon when using the smartphone protractor application as demonstrated by high inter- and intra-class correlation coefficients greater than 0.909 and 0.874 respectively. High rates of inter- and intraobserver reliability were also seen between the junior resident, senior resident, and attending surgeon when a marker and protractor were used as demonstrated by high inter- and intra-class correlation coefficients greater than 0.909 and 0.807 respectively. The lumbar lordosis, pelvic incidence, and pelvic tilt values were accurately measured by all three raters, with excellent inter- and intra-class correlation coefficient values. When the first 10 radiographs were re-measured at different time points, a high degree of precision was noted. Measurements performed using the smartphone application were consistently faster than using a marker and protractor-this difference reached statistical significance of p<.05. Adult spinal deformity radiographic parameters can be measured accurately, precisely, reliably, and more efficiently using a smartphone protractor application than with a standard protractor and wax pencil. 
A high degree of intra- and interobserver reliability was seen between the residents and attending surgeon, indicating measurements made with a smartphone protractor are unaffected by an observer's level of experience. As a result, smartphone protractors may be used when planning ASD surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
An Integrated Study on a Novel High Temperature High Entropy Alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shizhong
2016-12-31
This report summarizes our recent work on theoretical modeling, simulation, and experimental validation of the simulation results for the design of a new refractory high entropy alloy (HEA) and for oxide-doped refractory HEA research. Simulations of the stability and thermodynamics of potentially thermally stable candidates were performed, and the corresponding HEA and oxide-doped samples were synthesized and characterized. The development of ab initio density functional theory and molecular dynamics methods for simulating HEA physical properties and of experimental texture validation techniques, the achievements already reached, course work development, student and postdoc training, and directions for future research are briefly introduced.
Gasquoine, Philip G; Weimer, Amy A; Amador, Arnoldo
2017-04-01
To measure specificity as failure rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures: (a) the language format Reliable Digit Span; (b) visual-perceptual format Test of Memory Malingering; and (c) visual-perceptual format Dot Counting, using optimal/suboptimal effort cut scores developed for monolingual, English-speakers. Participants were 61 consecutive referrals, aged between 18 and 65 years, with <16 years of education who were subjectively bilingual (confirmed via formal assessment) and chose the language of assessment, Spanish or English, for the performance validity tests. Failure rates were 38% for Reliable Digit Span, 3% for the Test of Memory Malingering, and 7% for Dot Counting. For Reliable Digit Span, the failure rates for Spanish (46%) and English (31%) languages of administration did not differ significantly. Optimal/suboptimal effort cut scores derived for monolingual English-speakers can be used with Spanish/English bilinguals when using the visual-perceptual format Test of Memory Malingering and Dot Counting. The high failure rate for Reliable Digit Span suggests it should not be used as a performance validity measure with Spanish/English bilinguals, irrespective of the language of test administration, Spanish or English.
Validation of a wireless modular monitoring system for structures
NASA Astrophysics Data System (ADS)
Lynch, Jerome P.; Law, Kincho H.; Kiremidjian, Anne S.; Carryer, John E.; Kenny, Thomas W.; Partridge, Aaron; Sundararajan, Arvind
2002-06-01
A wireless sensing unit for use in a Wireless Modular Monitoring System (WiMMS) has been designed and constructed. Drawing upon advanced technological developments in the areas of wireless communications, low-power microprocessors and micro-electro mechanical system (MEMS) sensing transducers, the wireless sensing unit represents a high-performance yet low-cost solution to monitoring the short-term and long-term performance of structures. A sophisticated reduced instruction set computer (RISC) microcontroller is placed at the core of the unit to accommodate on-board computations, measurement filtering and data interrogation algorithms. The functionality of the wireless sensing unit is validated through various experiments involving multiple sensing transducers interfaced to the sensing unit. In particular, MEMS-based accelerometers are used as the primary sensing transducer in this study's validation experiments. A five degree of freedom scaled test structure mounted upon a shaking table is employed for system validation.
On the Validity of Beer-Lambert Law and its Significance for Sunscreens.
Herzog, Bernd; Schultheiss, Amélie; Giesinger, Jochen
2018-03-01
The sun protection factor (SPF) is the most important quantity to characterize the performance of sunscreens. As the standard method for its determination is based on clinical trials involving irradiation of human volunteers, calculations of sunscreen performance have become quite popular to reduce the number of in vivo studies. Such simulations imply the calculation of UV transmittance of the sunscreen film using the amounts and spectroscopic properties of the UV absorbers employed, and presuppose the validity of the Beer-Lambert law. As sunscreen films on human skin can contain considerable concentrations of UV absorbers, it is questioned whether the Beer-Lambert law is still valid for these systems. The results of this work show that the validity of the Beer-Lambert law is still given at the high concentrations at which UV absorbers occur in sunscreen films on human skin. © 2017 The American Society of Photobiology.
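As a reminder of the relationship being tested, the sketch below applies the Beer-Lambert law to a toy sunscreen film and folds the resulting transmittance into the usual in-silico SPF expression. The absorber properties, spectra, film parameters and the coarse 10-nm grid are all assumptions for illustration, not values from the paper.

```python
# Beer-Lambert: A(lambda) = eps(lambda) * c * path; T(lambda) = 10**(-A(lambda)).
# Illustrative in-silico SPF = sum(E*S) / sum(E*S*T) over the UV wavelengths.
# All numbers below are invented.
wavelengths = [290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400]
eps = [0.9, 0.8, 0.6, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]   # absorber extinction (arb. units)
conc_path = 1.2                                                        # concentration x effective film path
erythemal = [1.0, 0.65, 0.07, 0.01, 0.004, 0.002, 0.0016, 0.0013, 0.001, 0.0009, 0.0008, 0.0007]
solar = [0.02, 0.2, 0.5, 0.8, 1.0, 1.1, 1.1, 1.1, 1.2, 1.2, 1.2, 1.2]  # relative spectral irradiance

transmittance = [10 ** (-(e * conc_path)) for e in eps]
num = sum(E * S for E, S in zip(erythemal, solar))
den = sum(E * S * T for E, S, T in zip(erythemal, solar, transmittance))
print(f"Illustrative in-silico SPF: {num / den:.1f}")
```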
Coran, Silvia A; Bartolucci, Gianluca; Bambagiotti-Alberti, Massimo
2008-10-17
The validation of an HPTLC-densitometric method for the determination of secoisolariciresinol diglucoside (SDG) in flaxseed was performed, improving the reproducibility of a previously reported HPTLC densitometric procedure by the use of fully wettable reversed-phase plates (silica gel 60 RP18W F(254S), 10 cm × 10 cm) with MeOH:HCOOH 0.1% (40:60, v/v) mobile phase. The analysis required only the alkaline hydrolysis in aqueous medium of undefatted samples and densitometry at 282 nm of the HPTLC runs. The method was validated following the protocol proposed by the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP), giving rise to a dependable and high-throughput procedure well suited to routine application. SDG was quantified in the range of 321-1071 ng, with RSD of repeatability and intermediate precision not exceeding 3.61% and accuracy inside the acceptance limits. Flaxseed of five cultivars of different origin was selected as the test-bed.
Colour-cueing in visual search.
Laarni, J
2001-02-01
Several studies have shown that people can selectively attend to stimulus colour, e.g., in visual search, and that preknowledge of a target colour can improve response speed/accuracy. The purpose was to use a form-identification task to determine whether valid colour precues can produce benefits and invalid cues costs. The subject had to identify the orientation of a "T"-shaped element in a ring of randomly-oriented "L"s when either two or four of the elements were differently coloured. Contrary to Moore and Egeth's (1998) recent findings, colour-based attention did affect performance under data-limited conditions: Colour cues produced benefits when processing load was high; when the load was reduced, they incurred only costs. Surprisingly, a valid colour cue succeeded in improving performance in the high-load condition even when its validity was reduced to the chance level. Overall, the results suggest that knowledge of a target colour does not facilitate the processing of the target, but makes it possible to prioritize it.
Kawaguchi, Koji; Egi, Hiroyuki; Hattori, Minoru; Sawada, Hiroyuki; Suzuki, Takahisa; Ohdan, Hideki
2014-10-01
Virtual reality surgical simulators are becoming popular as a means of providing trainees with an opportunity to practice laparoscopic skills. The Lap-X (Epona Medical, Rotterdam, the Netherlands) is a novel VR simulator for training basic skills in laparoscopic surgery. The objective of this study was to validate the Lap-X laparoscopic virtual reality simulator by assessing its face and construct validity in order to determine whether the simulator is adequate for basic skills training. Face and content validity were evaluated using a structured questionnaire. To assess construct validity, the participants, nine expert surgeons (median age 40 years, range 32-45; >100 laparoscopic procedures) and 11 novices, performed three basic laparoscopic tasks using the Lap-X. The participants reported a high level of content validity, and the questionnaire ratings did not differ significantly between the expert surgeons and the novices (Ps > 0.246). The performance of the expert surgeons on the three tasks was significantly better than that of the novices in all parameters (Ps < 0.05). This study demonstrated the face, content and construct validity of the Lap-X. The Lap-X holds real potential as a home and hospital training device.
Hill, John C; Millán, Iñigo San
2014-09-01
Glycogen storage is essential for exercise performance. The ability to assess muscle glycogen levels should be an important advantage for performance. However, skeletal muscle glycogen assessment has only been available and validated through muscle biopsy. We have developed a new methodology using high-frequency ultrasound to assess skeletal muscle glycogen content in a rapid, portable, and noninvasive way using MuscleSound (MuscleSound, LCC, Denver, CO) technology. To validate the utilization of high-frequency musculoskeletal ultrasound for muscle glycogen assessment and correlate it with histochemical glycogen quantification through muscle biopsy. Twenty-two male competitive cyclists (categories: Pro, 1-4; average height, 183.7 ± 4.9 cm; average weight, 76.8 ± 7.8 kg) performed a steady-state test on a cycle ergometer for 90 minutes at a moderate to high exercise intensity, eliciting a carbohydrate oxidation of 2-3 g·min⁻¹ and a blood lactate concentration of 2 to 3 mM. Pre- and post-exercise glycogen content from the rectus femoris muscle was measured using histochemical analysis through muscle biopsy and through high-frequency ultrasound scans using MuscleSound technology. Correlations between muscle biopsy glycogen histochemical quantification (mmol·kg⁻¹) and the high-frequency ultrasound methodology through MuscleSound technology were r = 0.93 (P < 0.0001) pre-exercise and r = 0.94 (P < 0.0001) post-exercise. The correlation between muscle biopsy glycogen quantification and the high-frequency ultrasound methodology for the change in glycogen from pre- to post-exercise was r = 0.81 (P < 0.0001). These results demonstrate that skeletal muscle glycogen can be measured quickly and noninvasively through high-frequency ultrasound using MuscleSound technology.
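The correlations reported above are ordinary Pearson coefficients; a minimal sketch with invented paired biopsy and ultrasound values is shown below for readers who want to reproduce the statistic.

```python
# Pearson correlation coefficient between two paired measurement series.
# The biopsy and ultrasound values below are invented, not study data.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

biopsy = [420, 380, 455, 300, 510, 360]   # hypothetical glycogen values, mmol/kg
ultrasound = [48, 44, 55, 33, 60, 41]     # hypothetical ultrasound-derived scores
print(round(pearson_r(biopsy, ultrasound), 3))
```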
Macdonald, Isabel K.; Allen, Jared; Murray, Andrea; Parsy-Kowalska, Celine B.; Healey, Graham F.; Chapman, Caroline J.; Sewell, Herbert F.; Robertson, John F. R.
2012-01-01
An assay employing a panel of tumor-associated antigens has been validated and is available commercially (EarlyCDT®-Lung) to aid the early detection of lung cancer by measurement of serum autoantibodies. The high throughput (HTP) strategy described herein was pursued to identify new antigens to add to the EarlyCDT-Lung panel and to assist in the development of new panels for other cancers. Two ligation-independent cloning vectors were designed and synthesized, producing fusion proteins suitable for the autoantibody ELISA. We developed an abridged HTP version of the validated autoantibody ELISA, determining that results reflected the performance of the EarlyCDT assay, by comparing results on both formats. Once validated this HTP ELISA was utilized to screen multiple fusion proteins prepared on small-scale, by a HTP expression screen. We determined whether the assay performance for these HTP protein batches was an accurate reflection of the performance of R&D or commercial batches. A HTP discovery platform for the identification and optimal production of tumor- associated antigens which detects autoantibodies has been developed and validated. The most favorable conditions for the exposure of immunogenic epitopes were assessed to produce discriminatory proteins for use in a commercial ELISA. This process is rapid and cost-effective compared to standard cloning and screening technologies and enables rapid advancement in the field of autoantibody assay discovery. This approach will significantly reduce timescale and costs for developing similar panels of autoantibody assays for the detection of other cancer types with the ultimate aim of improved overall survival due to early diagnosis and treatment. PMID:22815807
Artilheiro, Mariana Cunha; Fávero, Francis Meire; Caromano, Fátima Aparecida; Oliveira, Acary de Souza Bulle; Carvas, Nelson; Voos, Mariana Callil; Sá, Cristina Dos Santos Cardoso de
2017-12-08
The Jebsen-Taylor Test evaluates upper limb function by measuring timed performance on everyday activities. The test is used to assess and monitor the progression of patients with Parkinson disease, cerebral palsy, stroke and brain injury. To analyze the reliability, internal consistency and validity of the Jebsen-Taylor Test in people with Muscular Dystrophy and to describe and classify upper limb timed performance of people with Muscular Dystrophy. Fifty patients with Muscular Dystrophy were assessed. Non-dominant and dominant upper limb performances on the Jebsen-Taylor Test were filmed. Two raters evaluated timed performance for inter-rater reliability analysis. Test-retest reliability was investigated by using intraclass correlation coefficients. Internal consistency was assessed using the Cronbach alpha. Construct validity was assessed by comparing the Jebsen-Taylor Test with the Performance of Upper Limb. The internal consistency of the Jebsen-Taylor Test was good (Cronbach's α=0.98). Inter-rater reliability was very high (0.903-0.999), except for the writing item, for which the intraclass correlation coefficient ranged from 0.772 to 1.000. Strong correlations between the Jebsen-Taylor Test and the Performance of Upper Limb Module were found (rho=-0.712). The Jebsen-Taylor Test is a reliable and valid measure of timed performance for people with Muscular Dystrophy. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
Lam, Lucia L.; Ghadessi, Mercedeh; Erho, Nicholas; Vergara, Ismael A.; Alshalalfa, Mohammed; Buerki, Christine; Haddad, Zaid; Sierocinski, Thomas; Triche, Timothy J.; Skinner, Eila C.; Davicioni, Elai; Daneshmand, Siamak; Black, Peter C.
2014-01-01
Background Nearly half of muscle-invasive bladder cancer patients succumb to their disease following cystectomy. Selecting candidates for adjuvant therapy is currently based on clinical parameters with limited predictive power. This study aimed to develop and validate genomic-based signatures that can better identify patients at risk for recurrence than clinical models alone. Methods Transcriptome-wide expression profiles were generated using 1.4 million feature-arrays on archival tumors from 225 patients who underwent radical cystectomy and had muscle-invasive and/or node-positive bladder cancer. Genomic (GC) and clinical (CC) classifiers for predicting recurrence were developed on a discovery set (n = 133). Performances of GC, CC, an independent clinical nomogram (IBCNC), and genomic-clinicopathologic classifiers (G-CC, G-IBCNC) were assessed in the discovery and independent validation (n = 66) sets. GC was further validated on four external datasets (n = 341). Discrimination and prognostic abilities of classifiers were compared using area under receiver-operating characteristic curves (AUCs). All statistical tests were two-sided. Results A 15-feature GC was developed on the discovery set with area under curve (AUC) of 0.77 in the validation set. This was higher than individual clinical variables, IBCNC (AUC = 0.73), and comparable to CC (AUC = 0.78). Performance was improved upon combining GC with clinical nomograms (G-IBCNC, AUC = 0.82; G-CC, AUC = 0.86). G-CC high-risk patients had elevated recurrence probabilities (P < .001), with GC being the best predictor by multivariable analysis (P = .005). Genomic-clinicopathologic classifiers outperformed clinical nomograms by decision curve and reclassification analyses. GC performed the best in validation compared with seven prior signatures. GC markers remained prognostic across four independent datasets. Conclusions The validated genomic-based classifiers outperform clinical models for predicting postcystectomy bladder cancer recurrence. This may be used to better identify patients who need more aggressive management. PMID:25344601
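The discrimination metric used in the study above, the area under the ROC curve, can be computed directly from its rank interpretation; the sketch below uses invented risk scores and recurrence labels, not the study's data.

```python
# AUC via the Mann-Whitney interpretation: the probability that a randomly chosen
# recurring case scores higher than a randomly chosen recurrence-free case (ties count 0.5).
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

recurred = [0.81, 0.66, 0.93, 0.58]                # hypothetical classifier scores
recurrence_free = [0.22, 0.45, 0.61, 0.30, 0.15]
print(round(auc(recurred, recurrence_free), 3))
```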
Wenzl, Thomas; Karasek, Lubomir; Rosen, Johan; Hellenaes, Karl-Erik; Crews, Colin; Castle, Laurence; Anklam, Elke
2006-11-03
A European inter-laboratory study was conducted to validate two analytical procedures for the determination of acrylamide in bakery ware (crispbreads, biscuits) and potato products (chips), within a concentration range from about 20 microg/kg to about 9000 microg/kg. The methods are based on gas chromatography-mass spectrometry (GC-MS) of the derivatised analyte and on high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) of native acrylamide. Isotope dilution with isotopically labelled acrylamide was an integral part of both methods. The study was evaluated according to internationally accepted guidelines. The performance of the HPLC-MS/MS method was found to be superior to that of the GC-MS method and to be fit for purpose.
ERIC Educational Resources Information Center
Tsatsanis, Katherine D.; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S.; Klin, Ami; Volkmar, Fred R.
2003-01-01
The concurrent validity of the original and revised versions of the Leiter International Performance Scale was examined with 26 children (ages 4-16) with autism. Although the correlation between the two tests was high (.87), there were significant intra-individual discrepancies present in 10 cases, two of which were both large and clinically…
USDA-ARS?s Scientific Manuscript database
The purpose of this study was to develop a Single-Lab Validated Method using high-performance liquid chromatography (HPLC) with different detectors (diode array detector - DAD, fluorescence detector - FLD, and mass spectrometer - MS) for determination of seven B-complex vitamins (B1 - thiamin, B2 – ...
Development of an interprofessional lean facilitator assessment scale.
Bravo-Sanchez, Cindy; Dorazio, Vincent; Denmark, Robert; Heuer, Albert J; Parrott, J Scott
2018-05-01
High reliability is important for optimising quality and safety in healthcare organisations. Reliability efforts include interprofessional collaborative practice (IPCP) and Lean quality/process improvement strategies, which require skilful facilitation. Currently, no validated Lean facilitator assessment tool for interprofessional collaboration exists. This article describes the development and pilot evaluation of such a tool; the Interprofessional Lean Facilitator Assessment Scale (ILFAS), which measures both technical and 'soft' skills, which have not been measured in other instruments. The ILFAS was developed using methodologies and principles from Lean/Shingo, IPCP, metacognition research and Bloom's Taxonomy of Learning Domains. A panel of experts confirmed the initial face validity of the instrument. Researchers independently assessed five facilitators, during six Lean sessions. Analysis included quantitative evaluation of rater agreement. Overall inter-rater agreement of the assessment of facilitator performance was high (92%), and discrepancies in the agreement statistics were analysed. Face and content validity were further established, and usability was evaluated, through primary stakeholder post-pilot feedback, uncovering minor concerns, leading to tool revision. The ILFAS appears comprehensive in the assessment of facilitator knowledge, skills, abilities, and may be useful in the discrimination between facilitators of different skill levels. Further study is needed to explore instrument performance and validity.
NASA Astrophysics Data System (ADS)
Glocer, A.; Rastätter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.; Weigel, R. S.; McCollough, J.; Wing, S.
2016-07-01
We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
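For context on the contingency-table verification mentioned above, the sketch below computes a few standard categorical skill metrics (probability of detection, false alarm ratio and the Heidke skill score) from a hypothetical 2x2 event/no-event table; the specific metrics, thresholds and counts used in the CCMC study may differ.

```python
# Categorical verification of a yes/no forecast (e.g., K-index above a threshold).
# a = hits, b = false alarms, c = misses, d = correct negatives.  Counts are made up.
def skill_metrics(a, b, c, d):
    pod = a / (a + c)                                   # probability of detection
    far = b / (a + b)                                   # false alarm ratio
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke skill score
    return pod, far, hss

pod, far, hss = skill_metrics(a=42, b=18, c=11, d=129)
print(f"POD={pod:.2f}  FAR={far:.2f}  HSS={hss:.2f}")
```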
NASA Astrophysics Data System (ADS)
Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.
2018-03-01
Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography, applying a quality by design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature as a center point was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors evaluated against the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Alteration of MPC significantly affected retention time. Furthermore, an increase of FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range of 1-16 µg/mL and showed good linearity, precision, and accuracy, and was efficient, with an analysis time within 10 min.
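The 2² factorial optimisation described above can be summarised by estimating main and interaction effects from the coded design; the sketch below does this for a single, invented response (retention time), ignoring the centre point, and is not the study's actual data or analysis.

```python
# Effect estimation for a 2^2 full factorial design with coded factor levels (-1/+1).
# Factors: mobile phase composition (MPC) and flow rate (FR); responses are invented.
runs = [
    # (MPC, FR, response e.g. retention time in min)
    (-1, -1, 9.8),
    (+1, -1, 7.9),
    (-1, +1, 8.6),
    (+1, +1, 6.4),
]

def effect(design, column):
    """Average response at the +1 level minus average at the -1 level."""
    hi = [r[2] for r in design if r[column] == +1]
    lo = [r[2] for r in design if r[column] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# The product of the two coded columns codes the MPC x FR interaction.
interaction = [(a * b, None, y) for a, b, y in runs]
print("MPC effect:", round(effect(runs, 0), 2))
print("FR effect:", round(effect(runs, 1), 2))
print("MPC x FR interaction:", round(effect(interaction, 0), 2))
```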
Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi
2014-01-01
A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large scale surveys, or community screening use. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and triangulating both models with physical performance measures in older adults. 604 community-dwelling older adults of at least 60 years in age volunteered to participate. Self-reported function on mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains were recorded and then the disability status determined based on both the multidimensional hierarchical categorization model and the mobility hierarchical categorization model. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. Both categorization models showed high correlation (γs = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significant different group means among the disability subgroups based on both categorization models. The results of multiple regression analysis indicated that both models individually explain similar amount of variance on all physical performances, with adjustments for age, sex, and number of comorbidities. Our results found that the mobility hierarchical disability categorization model is a valid and time efficient tool for large survey or screening use.
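The agreement statistic quoted above (kappa = 0.61) is Cohen's kappa; the sketch below shows the computation on a small, invented cross-classification of disability categories, not the study's table.

```python
# Cohen's kappa from a square agreement (confusion) matrix between two categorisation models:
# kappa = (p_observed - p_expected) / (1 - p_expected).  Counts below are invented.
def cohens_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    p_obs = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    p_exp = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Rows: mobility-model category, columns: multidimensional-model category (hypothetical counts)
table = [
    [120,  15,   5],
    [ 18, 200,  22],
    [  4,  25, 195],
]
print(round(cohens_kappa(table), 3))
```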
CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is utilizing a validated model in a high resolution code to generate synthetic data for improvement of the same model in a lower resolution code. This process is useful in circumstances where experimental data do not exist or are not sufficient in quantity or resolution. Data from the high-fidelity code are treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process, with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) were used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the function most similar to that of the turbulence models in STAR. The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.
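To make the calibration idea concrete, the toy sketch below tunes a scalar mixing coefficient so that a crude low-fidelity response matches synthetic "high-fidelity" data in a least-squares sense. It is a schematic of the Hi2Lo concept only; the response model, data, parameter range and search strategy are all invented and bear no relation to the actual CTF, STAR-CCM+ or Dakota workflow.

```python
# Toy Hi2Lo-style calibration: choose the mixing coefficient beta that minimises the
# squared mismatch between a simple low-fidelity model and high-fidelity data.
# Everything here (model form, data, bounds) is invented for illustration.
def low_fidelity_exit_temp(beta, power):
    # Crude stand-in response: more mixing (larger beta) flattens the temperature rise.
    return 560.0 + power / (1.0 + 8.0 * beta)

powers = [10.0, 20.0, 30.0, 40.0]              # hypothetical rod powers
hi_fi_temps = [568.1, 575.9, 583.4, 590.6]     # synthetic "high-fidelity" calibration data

def sse(beta):
    return sum((low_fidelity_exit_temp(beta, p) - t) ** 2 for p, t in zip(powers, hi_fi_temps))

# Simple grid search over a plausible range (a real workflow would use Dakota's optimisers).
candidates = [i / 1000.0 for i in range(1, 501)]
best_beta = min(candidates, key=sse)
print(f"Calibrated beta ~ {best_beta:.3f}, SSE = {sse(best_beta):.3f}")
```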
Tsugawa, Yusuke; Ohbu, Sadayoshi; Cruess, Richard; Cruess, Sylvia; Okubo, Tomoya; Takahashi, Osamu; Tokuda, Yasuharu; Heist, Brian S; Bito, Seiji; Itoh, Toshiyuki; Aoki, Akiko; Chiba, Tsutomu; Fukui, Tsuguya
2011-08-01
Despite the growing importance of and interest in medical professionalism, there is no standardized tool for its measurement. The authors sought to verify the validity, reliability, and generalizability of the Professionalism Mini-Evaluation Exercise (P-MEX), a previously developed and tested tool, in the context of Japanese hospitals. A multicenter, cross-sectional evaluation study was performed to investigate the validity, reliability, and generalizability of the P-MEX in seven Japanese hospitals. In 2009-2010, 378 evaluators (attending physicians, nurses, peers, and junior residents) completed 360-degree assessments of 165 residents and fellows using the P-MEX. The content validity and criterion-related validity were examined, and the construct validity of the P-MEX was investigated by performing confirmatory factor analysis through a structural equation model. The reliability was tested using generalizability analysis. The contents of the P-MEX achieved good acceptance in a preliminary working group, and the poststudy survey revealed that 302 (79.9%) evaluators rated the P-MEX items as appropriate, indicating good content validity. The correlation coefficient between P-MEX scores and external criteria was 0.78 (P < .001), demonstrating good criterion-related validity. Confirmatory factor analysis verified high path coefficient (0.60-0.99) and adequate goodness of fit of the model. The generalizability analysis yielded a high dependability coefficient, suggesting good reliability, except when evaluators were peers or junior residents. Findings show evidence of adequate validity, reliability, and generalizability of the P-MEX in Japanese hospital settings. The P-MEX is the only evaluation tool for medical professionalism verified in both a Western and East Asian cultural context.
van der Meulen, Mirja W; Boerebach, Benjamin C M; Smirnova, Alina; Heeneman, Sylvia; Oude Egbrink, Mirjam G A; van der Vleuten, Cees P M; Arah, Onyebuchi A; Lombarts, Kiki M J M H
2017-01-01
Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. Using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α and generalizability analyses, the psychometric qualities and feasibility of the INCEPT were investigated. For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self)-management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents and three to four coworkers were sufficient. The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
Analysis of Flowfields over Four-Engine DC-X Rockets
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cornelison, Joni
1996-01-01
The objective of this study is to validate a computational methodology for the aerodynamic performance of an advanced conical launch vehicle configuration. The computational methodology is based on a three-dimensional, viscous flow, pressure-based computational fluid dynamics formulation. Both wind-tunnel and ascent flight-test data are used for validation. Emphasis is placed on multiple-engine power-on effects. Computational characterization of the base drag in the critical subsonic regime is the focus of the validation effort; until recently, almost no multiple-engine data existed for a conical launch vehicle configuration. Parametric studies using high-order difference schemes are performed for the cold-flow tests, whereas grid studies are conducted for the flight tests. The computed vehicle axial force coefficients, forebody, aftbody, and base surface pressures compare favorably with those of tests. The results demonstrate that with adequate grid density and proper distribution, a high-order difference scheme, finite rate afterburning kinetics to model the plume chemistry, and a suitable turbulence model to describe separated flows, plume/air mixing, and boundary layers, computational fluid dynamics is a tool that can be used to predict the low-speed aerodynamic performance for rocket design and operations.
Irakli, Maria N; Samanidou, Victoria F; Papadoyannis, Ioannis N
2012-03-07
The separation and determination of tocopherols (Ts) and tocotrienols (T3s) by reversed-phase high-performance liquid chromatography with fluorescence detection has been developed and validated after optimization of various chromatographic conditions and other experimental parameters. Analytes were separated on a PerfectSil Target ODS-3 (250 × 4.6 mm, 3 μm) column filled with a novel sorbent material of ultrapure silica gel. The separation of Ts and T3s was optimized in terms of mobile-phase composition and column temperature on the basis of the best compromise among efficiency, resolution, and analysis time. Using a gradient elution of mobile phase composed of isopropanol/water and 7 °C column temperature, a satisfactory resolution was achieved within 62 min. For the quantitative determination, α-T acetate (50 μg/mL) was used as the internal standard. Detection limits ranged from 0.27 μg/mL (γ-T) to 0.76 μg/mL (γ-T3). The validation of the method was examined performing intraday (n = 5) and interday (n = 3) assays and was found to be satisfactory, with high accuracy and precision results. Solid-phase extraction provided high relative extraction recoveries from cereal samples: 87.0% for γ-T3 and 115.5% for δ-T. The method was successfully applied to cereals, such as durum wheat, bread wheat, rice, barley, oat, rye, and corn.
System-Level Radiation Hardening
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2014-01-01
Although system-level radiation hardening can enable the use of high-performance components and enhance the capabilities of a spacecraft, hardening techniques can be costly and can compromise the very performance designers sought from the high-performance components. Moreover, such techniques often result in a complicated design, especially if several complex commercial microcircuits are used, each posing its own hardening challenges. The latter risk is particularly acute for Commercial-Off-The-Shelf components since high-performance parts (e.g. double-data-rate synchronous dynamic random access memories - DDR SDRAMs) may require other high-performance commercial parts (e.g. processors) to support their operation. For these reasons, it is essential that system-level radiation hardening be a coordinated effort, from setting requirements through testing up to and including validation.
Comprehensive Calibration and Validation Site for Information Remote Sensing
NASA Astrophysics Data System (ADS)
Li, C. R.; Tang, L. L.; Ma, L. L.; Zhou, Y. S.; Gao, C. X.; Wang, N.; Li, X. H.; Wang, X. H.; Zhu, X. H.
2015-04-01
As a natural part of information technology, Remote Sensing (RS) is required to provide precise and accurate information products to serve industry, academia and the public in this information economy era. To meet the need for high quality RS products, building a fully functional and advanced calibration system, including measuring instruments, measuring approaches and target sites, becomes extremely important. Supported by MOST of China via a national plan, great progress has been made in constructing a comprehensive calibration and validation (Cal&Val) site, located in Baotou, 600 km west of Beijing, which integrates most functions of RS sensor aviation testing, EO satellite on-orbit calibration and performance assessment, and RS product validation. The site is equipped with various artificial standard targets, including portable and permanent targets, which support long-term calibration and validation. A number of well-designed ground measuring instruments and airborne standard sensors have been developed to realize high-accuracy stepwise validation, an approach that avoids or reduces the uncertainties caused by nonsynchronized measurements. As part of its contribution to the worldwide Cal&Val studies coordinated by CEOS-WGCV, the Baotou site offers its support to the Radiometric Calibration Network of Automated Instruments (RadCalNet), with the aim of providing a demonstrated global standard automated radiometric calibration service in cooperation with ESA, NASA, CNES and NPL. Furthermore, several Cal&Val campaigns have been performed during the past years to calibrate and validate spaceborne/airborne optical and SAR sensors, and the results of some typical demonstrations are discussed in this study.
Reliability and validity of an accele-rometric system for assessing vertical jumping performance.
Choukou, M-A; Laffaye, G; Taiar, R
2014-03-01
The validity of an accelerometric system (Myotest©) for assessing vertical jump height, vertical force and power, leg stiffness and reactivity index was examined. 20 healthy males performed 3×"5 hops in place", 3×"1 squat jump" and 3× "1 countermovement jump" during 2 test-retest sessions. The variables were simultaneously assessed using an accelerometer and a force platform at a frequency of 0.5 and 1 kHz, respectively. Both reliability and validity of the accelerometric system were studied. No significant differences between test and retest data were found (p < 0.05), showing a high level of reliability. Besides, moderate to high intraclass correlation coefficients (ICCs) (from 0.74 to 0.96) were obtained for all variables whereas weak to moderate ICCs (from 0.29 to 0.79) were obtained for force and power during the countermovement jump. With regards to validity, the difference between the two devices was not significant for 5 hops in place height (1.8 cm), force during squat (-1.4 N · kg(-1)) and countermovement (0.1 N · kg(-1)) jumps, leg stiffness (7.8 kN · m(-1)) and reactivity index (0.4). So, the measurements of these variables with this accelerometer are valid, which is not the case for the other variables. The main causes of non-validity for velocity, power and contact time assessment are temporal biases of the takeoff and touchdown moments detection.
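For context, vertical jump height in such systems is often derived from flight time via elementary projectile kinematics (h = g·t²/8); the sketch below illustrates that relation with made-up flight times and is not the Myotest's proprietary processing.

```python
# Jump height from flight time using h = g * t^2 / 8 (symmetric flight assumption:
# takeoff velocity v = g*t/2, then h = v^2 / (2g)).  Flight times below are invented.
G = 9.81  # m/s^2

def jump_height_from_flight_time(t_flight):
    return G * t_flight ** 2 / 8.0

for t in (0.45, 0.52, 0.60):
    print(f"flight time {t:.2f} s -> height {jump_height_from_flight_time(t) * 100:.1f} cm")
```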
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.
Validation of the Cognition Test Battery for Spaceflight in a Sample of Highly Educated Adults.
Moore, Tyler M; Basner, Mathias; Nasrini, Jad; Hermosillo, Emanuel; Kabadi, Sushila; Roalf, David R; McGuire, Sarah; Ecker, Adrian J; Ruparel, Kosha; Port, Allison M; Jackson, Chad T; Dinges, David F; Gur, Ruben C
2017-10-01
Neuropsychological changes that may occur due to the environmental and psychological stressors of prolonged spaceflight motivated the development of the Cognition Test Battery. The battery was designed to assess multiple domains of neurocognitive functions linked to specific brain systems. Tests included in Cognition have been validated, but not in high-performing samples comparable to astronauts, which is an essential step toward ensuring their usefulness in long-duration space missions. We administered Cognition (on laptop and iPad) and the WinSCAT, counterbalanced for order and version, in a sample of 96 subjects (50% women; ages 25-56 yr) with at least a Master's degree in science, technology, engineering, or mathematics (STEM). We assessed the associations of age, sex, and administration device with neurocognitive performance, and compared the scores on the Cognition battery with those of WinSCAT. Confirmatory factor analysis compared the structure of the iPad and laptop administration methods using Wald tests. Age was associated with longer response times (mean β = 0.12) and less accurate (mean β = -0.12) performance, women had longer response times on psychomotor (β = 0.62), emotion recognition (β = 0.30), and visuo-spatial (β = 0.48) tasks, men outperformed women on matrix reasoning (β = -0.34), and performance on an iPad was generally faster (mean β = -0.55). The WinSCAT appeared heavily loaded with tasks requiring executive control, whereas Cognition assessed a larger variety of neurocognitive domains. Overall results supported the interpretation of Cognition scores as measuring their intended constructs in high performing astronaut analog samples.Moore TM, Basner M, Nasrini J, Hermosillo E, Kabadi S, Roalf DR, McGuire S, Ecker AJ, Ruparel K, Port AM, Jackson CT, Dinges DF, Gur RC. Validation of the Cognition Test Battery for spaceflight in a sample of highly educated adults. Aerosp Med Hum Perform. 2017; 88(10):937-946.
Morin, Ruth T; Axelrod, Bradley N
Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
Mindfulness, burnout, and effects on performance evaluations in internal medicine residents
Braun, Sarah E; Auerbach, Stephen M; Rybarczyk, Bruce; Lee, Bennett; Call, Stephanie
2017-01-01
Purpose Burnout has been documented at high levels in medical residents with negative effects on performance. Some dispositional qualities, like mindfulness, may protect against burnout. The purpose of the present study was to assess burnout prevalence among internal medicine residents at a single institution, examine the relationship between mindfulness and burnout, and provide preliminary findings on the relation between burnout and performance evaluations in internal medicine residents. Methods Residents (n = 38) completed validated measures of burnout at three time points separated by 2 months and a validated measure of dispositional mindfulness at baseline. Program director end-of-year performance evaluations were also obtained on 22 milestones used to evaluate internal medicine resident performance; notably, these milestones have not yet been validated for research purposes; therefore, the investigation here is exploratory. Results Overall, 71.1% (n = 27) of the residents met criteria for burnout during the study. Lower scores on the “acting with awareness” facet of dispositional mindfulness significantly predicted meeting burnout criteria χ2(5) = 11.88, p = 0.04. Lastly, meeting burnout criteria significantly predicted performance on three of the performance milestones, with positive effects on milestones from the “system-based practices” and “professionalism” domains and negative effects on a milestone from the “patient care” domain. Conclusion Burnout rates were high in this sample of internal medicine residents and rates were consistent with other reports of burnout during medical residency. Dispositional mindfulness was supported as a protective factor against burnout. Importantly, results from the exploratory investigation of the relationship between burnout and resident evaluations suggested that burnout may improve performance on some domains of resident evaluations while compromising performance on other domains. Implications and directions for future research are discussed. PMID:28860889
Mindfulness, burnout, and effects on performance evaluations in internal medicine residents.
Braun, Sarah E; Auerbach, Stephen M; Rybarczyk, Bruce; Lee, Bennett; Call, Stephanie
2017-01-01
Burnout has been documented at high levels in medical residents with negative effects on performance. Some dispositional qualities, like mindfulness, may protect against burnout. The purpose of the present study was to assess burnout prevalence among internal medicine residents at a single institution, examine the relationship between mindfulness and burnout, and provide preliminary findings on the relation between burnout and performance evaluations in internal medicine residents. Residents (n = 38) completed validated measures of burnout at three time points separated by 2 months and a validated measure of dispositional mindfulness at baseline. Program director end-of-year performance evaluations were also obtained on 22 milestones used to evaluate internal medicine resident performance; notably, these milestones have not yet been validated for research purposes; therefore, the investigation here is exploratory. Overall, 71.1% (n = 27) of the residents met criteria for burnout during the study. Lower scores on the "acting with awareness" facet of dispositional mindfulness significantly predicted meeting burnout criteria χ2(5) = 11.88, p = 0.04. Lastly, meeting burnout criteria significantly predicted performance on three of the performance milestones, with positive effects on milestones from the "system-based practices" and "professionalism" domains and negative effects on a milestone from the "patient care" domain. Burnout rates were high in this sample of internal medicine residents and rates were consistent with other reports of burnout during medical residency. Dispositional mindfulness was supported as a protective factor against burnout. Importantly, results from the exploratory investigation of the relationship between burnout and resident evaluations suggested that burnout may improve performance on some domains of resident evaluations while compromising performance on other domains. Implications and directions for future research are discussed.
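A minimal sketch of the reported analysis follows, assuming hypothetical column names: logistic regression of burnout status on the five mindfulness facets, with the model chi-square on 5 degrees of freedom as quoted above.

```python
# Sketch of the reported analysis: logistic regression of burnout status on the
# five dispositional-mindfulness facets. Facet and file names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("residents.csv")                    # hypothetical input file
facets = ["observe", "describe", "act_aware", "nonjudge", "nonreact"]

X = sm.add_constant(df[facets])
y = df["burnout"]                                    # 1 = met burnout criteria

fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary())
print(f"model chi2(5) = {fit.llr:.2f}, p = {fit.llr_pvalue:.3f}")
```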
Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A
2018-01-01
Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255
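Since SMIP is defined as the integration of maximal inspiratory pressure over the inspiratory duration, a short numerical sketch may help; the pressure trace below is synthetic example data, not TIRE output from the PrO2 device.

```python
# Sketch: sustained maximal inspiratory pressure (SMIP) as the area under the
# inspiratory pressure-time curve, computed by trapezoidal integration.
# The pressure trace and sampling times are hypothetical example data.
import numpy as np

t = np.linspace(0.0, 8.0, 801)                 # inspiratory duration (s)
pressure = 90.0 * np.exp(-0.15 * t)            # example pressure decay (cmH2O)

mip = pressure.max()                           # maximal inspiratory pressure
inspiratory_duration = t[-1] - t[0]
smip = np.trapz(pressure, t)                   # pressure-time product (cmH2O*s)

print(f"MIP  = {mip:.1f} cmH2O")
print(f"ID   = {inspiratory_duration:.1f} s")
print(f"SMIP = {smip:.1f} cmH2O*s")
```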
Moreira, Paulo A; Oliveira, João Tiago; Cloninger, Kevin M; Azevedo, Carla; Sousa, Alexandra; Castro, Jorge; Cloninger, C Robert
2012-11-01
Personality traits related to persistence and self-regulation of long-term goals can predict academic performance as well or better than measures of intelligence. The 5-factor model has been suggested to outperform some other personality tests in predicting academic performance, but it has not been compared to Cloninger's psychobiological model for this purpose. The aims of this study were, first, to evaluate the psychometric properties of the Junior Temperament and Character Inventory (JTCI) in adolescents in Portugal, and second, to evaluate the comparative validity of age-appropriate versions of Cloninger's 7-factor psychobiological model, Costa and McCrae's five-factor NEO-Personality Inventory-Revised, and Cattell's 16-personality-factor inventory in predicting academic achievement. All dimensions of the Portuguese JTCI had moderate to strong internal consistency. Cattell's sixteen-personality-factor and NEO inventories provided strong construct validity for the JTCI in students younger than 17 years and for the revised adult version (TCI-Revised) in those 17 years and older. High TCI Persistence predicted school grades regardless of age as much or more than intelligence. High TCI Harm Avoidance, high Self-Transcendence, and low TCI Novelty Seeking were additional predictors in students older than 17. The psychobiological model, as measured by the JTCI and TCI-Revised, performed as well or better than other measures of personality or intelligence in predicting academic achievement. Copyright © 2012 Elsevier Inc. All rights reserved.
Abd El-Hay, Soad S; Hashem, Hisham; Gouda, Ayman A
2016-03-01
A novel, simple and robust high-performance liquid chromatography (HPLC) method was developed and validated for simultaneous determination of xipamide (XIP), triamterene (TRI) and hydrochlorothiazide (HCT) in their bulk powders and dosage forms. Chromatographic separation was carried out in less than two minutes. The separation was performed on a RP C-18 stationary phase with an isocratic elution system consisting of 0.03 mol L(-1) orthophosphoric acid (pH 2.3) and acetonitrile (ACN) as the mobile phase in the ratio of 50:50, at 2.0 mL min(-1) flow rate at room temperature. Detection was performed at 220 nm. Validation was performed concerning system suitability, limits of detection and quantitation, accuracy, precision, linearity and robustness. Calibration curves were rectilinear over the range of 0.195-100 μg mL(-1) for all the drugs studied. Recovery values were 99.9, 99.6 and 99.0 % for XIP, TRI and HCT, respectively. The method was applied to simultaneous determination of the studied analytes in their pharmaceutical dosage forms.
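Two of the validation calculations mentioned above (linearity of the calibration curve and percent recovery) reduce to simple least-squares arithmetic; the sketch below uses invented numbers purely for illustration, not the study's data.

```python
# Sketch of two routine validation calculations: calibration linearity (least-
# squares fit of peak area vs. concentration) and percent recovery from spiked
# samples. All numbers are illustrative.
import numpy as np

conc = np.array([0.195, 1.56, 6.25, 25.0, 50.0, 100.0])    # ug/mL
area = np.array([2.1, 16.5, 66.0, 262.0, 528.0, 1050.0])   # peak areas

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.2f}*conc + {intercept:.2f}, r = {r:.4f}")

found = (np.array([525.0, 529.0, 520.0]) - intercept) / slope  # back-calculated
added = np.array([50.0, 50.0, 50.0])                           # spiked amount
recovery = 100.0 * found / added
print("recovery (%):", np.round(recovery, 1))
```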
Herrera, Michael; Ding, Haiqing; McClanahan, Robert; Owens, Jane G; Hunter, Robert P
2007-09-15
A highly sensitive and quantitative LC/MS/MS assay for the determination of tilmicosin in serum has been developed and validated. For sample preparation, 0.2 mL of canine serum was extracted with 3 mL of methyl tert-butyl ether. The organic layer was transferred to a new vessel and dried under nitrogen. The sample was then reconstituted for analysis by high performance liquid chromatography-tandem mass spectrometry. A Phenomenex Luna C8(2) analytical column was used for the chromatographic separation. The eluent was subsequently introduced to the mass spectrometer by electrospray ionization. A single range was validated for 50-5000 ng/mL for support of toxicokinetic studies. The inter-day relative error (inaccuracy) for the LLOQ samples ranged from -5.5% to 0.3%. The inter-day relative standard deviations (imprecision) at the respective LLOQ levels were < or =10.1%.
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Elks, Carl
1995-01-01
An Army Fault Tolerant Architecture (AFTA) has been developed to meet real-time fault tolerant processing requirements of future Army applications. AFTA is the enabling technology that will allow the Army to configure existing processors and other hardware to provide high throughput and ultrahigh reliability necessary for TF/TA/NOE flight control and other advanced Army applications. A comprehensive conceptual study of AFTA has been completed that addresses a wide range of issues including requirements, architecture, hardware, software, testability, producibility, analytical models, validation and verification, common mode faults, VHDL, and a fault tolerant data bus. A Brassboard AFTA for demonstration and validation has been fabricated, and two operating systems and a flight-critical Army application have been ported to it. Detailed performance measurements have been made of fault tolerance and operating system overheads while AFTA was executing the flight application in the presence of faults.
Jacobs, K R; Guillemin, G J; Lovejoy, D B
2018-02-01
Kynurenine 3-monooxygenase (KMO) is a well-validated therapeutic target for the treatment of neurodegenerative diseases, including Alzheimer's disease (AD) and Huntington's disease (HD). This work reports a facile fluorescence-based KMO assay optimized for high-throughput screening (HTS) that achieves a throughput approximately 20-fold higher than the fastest KMO assay currently reported. The screen was run with excellent performance (average Z' value of 0.80) from 110,000 compounds across 341 plates and exceeded all statistical parameters used to describe a robust HTS assay. A subset of molecules was selected for validation by ultra-high-performance liquid chromatography, resulting in the confirmation of a novel hit with an IC50 comparable to that of the well-described KMO inhibitor Ro-61-8048. A medicinal chemistry program is currently underway to further develop our novel KMO inhibitor scaffolds.
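The Z' value quoted (average 0.80) is the standard screening-window statistic of Zhang and colleagues, computed per plate from positive- and negative-control wells; a minimal sketch with illustrative control readings follows.

```python
# Sketch: per-plate Z'-factor (Zhang et al., 1999) from positive- and negative-
# control wells, the statistic reported as averaging 0.80 across 341 plates.
# Control readings here are illustrative.
import numpy as np

pos = np.array([1520., 1490., 1510., 1475., 1530., 1505.])   # e.g. uninhibited signal
neg = np.array([210., 195., 220., 205., 215., 200.])          # e.g. background wells

z_prime = 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"Z' = {z_prime:.2f}")   # values above ~0.5 indicate a robust HTS assay
```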
von Mackensen, S; Czepa, D; Herbsleb, M; Hilberg, T
2010-01-01
Specific research studies for the investigation of physical performance in haemophilic patients are rare. However, these instruments become increasingly more important to evaluate therapeutic treatments. Within the frame of the Haemophilia & Exercise Project (HEP), a new questionnaire, namely HEP-Test-Q, has been developed for the assessment of subjective physical performance in haemophilic adults. In this article, the development and validation of the HEP-Test-Q is described. The development consisted of different phases including item collection, pilot testing and field testing. The preliminary version was pilot-tested in 24 German HEP-participants. Following evaluation and preliminary psychometric analysis, the HEP-Test-Q was revised. The final version consists of 25 items pertaining to the domains 'mobility', 'strength & coordination', 'endurance' and 'body perception', which was administered to 43 German haemophilic patients (43.8 +/- 11.2 years). Psychometric analysis included reliability and validity testing. Convergent validity was tested correlating the HEP-Test-Q with SF-36, Haem-A-QoL, HAL and the Orthopaedic Joint Score. Discriminant validity tested different clinical subgroups. Patients accepted the questionnaire and found it easy to fill in. Psychometric testing revealed good values for reliability in terms of internal consistency (Cronbach's alpha = 0.96) and test-retest reliability (r = 0.90) as well as for convergent validity correlating highly with Haem-A-QoL, HAL and SF-36. Discriminant validity testing showed significant differences for age, hepatitis A and hepatitis B and the number of target joints. HEP-Test-Q is a short and well-accepted questionnaire, assessing subjective physical performance of haemophiliacs, which might be combined with objective assessments to reveal aspects, which cannot be measured objectively, such as body perception.
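The internal-consistency figure quoted (Cronbach's alpha = 0.96) can be computed directly from an item-score matrix; the sketch below uses simulated respondent data with the questionnaire's dimensions (43 respondents, 25 items) purely for illustration.

```python
# Sketch: Cronbach's alpha for internal consistency. `items` is a hypothetical
# respondents x items score matrix simulated to be internally consistent.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(43, 1))                       # 43 respondents
items = latent + 0.5 * rng.normal(size=(43, 25))        # 25 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```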
Gu, Jifeng; Wu, Weijun; Huang, Mengwei; Long, Fen; Liu, Xinhua; Zhu, Yizhun
2018-04-11
A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanism and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed a good linearity with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0-105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1-104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.
Validation of the M. D. Anderson Symptom Inventory multiple myeloma module
2013-01-01
Background The symptom burden associated with multiple myeloma (MM) is often severe. Presently, no instrument comprehensively assesses disease-related and treatment-related symptoms in patients with MM. We sought to validate a module of the M. D. Anderson Symptom Inventory (MDASI) developed specifically for patients with MM (MDASI-MM). Methods The MDASI-MM was developed with clinician input, cognitive debriefing, and literature review, and administered to 132 patients undergoing induction chemotherapy or stem cell transplantation. We demonstrated the MDASI-MM’s reliability (Cronbach α values); criterion validity (item and subscale correlations between the MDASI-MM and the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30) and the EORTC MM module (QLQ-MY20)), and construct validity (differences between groups by performance status). Ratings from transplant patients were examined to demonstrate the MDASI-MM’s sensitivity in detecting the acute worsening of symptoms post-transplantation. Results The MDASI-MM demonstrated excellent correlations with subscales of the 2 EORTC instruments, strong ability to distinguish clinically different patient groups, high sensitivity in detecting change in patients’ performance status, and high reliability. Cognitive debriefing confirmed that the MDASI-MM encompasses the breadth of symptoms relevant to patients with MM. Conclusion The MDASI-MM is a valid, reliable, comprehensive-yet-concise tool that is recommended as a uniform symptom assessment instrument for patients with MM. PMID:23384030
Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R
2017-07-01
OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.
ERIC Educational Resources Information Center
Müller, Nico; Baumeister, Sarah; Dziobek, Isabel; Banaschewski, Tobias; Poustka, Luise
2016-01-01
Impaired social cognition is one of the core characteristics of autism spectrum disorders (ASD). Appropriate measures of social cognition for high-functioning adolescents with ASD are, however, lacking. The Movie for the Assessment of Social Cognition (MASC) uses dynamic social stimuli, ensuring ecological validity, and has proven to be a…
Software Quality Metrics Enhancements. Volume 1
1980-04-01
the mathematical relationships which relate metrics to ratings of the various quality factors) for factors which were not validated previously were... function, provides a mathematical relationship between the metrics and the quality factors. (3) Validation of these normalization functions was performed by... samples; further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date.
Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite
NASA Technical Reports Server (NTRS)
Xia, Tian
2016-01-01
The Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite project will develop a novel Ka-band software defined radio (SDR) that is capable of establishing high data rate inter-satellite links with a throughput of 500 megabits per second (Mb/s) and providing millimeter ranging precision. The system will be designed to operate with high performance and reliability that is robust against various interference effects and network anomalies. The Ka-band radio resulting from this work will improve upon state of the art Ka-band radios in terms of dimensional size, mass and power dissipation, which limit their use in small satellites.
ERIC Educational Resources Information Center
Howell, Abraham L.
2012-01-01
In the high tech factories of today robots can be used to perform various tasks that span a wide spectrum that encompasses the act of performing high-speed, automated assembly of cell phones, laptops and other electronic devices to the compounding, filling, packaging and distribution of life-saving pharmaceuticals. As robot usage continues to…
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell
In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and calculation of criticality parameters such as keff.
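A typical summary statistic in this kind of criticality validation is the bias of calculated keff against benchmark expectations; the sketch below shows that arithmetic with illustrative numbers that are not taken from this report.

```python
# Sketch: bias of calculated k-eff against benchmark (expected) values, a common
# summary in criticality-safety validation. Benchmark numbers are illustrative
# only, not results from this validation.
import numpy as np

k_benchmark = np.array([1.0000, 0.9998, 1.0002, 1.0001, 0.9997])
k_calc      = np.array([0.9981, 0.9975, 0.9990, 0.9984, 0.9978])

diff = k_calc - k_benchmark
bias = diff.mean()
bias_uncertainty = diff.std(ddof=1) / np.sqrt(len(diff))

print(f"bias = {bias:+.4f} +/- {bias_uncertainty:.4f}")
# A negative bias means the code under-predicts k-eff for these benchmarks;
# an upper subcritical limit would account for the bias and its uncertainty.
```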
Yasukawa, Keiko; Shimosawa, Tatsuo; Okubo, Shigeo; Yatomi, Yutaka
2018-01-01
Background Human mercaptalbumin and human non-mercaptalbumin have been reported as markers for various pathological conditions, such as kidney and liver diseases. These markers play important roles in redox regulations throughout the body. Despite the recognition of these markers in various pathophysiologic conditions, the measurements of human mercaptalbumin and non-mercaptalbumin have not been popular because of the technical complexity and long measurement time of conventional methods. Methods Based on previous reports, we explored the optimal analytical conditions for a high-performance liquid chromatography method using an anion-exchange column packed with a hydrophilic polyvinyl alcohol gel. The method was then validated using performance tests as well as measurements of various patients' serum samples. Results We successfully established a reliable high-performance liquid chromatography method with an analytical time of only 12 min per test. The repeatability (within-day variability) and reproducibility (day-to-day variability) were 0.30% and 0.27% (CV), respectively. A very good correlation was obtained with the results of the conventional method. Conclusions A practical method for the clinical measurement of human mercaptalbumin and non-mercaptalbumin was established. This high-performance liquid chromatography method is expected to be a powerful tool enabling the expansion of clinical usefulness and ensuring the elucidation of the roles of albumin in redox reactions throughout the human body.
Bai, Lu; Guo, Sen; Liu, Qingchao; Cui, Xueqin; Zhang, Xinxin; Zhang, Li; Yang, Xinwen; Hou, Manwei; Ho, Chi-Tang; Bai, Naisheng
2016-04-01
Polyphenols are important bioactive substances in apple. To explore the profiles of the nine representative polyphenols in this fruit, a high-performance liquid chromatography method has been established and validated. The validated method was successfully applied for the simultaneous characterization and quantification of these nine apple polyphenols in 11 apple extracts, which were obtained from six cultivars from Shaanxi Province, China. The results showed that only abscission of the Fuji apple sample was rich in the nine apple polyphenols, and the polyphenol contents of other samples varied. Although all the samples were collected in the same region, the contents of nine polyphenols were different. The proposed method could serve as a prerequisite for quality control of Malus products. Copyright © 2015. Published by Elsevier B.V.
Srivastava, Nishi; Srivastava, Amit; Srivastava, Sharad; Rawat, Ajay Kumar Singh; Khan, Abdul Rahman
2016-03-01
A rapid, sensitive, selective and robust quantitative densitometric high-performance thin-layer chromatographic method was developed and validated for separation and quantification of syringic acid (SYA) and kaempferol (KML) in the hydrolyzed extracts of Bergenia ciliata and Bergenia stracheyi. The separation was performed on silica gel 60F254 high-performance thin-layer chromatography plates using toluene:ethyl acetate:formic acid (5:4:1, v/v/v) as the mobile phase. The quantification of SYA and KML was carried out using a densitometric reflection/absorption mode at 290 nm. Dense spots of SYA and KML appeared on the developed plate at retention factor values of 0.61 ± 0.02 and 0.70 ± 0.01, respectively. A precise and accurate quantification was performed using linear regression analysis by plotting the peak area vs concentration over 100-600 ng/band (correlation coefficient: r = 0.997, regression coefficient: R² = 0.996) for SYA and 100-600 ng/band (correlation coefficient: r = 0.995, regression coefficient: R² = 0.991) for KML. The developed method was validated in terms of accuracy, recovery and inter- and intraday precision as per International Conference on Harmonisation guidelines. The limits of detection and quantification were 91.63 and 277.67 ng for SYA and 142.26 and 431.09 ng for KML, respectively. The statistical data analysis showed that the method is reproducible and selective for the estimation of SYA and KML in extracts of B. ciliata and B. stracheyi. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
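The LOD and LOQ figures follow the usual ICH-style estimates of 3.3σ/S and 10σ/S, with σ the residual standard deviation of the calibration line and S its slope; a short sketch with invented calibration points is given below.

```python
# Sketch of the ICH-style LOD/LOQ estimate (3.3*sigma/S and 10*sigma/S), where
# sigma is the residual standard deviation of the calibration line and S its
# slope. Calibration points below are illustrative.
import numpy as np

conc = np.array([100., 200., 300., 400., 500., 600.])      # ng/band
area = np.array([1180., 2405., 3560., 4700., 5965., 7080.])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                 # ddof=2: two fitted parameters

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.1f} ng, LOQ = {loq:.1f} ng")
```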
Nonlinear system identification of smart structures under high impact loads
NASA Astrophysics Data System (ADS)
Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon
2013-05-01
The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drake, Richard R.
Vvtools is a suite of testing tools, with a focus on reproducible verification and validation. They are written in pure Python, and contain a test harness and an automated process management tool. Users of vvtools can develop suites of verification and validation tests and run them on small to large high performance computing resources in an automated and reproducible way. The test harness enables complex processes to be performed in each test and even supports a one-level parent/child dependency between tests. It includes a built in capability to manage workloads requiring multiple processors and platforms that use batch queueing systems.
Performance Validation of Version 152.0 ANSER Control Laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1996-01-01
The Actuated Nose Strakes for Enhanced Rolling (ANSER) Control Laws were modified as a result of Phase 3 F/A-18 High Alpha Research Vehicle (HARV) flight testing. The control law modifications for the next software release were designated version 152.0. The Ada implementation was tested in the Hardware-In-the-Loop (HIL) simulation and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations for ANSER control law version 152.0.
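Validation against a batch "truth model" of this kind usually comes down to comparing corresponding output time histories sample by sample against a tolerance; the sketch below illustrates that comparison with hypothetical file names and a hypothetical threshold.

```python
# Sketch: comparing control-law outputs from two implementations (e.g. an Ada
# HIL run vs. a batch Fortran "truth model") sample by sample against a
# tolerance. File names and the tolerance are hypothetical.
import numpy as np

truth = np.loadtxt("truth_model_output.txt")     # columns: time, surface commands
hil   = np.loadtxt("hil_output.txt")

assert truth.shape == hil.shape, "runs must share the same time base"

abs_err = np.abs(hil[:, 1:] - truth[:, 1:])      # skip the time column
worst = abs_err.max(axis=0)

tolerance = 1e-3                                  # hypothetical pass/fail threshold
for channel, err in enumerate(worst, start=1):
    status = "PASS" if err <= tolerance else "FAIL"
    print(f"channel {channel}: max |error| = {err:.3e}  {status}")
```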
Field validation of the dnph method for aldehydes and ketones. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Workman, G.S.; Steger, J.L.
1996-04-01
A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias, and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.
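In the Method 301 approach summarized above, bias is estimated from spiked versus unspiked sampling trains and precision from collocated trains. The sketch below illustrates only the core bias idea with invented numbers; the full Method 301 statistics (correction factors, F- and t-test criteria) are more detailed than this.

```python
# Simplified sketch of the train-spiking idea behind the bias check: compare the
# mean (spiked - unspiked) difference of paired trains with the known spike
# amount and test whether the bias differs from zero. This is an illustration,
# not the full EPA Method 301 procedure; all numbers are invented.
import numpy as np
from scipy import stats

spike_amount = 50.0                                   # amount of analyte spiked
spiked   = np.array([148.0, 151.5, 147.2, 150.8, 149.5, 152.1])
unspiked = np.array([100.3,  99.8, 101.1,  98.9, 100.5,  99.6])

recovered = spiked - unspiked
bias = recovered.mean() - spike_amount
t_stat, p_val = stats.ttest_1samp(recovered, popmean=spike_amount)

print(f"mean recovered = {recovered.mean():.1f}, bias = {bias:+.1f}")
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")   # a significant bias would call for a correction factor
```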
Mrazek, Michael D.; Phillips, Dawa T.; Franklin, Michael S.; Broadway, James M.; Schooler, Jonathan W.
2013-01-01
Mind-wandering is the focus of extensive investigation, yet until recently there has been no validated scale to directly measure trait levels of task-unrelated thought. Scales commonly used to assess mind-wandering lack face validity, measuring related constructs such as daydreaming or behavioral errors. Here we report four studies validating a Mind-Wandering Questionnaire (MWQ) across college, high school, and middle school samples. The 5-item scale showed high internal consistency, as well as convergent validity with existing measures of mind-wandering and related constructs. Trait levels of mind-wandering, as measured by the MWQ, were correlated with task-unrelated thought measured by thought sampling during a test of reading comprehension. In both middle school and high school samples, mind-wandering during testing was associated with worse reading comprehension. By contrast, elevated trait levels of mind-wandering predicted worse mood, less life-satisfaction, greater stress, and lower self-esteem. By extending the use of thought sampling to measure mind-wandering among adolescents, our findings also validate the use of this methodology with younger populations. Both the MWQ and thought sampling indicate that mind-wandering is a pervasive—and problematic—influence on the performance and well-being of adolescents. PMID:23986739
Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand
2016-03-15
A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion-chromatograms. We propose MA-parameters, graphs, and equations to calculate rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
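Because the MEW is expressed in ppm about the theoretical m/z, turning it into absolute extraction bounds is a one-line calculation; the sketch below uses an arbitrary target ion and a 10 ppm window within the range reported, with the window split symmetrically for illustration.

```python
# Sketch: converting a ppm mass-extraction-window (MEW) into absolute m/z bounds
# for an extracted-ion chromatogram, plus the observed mass accuracy in ppm.
# The target m/z and the 10 ppm window are illustrative values.
def mew_bounds(target_mz: float, window_ppm: float) -> tuple[float, float]:
    half_width = target_mz * window_ppm / 2.0 / 1e6
    return target_mz - half_width, target_mz + half_width

lo, hi = mew_bounds(421.0913, window_ppm=10.0)
print(f"extract m/z {lo:.4f} - {hi:.4f}")

measured = 421.0934
ppm_error = (measured - 421.0913) / 421.0913 * 1e6
print(f"mass accuracy = {ppm_error:+.1f} ppm")
```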
Gettman, Matthew T; Pereira, Claudio W; Lipsky, Katja; Wilson, Torrence; Arnold, Jacqueline J; Leibovich, Bradley C; Karnes, R Jeffrey; Dong, Yue
2009-03-01
Structured opportunities for learning communication, teamwork and laparoscopic principles are limited for urology residents. We evaluated and taught teamwork, communication and laparoscopic skills to urology residents in a simulated operating room. Scenarios related to laparoscopy (insufflator failure, carbon dioxide embolism) were developed using mannequins, urology residents and nurses. These scenarios were developed based on Accreditation Council for Graduate Medical Education core competencies and performed in a simulation center. Between the pretest scenario (insufflation failure) and the posttest scenario (carbon dioxide embolism) instruction was given on teamwork, communication and laparoscopic skills. A total of 19 urology residents participated in the training that involved participation in at least 2 scenarios. Performance was evaluated using validated teamwork instruments, questionnaires and videotape analysis. Significant improvement was noted on validated teamwork instruments between scenarios based on resident (pretest 24, posttest 27, p = 0.01) and expert (pretest 16, posttest 25, p = 0.008) evaluation. Increased teamwork and team performance were also noted between scenarios on videotape analysis with significant improvement for adherence to best practice (p = 0.01) and maintenance of positive rapport among team members (p = 0.02). Significant improvement in the setup of the laparoscopic procedure was observed (p = 0.01). Favorable face and content validity was noted for both scenarios. Teamwork, intraoperative communication and laparoscopic skills of urology residents improved during the high fidelity simulation course. Face and content validity of the individual sessions was favorable. In this study high fidelity simulation was effective for assessing and teaching Accreditation Council for Graduate Medical Education core competencies related to intraoperative communication, teamwork and laparoscopic skills.
NASA Astrophysics Data System (ADS)
Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa
2018-03-01
In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, and supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, namely Bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. Each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation and validation with the full set of training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey's test. In general, the results showed that random forest, with a marginal difference compared to Bagged CART and the stochastic gradient boosting model, is the best performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the other methods.
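A comparison of this kind maps naturally onto a repeated cross-validation loop over several classifiers; the sketch below (with hypothetical feature and label arrays, and a bagged decision tree standing in for Bagged CART) illustrates the workflow rather than the study's actual pipeline.

```python
# Sketch: comparing several classifiers on labelled training samples with
# cross-validation, in the spirit of the study's ten runs per method.
# Feature matrix X and labels y are assumed prepared from training polygons.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")     # hypothetical: band values per training sample
y = np.load("labels.npy")       # hypothetical: land-cover class per sample

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "bagged CART": BaggingClassifier(random_state=0),          # bagged decision trees
    "stochastic gradient boosting": GradientBoostingClassifier(subsample=0.7, random_state=0),
    "linear SVM": SVC(kernel="linear"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```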
NASA Astrophysics Data System (ADS)
Viswanath, Kamal; Johnson, Ryan; Kailasanath, Kailas; Malla, Bhupatindra; Gutmark, Ephraim
2017-11-01
The noise from high performance jet engines of both civilian and military aircraft is an area of active concern. Asymmetric exhaust nozzle configurations, in particular rectangular, potentially offer a passive way of modulating the farfield noise and are likely to become more important in the future. High aspect ratio nozzles offer the further benefit of easier airframe integration. In this study we validate the far field noise for ideally and over expanded supersonic jets issuing from a high aspect ratio rectangular nozzle geometry. Validation of the acoustic data is performed against experimentally recorded sound pressure level (SPL) spectra for a host of observer locations around the asymmetric nozzle. Data is presented for a slightly heated jet case for both nozzle pressure ratios. The contrast in the noise profile from low aspect ratio rectangular and circular nozzle jets are highlighted, especially the variation in the azimuthal direction that shows ``quiet'' and ``loud'' planes in the farfield in the peak noise direction. This variation is analyzed in the context of the effect of mixing at the sharp corners, the sense of the vortex pairs setup in the exit plane, and the evolution of the high aspect ratio exit cross-section as it propagates downstream including possible axis-switching. Supported by Office of Naval Research (ONR) through the Computational Physics Task Area under the NRL 6.1 Base Program.
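Far-field validation of this sort compares sound pressure level spectra from simulation probes with microphone data; as a reminder of the underlying arithmetic, the sketch below computes an overall SPL and a narrowband spectrum from a synthetic pressure signal referenced to 20 µPa.

```python
# Sketch: overall sound pressure level (OASPL) and a narrowband SPL spectrum
# from a far-field pressure time series, referenced to 20 uPa. The signal here
# is synthetic; in a validation it would come from a simulation probe or a
# microphone recording.
import numpy as np
from scipy.signal import welch

fs = 200_000.0                                    # sample rate (Hz)
t = np.arange(0, 0.5, 1.0 / fs)
p = 20.0 * np.random.default_rng(0).normal(size=t.size)   # synthetic pressure (Pa)

p_ref = 20e-6                                     # reference pressure (Pa)
oaspl = 20.0 * np.log10(np.sqrt(np.mean(p**2)) / p_ref)

f, pxx = welch(p, fs=fs, nperseg=4096)            # PSD in Pa^2/Hz
df = f[1] - f[0]
spl = 10.0 * np.log10(pxx * df / p_ref**2)        # band SPL per frequency bin (dB)

print(f"OASPL = {oaspl:.1f} dB re 20 uPa")
print(f"peak narrowband SPL = {spl.max():.1f} dB at {f[np.argmax(spl)]:.0f} Hz")
```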
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2016-09-01
Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
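The two-way (unfolded) calibration mentioned above flattens each sample's time-by-wavelength matrix into a vector before regression; the sketch below illustrates that step with a synthetic data cube and scikit-learn's PLS regression, standing in for the unfolded partial least squares used in the study.

```python
# Sketch of the two-way (unfolded) calibration: each sample's time x wavelength
# matrix is flattened to a vector and regressed on the known concentrations with
# partial least squares. The data cube here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

n_samples, n_times, n_wavelengths = 20, 300, 60
rng = np.random.default_rng(1)
cube = rng.random((n_samples, n_times, n_wavelengths))      # hypothetical tensor
conc = rng.random((n_samples, 2))                           # e.g. ciprofloxacin, ornidazole

X = cube.reshape(n_samples, -1)                             # unfold: samples x (time*wavelength)
pls = PLSRegression(n_components=3).fit(X, conc)
pred = pls.predict(X)
print("fitted concentrations for sample 0:", np.round(pred[0], 3))
```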
Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang
2017-01-01
Purpose We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017
Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo
2017-01-01
We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.
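The calculator's workflow (fit a logistic model on the development cohort, then assess discrimination on the external validation cohort) can be sketched as below; the predictor names, files, and 5% cut-off handling are hypothetical placeholders rather than the KPCRC-HG implementation itself.

```python
# Sketch of the risk-calculator workflow: fit a logistic regression on the
# development cohort, score the external validation cohort, and compute the AUC.
# Predictor and file names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

dev = pd.read_csv("development_cohort.csv")        # hypothetical files
val = pd.read_csv("validation_cohort.csv")

predictors = ["log_psa", "dre_abnormal", "trus_abnormal", "log_prostate_volume"]
model = LogisticRegression(max_iter=1000).fit(dev[predictors], dev["high_grade_pc"])

risk = model.predict_proba(val[predictors])[:, 1]
print(f"external AUC = {roc_auc_score(val['high_grade_pc'], risk):.2f}")

# A 5% risk cut-off, as discussed, would spare biopsies below that threshold:
spared = (risk < 0.05).sum()
print(f"biopsies avoided at 5% cut-off: {spared} of {len(val)}")
```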
Kang, Homan; Jeong, Sinyoung; Jo, Ahla; Chang, Hyejin; Yang, Jin-Kyoung; Jeong, Cheolhwan; Kyeong, San; Lee, Youn Woo; Samanta, Animesh; Maiti, Kaustabh Kumar; Cha, Myeong Geun; Kim, Taek-Keun; Lee, Sukmook; Jun, Bong-Hyun; Chang, Young-Tae; Chung, Junho; Lee, Ho-Young; Jeong, Dae Hong; Lee, Yoon-Sik
2018-02-01
The immunotargeting ability of antibodies may differ significantly between in vitro and in vivo settings. To select antibody leads with high affinity and specificity, it is necessary to perform in vivo validation of antibody candidates following in vitro antibody screening. Herein, a robust in vivo validation of anti-tetraspanin-8 antibody candidates against human colon cancer using a ratiometric quantification method is reported. The validation is performed on a single mouse and analyzed by multiplexed surface-enhanced Raman scattering using ultrasensitive and near-infrared (NIR)-active surface-enhanced resonance Raman scattering nanoprobes (NIR-SERRS dots). The NIR-SERRS dots are composed of NIR-active labels and Au/Ag hollow-shell assembled silica nanospheres. Of the NIR-SERRS dots, 93% are detectable at a single-particle level, and the signal intensity is 100-fold stronger than that from nonresonant molecule-labeled spherical Au NPs (80 nm). The result of SERRS-based antibody validation is comparable to that of the conventional method using single-photon-emission computed tomography. The NIR-SERRS-based strategy is an alternative validation method which provides cost-effective and accurate multiplexing measurements for antibody-based drug development. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study was aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid: acetonitrile (87:13) as a mobile phase with a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as stationary phase and the detection wavelength at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness by the International Conference on Harmonisation guidelines. In linearity, the excellent correlation coefficient more than 0.999 indicated good fitting of the curve and also good linearity. The intra- and inter-day precision showed < 1% of relative standard deviation of peak area indicated high reliability and reproducibility of the method. The recovery values at three different levels (50%, 100%, and 150%) of spiked samples were found to be 100.47, 100.89, and 100.99, respectively, and low standard deviation value < 1% shows high accuracy of the method. In robustness, the results remain unaffected by small variation in the analytical parameters, which shows the robustness of the method. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with M/Z value of 421. The assay developed by HPLC method is a simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness by International Conference on Harmonization guidelines. This study proved that the developed assay by HPLC method is a simple, rapid and reliable for the quantification of the mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International conference on harmonization, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748
Naveen, P; Lingaraju, H B; Prasad, K Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica , is used as traditional medicine for the treatment of numerous diseases. The present study was aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica . RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid: acetonitrile (87:13) as a mobile phase with a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as stationary phase and the detection wavelength at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness by the International Conference on Harmonisation guidelines. In linearity, the excellent correlation coefficient more than 0.999 indicated good fitting of the curve and also good linearity. The intra- and inter-day precision showed < 1% of relative standard deviation of peak area indicated high reliability and reproducibility of the method. The recovery values at three different levels (50%, 100%, and 150%) of spiked samples were found to be 100.47, 100.89, and 100.99, respectively, and low standard deviation value < 1% shows high accuracy of the method. In robustness, the results remain unaffected by small variation in the analytical parameters, which shows the robustness of the method. Liquid chromatography-mass spectrometry analysis confirmed the presence of mangiferin with M/Z value of 421. The assay developed by HPLC method is a simple, rapid, and reliable for the determination of mangiferin from M. indica . The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica . The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness by International Conference on Harmonization guidelines. This study proved that the developed assay by HPLC method is a simple, rapid and reliable for the quantification of the mangiferin from M. indica . Abbreviations Used: M. indica : Mangifera indica , RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International conference on harmonization, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFCstandard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.
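A hedged sketch of the internal-validation loop described above (repeated cross-validation reporting AUC and calibration slope) is given below, using a penalised logistic regression as the stand-in model; the input arrays are hypothetical, and the calibration slope is taken as the coefficient from regressing outcomes on the logit of the predicted probabilities.

```python
# Sketch: repeated cross-validated AUC and calibration slope for a penalised
# logistic-regression mucositis model. Feature matrix X and binary outcome y
# (severe mucositis) are hypothetical inputs.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

X = np.load("dose_metrics.npy")          # hypothetical dose-volume features
y = np.load("severe_mucositis.npy")      # hypothetical binary outcome

aucs, slopes = [], []
for seed in range(100):                   # 100-iteration cross-validation
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train, test in cv.split(X, y):
        model = LogisticRegression(C=0.1, max_iter=5000).fit(X[train], y[train])
        p = np.clip(model.predict_proba(X[test])[:, 1], 1e-6, 1 - 1e-6)
        aucs.append(roc_auc_score(y[test], p))
        # calibration slope: coefficient of logit(p) when refitting on outcomes
        logit_p = np.log(p / (1 - p))
        cal = sm.Logit(y[test], sm.add_constant(logit_p)).fit(disp=False)
        slopes.append(cal.params[1])

print(f"mean AUC = {np.mean(aucs):.2f} (s.d. {np.std(aucs):.2f})")
print(f"mean calibration slope = {np.mean(slopes):.1f} (s.d. {np.std(slopes):.1f})")
```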
Kontic, Dean; Zenic, Natasa; Uljevic, Ognjen; Sekulic, Damir; Lesnik, Blaz
2017-06-01
Swimming capacities are hypothesized to be important determinants of water polo performance, but there is an evident lack of studies examining different swimming capacities in relation to specific offensive and defensive performance variables in this sport. The aim of this study was to determine the relationship between five swimming capacities and six performance determinants in water polo. The sample comprised 79 high-level youth water polo players (all males, 17-18 years of age). The variables included six performance-related variables (agility in offence and defence, efficacy in offence and defence, polyvalence in offence and defence) and five swimming-capacity tests (water polo sprint test [15 m], swimming sprint test [25 m], short-distance swim [100 m], aerobic endurance [400 m] and an anaerobic lactate endurance test [4 × 50 m]). First, multiple regressions were calculated for one half of the sample and then validated on the remaining half. The 25-m swim was not included in the regression analyses because of multicollinearity with other predictors. The originally calculated regression models were validated for defensive agility (R=0.67 and R=0.55 for the original regression calculation and validation subsample, respectively), offensive agility (R=0.59 and R=0.61), and offensive efficacy (R=0.64 and R=0.58). Anaerobic lactate endurance is a significant predictor of offensive and defensive agility, while the 15-m sprint significantly contributes to offensive efficacy. Swimming capacities were not found to be related to the polyvalence of the players. The most superior offensive performance can be expected from players with a high level of anaerobic lactate endurance and advanced sprinting capacity, while anaerobic lactate endurance is recognized as the most important quality in defensive duties. Future studies should observe players' polyvalence in relation to (theoretical) knowledge of technical and tactical tasks. The results reinforce the need for cross-validation of prediction models in sport and exercise sciences.
Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut
2009-01-01
Background Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324
Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut
2009-07-01
Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be "at risk" using the clinical factors. Copyright 2009 Diabetes Technology Society.
Factor- and Item-Level Analyses of the 38-Item Activities Scale for Kids-Performance
ERIC Educational Resources Information Center
Bagley, Anita M.; Gorton, George E.; Bjornson, Kristie; Bevans, Katherine; Stout, Jean L.; Narayanan, Unni; Tucker, Carole A.
2011-01-01
Aim: Children and adolescents highly value their ability to participate in relevant daily life and recreational activities. The Activities Scale for Kids-performance (ASKp) instrument measures the frequency of performance of 30 common childhood activities, and has been shown to be valid and reliable. A revised and expanded 38-item ASKp (ASKp38)…
ERIC Educational Resources Information Center
Akrofi, Solomon
2016-01-01
In spite of decades of research into high-performance work systems, very few studies have examined the relationship between executive learning and development and organisational performance. In an attempt to close this gap, this study explores the effects of a validated four-dimensional executive learning and development measure on a composite…
Larsson, Helena; Tegern, Matthias; Monnier, Andreas; Skoglund, Jörgen; Helander, Charlotte; Persson, Emelie; Malm, Christer; Broman, Lisbet; Aasa, Ulrika
2015-01-01
The objective of this study was to examine the content validity of commonly used muscle performance tests in military personnel and to investigate the reliability of a proposed test battery. For the content validity investigation, the thirty selected tests were those described in the literature and/or commonly used in the Nordic and North Atlantic Treaty Organization (NATO) countries. Nine selected experts rated, on a four-point Likert scale, the relevance of these tests in relation to five different work tasks: lifting, carrying equipment on the body or in the hands, climbing, and digging. Thereafter, a content validity index (CVI) was calculated for each work task. The results showed excellent CVI (≥0.78) for sixteen tests for one or more of the military work tasks. Three of the tests, the functional lower-limb loading test (the Ranger test), dead-lift with kettlebells, and back extension, showed excellent content validity for four of the work tasks. For the development of a new muscle strength/endurance test battery, these three tests were further supplemented with two other tests, namely the chins and side-bridge tests. The inter-rater reliability was high (intraclass correlation coefficient, ICC2,1 = 0.99) for all five tests. The intra-rater reliability was good to high (ICC3,1 = 0.82-0.96) with an acceptable standard error of measurement (SEM), except for the side-bridge test (SEM% > 15). Thus, the final suggested test battery for a valid and reliable evaluation of soldiers' muscle performance comprised the following four tests: the Ranger test, dead-lift with kettlebells, chins, and back extension. The criterion-related validity of the test battery should be further evaluated for soldiers exposed to varying physical workloads. PMID:26177030
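The content validity index used above is simply the proportion of expert raters who score a test as relevant (3 or 4 on the four-point scale). The sketch below shows that calculation; the ratings are hypothetical, not the study's data.

```python
# Illustrative sketch of the content validity index (CVI): the proportion of
# expert raters giving a 'relevant' rating (3 or 4 on a 4-point scale).
def cvi(ratings, relevant=(3, 4)):
    """CVI = proportion of raters giving a 'relevant' rating."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Nine hypothetical experts rating one test against one work task (e.g. lifting)
expert_ratings = [4, 3, 4, 4, 3, 4, 2, 4, 3]
print(f"CVI = {cvi(expert_ratings):.2f}  (excellent if >= 0.78)")
```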
How Many Batches Are Needed for Process Validation under the New FDA Guidance?
Yang, Harry
2013-01-01
The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
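A simple way to make the idea concrete, though not necessarily the paper's exact formulation, is a beta-binomial calculation: encode Stage 1 knowledge as a Beta prior on the per-batch conformance probability, assume all Stage 2 PPQ batches pass, and pick the smallest number of PPQ batches for which the posterior assurance meets a target. All prior counts and thresholds below are hypothetical.

```python
# Bayesian sketch (assumptions, not the paper's method): choose the number of
# PPQ batches so that, if all of them conform, the posterior probability that
# the long-run conformance rate exceeds a target reaches the desired assurance.
from scipy.stats import beta

# Hypothetical Stage 1 (Process Design) evidence: 23 conforming runs, 0 failures
prior_a, prior_b = 1 + 23, 1 + 0          # Beta(1,1) prior updated with PD data
target_rate = 0.90                        # required long-run conformance rate
assurance = 0.95                          # required posterior probability

for n in range(1, 21):
    post = beta(prior_a + n, prior_b)      # posterior if all n PPQ batches pass
    if post.sf(target_rate) >= assurance:  # P(p > target_rate | data)
        print(f"{n} PPQ batches: P(p > {target_rate}) = {post.sf(target_rate):.3f}")
        break
```

As with any such calculation, the answer is driven by how much prior credit the Stage 1 data are given, which is exactly why a risk-based, life cycle argument is needed to justify the chosen number of batches.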
Hamada, Sophie Rym; Rosa, Anne; Gauss, Tobias; Desclefs, Jean-Philippe; Raux, Mathieu; Harrois, Anatole; Follin, Arnaud; Cook, Fabrice; Boutonnet, Mathieu; Attias, Arie; Ausset, Sylvain; Boutonnet, Mathieu; Dhonneur, Gilles; Duranteau, Jacques; Langeron, Olivier; Paugam-Burtz, Catherine; Pirracchio, Romain; de St Maurice, Guillaume; Vigué, Bernard; Rouquette, Alexandra; Duranteau, Jacques
2018-05-05
Haemorrhagic shock is the leading cause of early preventable death in severe trauma. Delayed treatment is a recognized prognostic factor that can be prevented by efficient organization of care. This study aimed to develop and validate Red Flag, a binary alert identifying blunt trauma patients with a high risk of severe haemorrhage (SH), to be used by the pre-hospital trauma team in order to trigger an adequate intra-hospital standardized haemorrhage control response: massive transfusion protocol and/or immediate haemostatic procedures. A multicentre retrospective study of prospectively collected data from a trauma registry (Traumabase®) was performed. SH was defined as: packed red blood cell (RBC) transfusion in the trauma room, or transfusion ≥ 4 RBC in the first 6 h, or lactate ≥ 5 mmol/L, or immediate haemostatic surgery or interventional radiology, and/or death from haemorrhagic shock. Pre-hospital characteristics were selected using a multiple logistic regression model in a derivation cohort to develop the Red Flag binary alert, whose performance was then confirmed in a validation cohort. Among the 3675 patients of the derivation cohort, 672 (18%) had SH. The final prediction model included five pre-hospital variables: Shock Index ≥ 1, mean arterial blood pressure ≤ 70 mmHg, point-of-care haemoglobin ≤ 13 g/dl, unstable pelvis and pre-hospital intubation. The Red Flag alert is triggered by the presence of any combination of at least two criteria. Its predictive performance was sensitivity 75% (72-79%), specificity 79% (77-80%) and area under the receiver operating characteristic curve 0.83 (0.81-0.84) in the derivation cohort, and did not differ significantly in the independent validation cohort of 2999 patients. The Red Flag alert developed and validated in this study shows high performance in accurately predicting or excluding SH.
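The decision rule itself, as described in the abstract, is a simple count of how many of the five pre-hospital criteria are met. The sketch below encodes that rule; argument names are illustrative, not the registry's field names.

```python
# Sketch of the Red Flag rule described above: the alert fires when at least
# two of the five pre-hospital criteria are met. Field names are illustrative.
def red_flag(shock_index, map_mmhg, poc_hb_g_dl, unstable_pelvis, intubated):
    criteria = [
        shock_index >= 1.0,
        map_mmhg <= 70,
        poc_hb_g_dl <= 13,
        bool(unstable_pelvis),
        bool(intubated),
    ]
    return sum(criteria) >= 2

# Example patient: shock index 1.2, MAP 65 mmHg, Hb 14 g/dl, stable pelvis, not intubated
print(red_flag(1.2, 65, 14, False, False))   # True -> trigger haemorrhage-control response
```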
Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice
2018-01-01
Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate its feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
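The final step described above, the probability of "good" performance in a new population, can be approximated by drawing from the predictive distribution implied by the meta-analysis summary (average performance plus between-population covariance) and counting how often both criteria are met. The sketch below is a simplified Monte Carlo version: all numbers are illustrative, and uncertainty in the pooled estimates themselves is ignored for brevity.

```python
# Simplified sketch: probability of "good" performance (C >= 0.7 and
# calibration slope in [0.9, 1.1]) in a new population, given illustrative
# pooled estimates and between-population covariance.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([0.72, 1.00])                 # pooled C statistic, calibration slope
cov = np.array([[0.0009, 0.0005],             # between-population (co)variances
                [0.0005, 0.0100]])
draws = rng.multivariate_normal(mean, cov, size=100_000)

good = (draws[:, 0] >= 0.7) & (draws[:, 1] >= 0.9) & (draws[:, 1] <= 1.1)
print(f"P(good performance in a new population) = {good.mean():.2f}")
```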
van der Meulen, Ineke; van de Sandt-Koenderman, W Mieke E; Duivenvoorden, Hugo J; Ribbers, Gerard M
2010-01-01
This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive communication partner. To determine the reliability, validity, and sensitivity to change of the Scenario Test and discuss its clinical value. The Scenario Test was administered to 122 persons with aphasia after stroke and to 25 non-aphasic controls. Analyses were performed for the entire group of persons with aphasia, as well as for a subgroup of persons unable to communicate verbally (n = 43). Reliability (internal consistency, test-retest reliability, inter-judge, and intra-judge reliability) and validity (internal validity, convergent validity, known-groups validity) and sensitivity to change were examined using standard psychometric methods. The Scenario Test showed high levels of reliability. Internal consistency (Cronbach's alpha = 0.96; item-rest correlations = 0.58-0.82) and test-retest reliability (ICC = 0.98) were high. Agreement between judges in total scores was good, as indicated by the high inter- and intra-judge reliability (ICC = 0.86-1.00). Agreement in scores on the individual items was also good (square-weighted kappa values 0.61-0.92). The test demonstrated good levels of validity. A principal component analysis for categorical data identified two dimensions, interpreted as general communication and communicative creativity. Correlations with three other instruments measuring communication in aphasia, that is, Spontaneous Speech interview from the Aachen Aphasia Test (AAT), Amsterdam-Nijmegen Everyday Language Test (ANELT), and Communicative Effectiveness Index (CETI), were moderate to strong (0.50-0.85) suggesting good convergent validity. Group differences were observed between persons with aphasia and non-aphasic controls, as well as between persons with aphasia unable to use speech to convey information and those able to communicate verbally; this indicates good known-groups validity. The test was sensitive to changes in performance, measured over a period of 6 months. The data support the reliability and validity of the Scenario Test as an instrument for examining daily-life communication in aphasia. The test focuses on multimodal communication; its psychometric qualities enable future studies on the effect of Alternative and Augmentative Communication (AAC) training in aphasia.
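Cronbach's alpha, the internal-consistency statistic reported above, is computed from the item variances and the variance of the total score. The sketch below shows the calculation on a made-up respondent-by-item matrix.

```python
# Small sketch of Cronbach's alpha from an item score matrix. Scores are made up.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

example = np.array([[2, 3, 3, 2],
                    [1, 1, 2, 1],
                    [3, 3, 3, 3],
                    [2, 2, 3, 2],
                    [0, 1, 1, 0]])
print(f"alpha = {cronbach_alpha(example):.2f}")
```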
Ensor, Joie; Riley, Richard D; Jowett, Sue; Monahan, Mark; Snell, Kym Ie; Bayliss, Susan; Moore, David; Fitzmaurice, David
2016-02-01
Unprovoked first venous thromboembolism (VTE) is defined as VTE in the absence of a temporary provoking factor such as surgery, immobility or another transient risk factor. Recurrent VTE in unprovoked patients is highly prevalent, but easily preventable with oral anticoagulant (OAC) therapy. The unprovoked population is highly heterogeneous in terms of risk of recurrent VTE. The first aim of the project is to review existing prognostic models which stratify individuals by their recurrence risk, therefore potentially allowing tailored treatment strategies. The second aim is to enhance the existing research in this field, by developing and externally validating a new prognostic model for individual risk prediction, using a pooled database containing individual patient data (IPD) from several studies. The final aim is to assess the economic cost-effectiveness of the proposed prognostic model if it is used as a decision rule for resuming OAC therapy, compared with current standard treatment strategies. Standard systematic review methodology was used to identify relevant prognostic model development, validation and cost-effectiveness studies. Bibliographic databases (including MEDLINE, EMBASE and The Cochrane Library) were searched using terms relating to the clinical area and prognosis. Reviewing was undertaken by two reviewers independently using pre-defined criteria. Included full-text articles were data extracted and quality assessed. Critical appraisal of included full texts was undertaken and comparisons made of model performance. A prognostic model was developed using IPD from the pooled database of seven trials. A novel internal-external cross-validation (IECV) approach was used to develop and validate a prognostic model, with external validation undertaken in each of the trials iteratively. Given good performance in the IECV approach, a final model was developed using data from all trials. A Markov patient-level simulation was used to consider the economic cost-effectiveness of using a decision rule (based on the prognostic model) to decide on resumption of OAC therapy (or not). Three full-text articles were identified by the systematic review. Critical appraisal identified methodological and applicability issues; in particular, none of the three existing models had been externally validated. To address this, new prognostic models were sought with external validation. Two potential models were considered: one for use at cessation of therapy (pre D-dimer), and one for use after cessation of therapy (post D-dimer). Model performance measured in the external validation trials showed strong calibration performance for both models. The post D-dimer model performed substantially better in terms of discrimination (c = 0.69), better separating high- and low-risk patients. The economic evaluation identified that a decision rule based on the final post D-dimer model may be cost-effective for patients with a predicted risk of recurrence of over 8% annually; this suggests continued therapy for patients with predicted risks ≥ 8% and cessation of therapy otherwise. The post D-dimer model performed strongly and could be useful to predict individuals' risk of recurrence at any time up to 2-3 years, thereby aiding patient counselling and treatment decisions.
Further research may investigate new predictors to enhance model performance and aim to further externally validate to confirm performance in new, non-trial populations. Finally, it is essential that further research is conducted to develop a model predicting bleeding risk on therapy, to manage the balance between the risks of recurrence and bleeding. This study is registered as PROSPERO CRD42013003494. The National Institute for Health Research Health Technology Assessment programme.
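The internal-external cross-validation idea used above, each study in the pooled IPD is held out in turn as an external validation set while the model is developed on the remaining studies, can be sketched as follows. This uses synthetic data and a plain logistic regression, not the project's variables or modelling choices.

```python
# Hedged sketch of internal-external cross-validation (IECV) on synthetic
# individual participant data pooled from several "studies".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_studies, n_per_study, n_features = 7, 200, 5
X = rng.normal(size=(n_studies * n_per_study, n_features))
true_beta = np.array([1.0, -0.8, 0.5, 0.0, 0.0])
p = 1 / (1 + np.exp(-(X @ true_beta - 0.5)))
y = rng.binomial(1, p)
study = np.repeat(np.arange(n_studies), n_per_study)

for held_out in range(n_studies):
    train, test = study != held_out, study == held_out
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"held-out study {held_out}: external AUC = {auc:.2f}")

# Given acceptable performance across held-out studies, a final model is
# developed on all of the pooled data.
final_model = LogisticRegression(max_iter=1000).fit(X, y)
```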
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) sequencing technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
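To make the kind of checks such a validator performs concrete, the sketch below shows a minimal Python illustration (header lines, non-empty records, legal residue characters for nucleotide data). This is not the FastaValidator Java API, only an illustration of the concept.

```python
# Minimal illustration of FASTA validation checks (not the FastaValidator API).
import re

VALID_RESIDUES = re.compile(r"^[ACGTUNRYSWKMBDHV*-]+$", re.IGNORECASE)  # nucleotide codes

def validate_fasta(lines):
    header, seq_len = None, 0
    for i, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            if header is not None and seq_len == 0:
                return False, f"record '{header}' has no sequence (line {i})"
            header, seq_len = line[1:], 0
        else:
            if header is None:
                return False, f"sequence before any header (line {i})"
            if not VALID_RESIDUES.match(line):
                return False, f"illegal characters on line {i}"
            seq_len += len(line)
    if header is not None and seq_len == 0:
        return False, f"record '{header}' has no sequence"
    return True, "OK"

print(validate_fasta([">seq1", "ACGTACGT", ">seq2", "NNNACGT"]))
```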
Digital pre-compensation techniques enabling high-capacity bandwidth variable transponders
NASA Astrophysics Data System (ADS)
Napoli, Antonio; Berenguer, Pablo Wilke; Rahman, Talha; Khanna, Ginni; Mezghanni, Mahdi M.; Gardian, Lennart; Riccardi, Emilio; Piat, Anna Chiadò; Calabrò, Stefano; Dris, Stefanos; Richter, André; Fischer, Johannes Karl; Sommerkorn-Krombholz, Bernd; Spinnler, Bernhard
2018-02-01
Digital pre-compensation techniques are among the enablers for cost-efficient high-capacity transponders. In this paper we describe various methods to mitigate the impairments introduced by state-of-the-art components within modern optical transceivers. Numerical and experimental results validate their performance and benefits.
Does High School Performance Predict College Math Placement?
ERIC Educational Resources Information Center
Kowski, Lynne E.
2013-01-01
Predicting student success has long been a question of interest for postsecondary admission counselors throughout the United States. Past research has examined the validity of several methods designed for predicting undergraduate success. High school record, standardized test scores, extracurricular activities, and combinations of all three have…
First validation of the PASSPORT training environment for arthroscopic skills.
Tuijthof, Gabriëlle J M; van Sterkenburg, Maayke N; Sierevelt, Inger N; van Oldenrijk, Jakob; Van Dijk, C Niek; Kerkhoffs, Gino M M J
2010-02-01
The demand for high-quality care contrasts with the reduced training time available for residents to develop arthroscopic skills. Simulators are therefore introduced to train skills away from the operating room. In our clinic, a physical simulation environment to Practice Arthroscopic Surgical Skills for Perfect Operative Real-life Treatment (PASSPORT) is being developed. The PASSPORT concept consists of maintaining the normal arthroscopic equipment, replacing the human knee joint by a phantom, and integrating registration devices to provide performance feedback. The first prototype of the knee phantom allows inspection, treatment of the menisci, irrigation, and limb stressing. PASSPORT was evaluated for face and construct validity. Construct validity was assessed by measuring the performance of two groups with different levels of arthroscopic experience (20 surgeons and 8 residents). Participants performed a navigation task five times on PASSPORT, and task times were recorded. Face validity was assessed by completion of a short questionnaire on the participants' impressions and comments for improvements. Construct validity was demonstrated, as the surgeons (median task time 19.7 s [8.0-37.6]) were more efficient than the residents (55.2 s [27.9-96.6]) in task completion for each repetition (Mann-Whitney U test, P < 0.05). The prototype of the knee phantom sufficiently imitated limb outer appearance (79%), portal resistance (82%), and arthroscopic view (81%). Improvements are required for the stressing device and the material of the cruciate ligaments. Our physical simulation environment (PASSPORT) demonstrates its potential to evolve into a training modality. Automated performance feedback is planned for the future.
DC and small-signal physical models for the AlGaAs/GaAs high electron mobility transistor
NASA Technical Reports Server (NTRS)
Sarker, J. C.; Purviance, J. E.
1991-01-01
Analytical and numerical models are developed for the microwave small-signal performance, such as transconductance, gate-to-source capacitance, current gain cut-off frequency and the optimum cut-off frequency, of the AlGaAs/GaAs High Electron Mobility Transistor (HEMT), in both the normal and compressed transconductance regions. The validated I-V characteristics and the small-signal performance of four HEMTs are presented.
ERIC Educational Resources Information Center
Brown, Barbara J.
2013-01-01
The researcher investigated teacher factors contributing to English language arts (ELA) achievement of English language learners (ELLs) over 2 consecutive years, in high and low performing elementary schools with a Hispanic/Latino student population greater than or equal to 30 percent. These factors included personal teacher efficacy, teacher…
Natural language processing in pathology: a scoping review.
Burger, Gerard; Abu-Hanna, Ameen; de Keizer, Nicolette; Cornet, Ronald
2016-07-22
Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text. We reviewed and assessed the use of NLP (natural language processing) for encoding pathology documents. Papers addressing NLP in pathology were retrieved from PubMed, Association for Computing Machinery (ACM) Digital Library and Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; NLP methods used and their validation; software implementations; the performance on the dataset used and any reported use in practice. The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset. Four papers did not specify the method(s) used. 18 of the 26 studies that reported F-measure, recall or precision reported values of over 0.9. Proprietary software was the most frequently mentioned category (14 studies); General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical system use was reported in four papers. Most papers used expert annotation validation. Different methods are used in NLP research in pathology, and good performances, that is, high precision and recall, high retrieval/removal rates, are reported for all of these. Lack of validation and of shared datasets precludes performance comparison. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Patyra, Ewelina; Nebot, Carolina; Gavilán, Rosa Elvira; Cepeda, Alberto; Kwiatek, Krzysztof
2018-05-01
A new multi-compound method for the analysis of veterinary drugs, namely tiamulin, trimethoprim, tylosin, sulfadiazine and sulfamethazine, was developed and validated in medicated feeds. After extraction, the samples were centrifuged, diluted in Milli-Q water, filtered and analysed by high performance liquid chromatography coupled to tandem mass spectrometry. The separation of the analytes was performed on a biphenyl column with a gradient of 0.1% formic acid in acetonitrile and 0.1% formic acid in Milli-Q water. Quantitative validation was done in accordance with the guidelines laid down in European Commission Decision 2002/657/EC. Method performance was evaluated by the following parameters: linearity (R2 > 0.99), precision (repeatability <14% and within-laboratory reproducibility <24%), recovery (73.58-115.21%), sensitivity, limit of detection (LOD), limit of quantification (LOQ), selectivity and expanded measurement uncertainty (k = 2). The validated method was successfully applied to 2 medicated feeds obtained from interlaboratory studies and feed manufacturers from Spain in August 2017. In these samples, tiamulin, tylosin and sulfamethazine were detected at the concentration levels declared by the manufacturers. The developed method can therefore be successfully used to routinely control the content and homogeneity of these antibacterial substances in medicated feed. Abbreviations: AAFCO - Association of American Feed Control Officials; TYL - tylosin; TIAM - tiamulin fumarate; TRIM - trimethoprim; SDZ - sulfadiazine; SMZ - sulfamethazine; UV - ultraviolet detector; FLD - fluorescence detector; HPLC - high performance liquid chromatography; MS/MS - tandem mass spectrometry; LOD - limit of detection; LOQ - limit of quantification; CV - coefficient of variation; SD - standard deviation; U - uncertainty.
Siliquini, R; Saulle, R; Rabacchi, G; Bert, F; Massimi, A; Bulzomì, V; Boccia, A; La Torre, G
2012-01-01
The objective of this pilot study was to evaluate the reliability and validity of a web-based questionnaire for pregnant women as a tool to examine the prevalence of, knowledge of and attitudes towards internet use for health-related purposes in a sample of Italian pregnant women. The questionnaire was composed of 9 sections for a total of 73 items. Reliability was tested and content validity was evaluated using Cronbach's alpha to check internal consistency. Statistical analysis was performed with SPSS 13.0. The questionnaire was administered to 56 pregnant women. The highest Cronbach's alpha was obtained for 61 items (alpha = 0.786; all 73 items: alpha = 0.579). A high proportion of the pregnant women used the internet in general (87.5%), and 92.1% reported using the internet to obtain information about pregnancy (p < 0.0001). The questionnaire showed good reliability in the pilot study and, in terms of internal consistency and validity, appeared to perform well. Given the high prevalence of pregnant women who use the internet to search for information about their pregnancy, professional healthcare workers should give advice on official websites where they can retrieve safe, evidence-based information.
Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists.
Weller, J M; Bloch, M; Young, S; Maze, M; Oyesola, S; Wyner, J; Dob, D; Haire, K; Durbridge, J; Walker, T; Newble, D
2003-01-01
There is increasing emphasis on performance-based assessment of clinical competence. The High Fidelity Patient Simulator (HPS) may be useful for assessment of clinical practice in anaesthesia, but needs formal evaluation of validity, reliability, feasibility and effect on learning. We set out to assess the reliability of a global rating scale for scoring simulator performance in crisis management. Using a global rating scale, three judges independently rated videotapes of anaesthetists in simulated crises in the operating theatre. Five anaesthetists then independently rated subsets of these videotapes. There was good agreement between raters for medical management, behavioural attributes and overall performance. Agreement was high for both the initial judges and the five additional raters. Using a global scale to assess simulator performance, we found good inter-rater reliability for scoring performance in a crisis. We estimate that two judges should provide a reliable assessment. High fidelity simulation should be studied further for assessing clinical performance.
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
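At the core of the algorithm-validation scenario described above is a spatial comparison between an algorithm-generated boundary and a human annotation. The platform performs such comparisons at scale with spatial SQL in a parallel database; the sketch below only illustrates the underlying geometry (overlap as a Jaccard index) on hypothetical coordinates, using the shapely library.

```python
# Tiny illustration of the spatial comparison behind algorithm validation:
# overlap between an algorithm boundary and a human annotation.
from shapely.geometry import Polygon

algorithm_boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
human_annotation = Polygon([(2, 2), (12, 2), (12, 12), (2, 12)])

# Normalization step: repair invalid (e.g. self-intersecting) boundaries first
if not algorithm_boundary.is_valid:
    algorithm_boundary = algorithm_boundary.buffer(0)

intersection = algorithm_boundary.intersection(human_annotation).area
union = algorithm_boundary.union(human_annotation).area
print(f"Jaccard (overlap area / union area) = {intersection / union:.2f}")
```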
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
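The Sigma quality metric referred to above is conventionally computed from the total allowable error, the observed bias and the observed imprecision. The sketch below shows that calculation with illustrative values, not the study's data.

```python
# Sketch of the Sigma metric: sigma = (TEa% - |bias%|) / CV%, where TEa is the
# total allowable error set by the EQA provider. Values are illustrative.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. TEa 30%, observed bias 4%, observed imprecision (CV) 7%
print(f"sigma = {sigma_metric(30, 4, 7):.2f}")
```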
Liyanaarachchi, G V V; Mahanama, K R R; Somasiri, H P P S; Punyasiri, P A N
2018-02-01
This study presents the validation results of a method for the analysis of free amino acids (FAAs) in rice by high-performance liquid chromatography with fluorescence detection, using the o-phthalaldehyde (OPA) reagent and l-theanine as the internal standard (IS). The detection and quantification limits of the method were in the ranges 2-16 μmol/kg and 3-19 μmol/kg, respectively. The method had a wide working range from 25 to 600 μmol/kg for each individual amino acid, and good linearity with regression coefficients greater than 0.999. Precision, measured in terms of repeatability and reproducibility and expressed as percentage relative standard deviation (% RSD), was below 9% for all the amino acids analyzed. The recoveries obtained after fortification at three concentration levels were in the range 75-105%. In comparison to l-norvaline, the findings revealed that l-theanine is suitable as an IS and that the validated method can be used for FAA determination in rice. Copyright © 2017 Elsevier Ltd. All rights reserved.
Assessment of MARMOT. A Mesoscale Fuel Performance Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonks, M. R.; Schwen, D.; Zhang, Y.
2015-04-01
MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted. However, new mesoscale data need to be obtained in order to complete this validation.
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
NASA Astrophysics Data System (ADS)
Brindha, Elumalai; Rajasekaran, Ramu; Aruna, Prakasarao; Koteeswaran, Dornadula; Ganesan, Singaravelu
2017-01-01
Urine has emerged as one of the diagnostically potential biofluids, as it contains many metabolites. As the concentration and the physicochemical properties of the urinary metabolites may vary under pathological transformation, Raman spectroscopic characterization of urine has been exploited as a significant tool in identifying several diseased conditions, including cancers. In the present study, an attempt was made to study the high wavenumber (HWVN) Raman spectroscopic characterization of urine samples of normal subjects and oral premalignant and malignant patients. It is concluded that the urinary metabolites flavoproteins, tryptophan and phenylalanine are responsible for the observed spectral variations between the normal and abnormal groups. Principal component analysis-based linear discriminant analysis was carried out to verify the diagnostic potential of the present technique. The discriminant analysis performed across normal and oral premalignant subjects classified 95.6% of the original and 94.9% of the cross-validated grouped cases correctly. In the second analysis, performed across normal and oral malignant groups, the accuracy for the original and cross-validated grouped cases was 96.4% and 92.1%, respectively. Similarly, the third analysis, performed across the three groups (normal, oral premalignant and malignant), classified 93.3% and 91.2% of the original and cross-validated grouped cases correctly.
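The classification workflow described above, principal component analysis followed by linear discriminant analysis with cross-validation, can be sketched as below. This is a hedged illustration on synthetic "spectra", not the study's Raman data or preprocessing.

```python
# Hedged sketch of PCA-based linear discriminant analysis with cross-validation
# on synthetic spectra standing in for the Raman data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_wavenumbers = 40, 300
normal = rng.normal(0.0, 1.0, size=(n_per_group, n_wavenumbers))
abnormal = rng.normal(0.3, 1.0, size=(n_per_group, n_wavenumbers))
X = np.vstack([normal, abnormal])
y = np.array([0] * n_per_group + [1] * n_per_group)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)   # cross-validated classification accuracy
print(f"cross-validated accuracy = {scores.mean():.2f}")
```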
Determination of Ochratoxin A in Rye and Rye-Based Products by Fluorescence Polarization Immunoassay
Lippolis, Vincenzo; Porricelli, Anna C. R.; Cortese, Marina; Zanardi, Sandro; Pascale, Michelangelo
2017-01-01
A rapid fluorescence polarization immunoassay (FPIA) was optimized and validated for the determination of ochratoxin A (OTA) in rye and rye crispbread. Samples were extracted with a mixture of acetonitrile/water (60:40, v/v) and purified by SPE-aminopropyl column clean-up before performing the FPIA. Overall mean recoveries were 86% and 95% for spiked rye and rye crispbread, respectively, with relative standard deviations lower than 6%. The limit of detection (LOD) of the optimized FPIA was 0.6 μg/kg for both rye and rye crispbread. Good correlations (r > 0.977) were observed between OTA contents in contaminated samples obtained by FPIA and by high-performance liquid chromatography (HPLC) with immunoaffinity clean-up used as the reference method. Furthermore, a single-laboratory validation and small-scale collaborative trials were carried out for the determination of OTA in rye according to Regulation 519/2014/EU laying down procedures for the validation of screening methods. The precision profile of the method, the cut-off level and the rate of false suspect results confirm the satisfactory analytical performance of the assay as a screening method. These findings show that the optimized FPIA is suitable for high-throughput screening and permits reliable quantitative determination of OTA in rye and rye crispbread at levels that fall below the EU regulatory limits. PMID:28954398
2015-01-01
Background microRNA (miRNA) expression plays an influential role in cancer classification and malignancy, and miRNAs are feasible as alternative diagnostic markers for pancreatic cancer, a highly aggressive neoplasm with silent early symptoms, high metastatic potential, and resistance to conventional therapies. Methods In this study, we evaluated the benefits of multi-omics data analysis by integrating miRNA and mRNA expression data in pancreatic cancer. Using support vector machine (SVM) modelling and leave-one-out cross-validation (LOOCV), we evaluated the diagnostic performance of single or multiple markers based on miRNA and mRNA expression profiles from 104 PDAC tissues and 17 benign pancreatic tissues. To select even more reliable and robust markers, we performed validation with independent datasets from the Gene Expression Omnibus (GEO) and The Cancer Genome Atlas (TCGA) data depositories. For validation, miRNA activity was estimated from miRNA-target gene interactions and mRNA expression datasets in pancreatic cancer. Results Using a comprehensive identification approach, we successfully identified 705 multi-markers with powerful diagnostic performance for PDAC. In addition, these marker candidates were annotated with cancer pathways using gene ontology analysis. Conclusions Our prediction models have strong potential for the diagnosis of pancreatic cancer. PMID:26328610
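The evaluation scheme described above, an SVM assessed with leave-one-out cross-validation, can be sketched as follows on synthetic data standing in for the expression profiles; the kernel, feature counts and scoring choices are assumptions for illustration only.

```python
# Hedged sketch of SVM classification evaluated with leave-one-out
# cross-validation (LOOCV) and AUC, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=60, n_features=50, n_informative=8, random_state=0)
loo = LeaveOneOut()
true_labels, scores = [], []
for train, test in loo.split(X):
    model = SVC(kernel="linear")
    model.fit(X[train], y[train])
    scores.append(model.decision_function(X[test])[0])  # held-out decision score
    true_labels.append(y[test][0])

print(f"LOOCV AUC = {roc_auc_score(true_labels, scores):.2f}")
```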
Puscas, Anitta; Hosu, Anamaria; Cimpoiu, Claudia
2013-01-11
Honey is a saturated solution of sugars, used for a long time as a natural source of sugars and is an important ingredient in traditional medicine due to its antimicrobial, anti-inflammatory and antioxidant effects. Therefore, methods for quality control of honey and detection of its adulteration are very important. For this reason, the aim of this study is to develop and validate a new, simple and economical analytical method for detecting the adulteration of some Romanian honeys based on high-performance thin-layer chromatography (HPTLC) combined with image analysis. The proposed method involved the chromatographic separations of glucose, fructose and sucrose on silica gel HPTLC plates, developed twice with ethyl acetate-pyridine-water-acetic acid, 6:3:1:0.5 (v/v/v/v), followed by dipping in an immersion solution. The documentation of plates was performed using TLC visualization device and the images of plates were processed using a digital processor. The developed HPTLC method was validated for selectivity, linearity and range, LOD and LOQ, precision, robustness and accuracy. The method was then applied for quantitative determination of glucose, fructose and sucrose from different types of Romanian honeys, commercially available. Copyright © 2012 Elsevier B.V. All rights reserved.
Whelan, Michelle; Kinsella, Brian; Furey, Ambrose; Moloney, Mary; Cantwell, Helen; Lehotay, Steven J; Danaher, Martin
2010-07-02
A new UHPLC-MS/MS (ultra high performance liquid chromatography coupled to tandem mass spectrometry) method was developed and validated to detect 38 anthelmintic drug residues, consisting of benzimidazoles, avermectins and flukicides. A modified QuEChERS-type extraction method was developed with an added concentration step to detect most of the analytes at <1 microg kg(-1) levels in milk. Anthelmintic residues were extracted into acetonitrile using magnesium sulphate and sodium chloride to induce liquid-liquid partitioning followed by dispersive solid phase extraction for cleanup. The extract was concentrated into dimethyl sulphoxide, which was used as a keeper to ensure analytes remain in solution. Using rapid polarity switching in electrospray ionisation, a single injection was capable of detecting both positively and negatively charged ions in a 13 min run time. The method was validated at two levels: the unapproved use level and at the maximum residue level (MRL) according to Commission Decision (CD) 2002/657/EC criteria. The decision limit (CCalpha) of the method was in the range of 0.14-1.9 and 11-123 microg kg(-1) for drugs validated at unapproved and MRL levels, respectively. The performance of the method was successfully verified for benzimidazoles and levamisole by participating in a proficiency study.
[Balanced scorecard for performance measurement of a nursing organization in a Korean hospital].
Hong, Yoonmi; Hwang, Kyung Ja; Kim, Mi Ja; Park, Chang Gi
2008-02-01
The purpose of this study was to develop a balanced scorecard (BSC) for performance measurement of a Korean hospital nursing organization and to evaluate the validity and reliability of the performance measurement indicators. Two hundred fifty-nine nurses in a Korean hospital participated in a survey questionnaire that included 29 performance evaluation indicators developed by the investigators of this study based on Kaplan and Norton's BSC (1992). Cronbach's alpha was used to test the reliability of the BSC. Exploratory and confirmatory factor analysis with a structural equation model (SEM) was applied to assess the construct validity of the BSC. Cronbach's alpha for the 29 items was .948. Factor analysis of the BSC showed 5 principal components (eigenvalue >1.0) which explained 62.7% of the total variance and included a new component, community service. The SEM analysis results showed that the 5 components were significant for the hospital BSC tool. The high degree of reliability and validity of this BSC suggests that it may be used for performance measurement of a Korean hospital nursing organization. Future studies may consider including a balanced number of nurse managers and staff nurses. Further data analysis on the relationships among factors is recommended.
Waples, Robin S
2010-07-01
Recognition of the importance of cross-validation ('any technique or instance of assessing how the results of a statistical analysis will generalize to an independent dataset'; Wiktionary, en.wiktionary.org) is one reason that the U.S. Securities and Exchange Commission requires all investment products to carry some variation of the disclaimer, 'Past performance is no guarantee of future results.' Even a cursory examination of financial behaviour, however, demonstrates that this warning is regularly ignored, even by those who understand what an independent dataset is. In the natural sciences, an analogue to predicting future returns for an investment strategy is predicting how well a particular algorithm will perform with new data. Once again, the key to developing an unbiased assessment of future performance is testing with independent data--that is, data that were in no way involved in developing the method in the first place. A 'gold-standard' approach to cross-validation is to divide the data into two parts, one used to develop the algorithm, the other used to test its performance. Because this approach substantially reduces the sample size that can be used in constructing the algorithm, researchers often try other variations of cross-validation to accomplish the same ends. As illustrated by Anderson in this issue of Molecular Ecology Resources, however, not all attempts at cross-validation produce the desired result. Anderson used simulated data to evaluate the performance of several software programs designed to identify subsets of loci that can be effective for assigning individuals to their population of origin based on multilocus genetic data. Such programs are likely to become increasingly popular as researchers seek ways to streamline routine analyses by focusing on small sets of loci that contain most of the desired signal. Anderson found that although some of the programs made an attempt at cross-validation, all failed to meet the 'gold standard' of using truly independent data and therefore produced overly optimistic assessments of the power of the selected set of loci--a phenomenon known as 'high grading bias.'
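The high grading bias described above is easy to reproduce: selecting the apparently "best" loci on the full dataset and then cross-validating on that same dataset looks impressive even when there is no signal at all, whereas performing the selection inside each training fold (so the test data remain truly independent) does not. The sketch below demonstrates this on pure-noise data with generic marker selection; it is an illustration of the bias, not a reproduction of Anderson's simulations.

```python
# Demonstration of 'high grading bias': data contain no real signal, so an
# honest estimate should hover around 0.5, but selecting markers on the full
# dataset first inflates the apparent accuracy.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))          # 500 'loci', pure noise
y = rng.integers(0, 2, size=80)         # random population labels

# Biased: pick the 20 'best' loci using ALL the data, then cross-validate
X_best = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=1000), X_best, y, cv=5).mean()

# Honest: do the selection inside each training fold via a pipeline
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"biased estimate = {biased:.2f}")   # typically well above 0.5
print(f"honest estimate = {honest:.2f}")   # close to 0.5
```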
Validation of asthma recording in electronic health records: a systematic review
Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J
2017-01-01
Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
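The validation statistics summarized above come from a 2x2 comparison of the database case definition against the reference standard. The sketch below computes sensitivity, specificity, PPV and NPV from hypothetical counts.

```python
# Sketch of validity statistics from a 2x2 table (counts are hypothetical).
def validity_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# e.g. 90 true positives, 10 false positives, 20 false negatives, 380 true negatives
for name, value in validity_stats(90, 10, 20, 380).items():
    print(f"{name}: {value:.2f}")
```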
NASA Technical Reports Server (NTRS)
Gaddis, Stephen W.; Hudson, Susan T.; Johnson, P. D.
1992-01-01
NASA's Marshall Space Flight Center has established a cold airflow turbine test program to experimentally determine the performance of liquid rocket engine turbopump drive turbines. Testing of the SSME alternate turbopump development (ATD) fuel turbine was conducted for back-to-back comparisons with the baseline SSME fuel turbine results obtained in the first quarter of 1991. Turbine performance, Reynolds number effects, and turbine diagnostics, such as stage reactions and exit swirl angles, were investigated at the turbine design point and at off-design conditions. The test data showed that the ATD fuel turbine test article was approximately 1.4 percent higher in efficiency and flowed 5.3 percent more than the baseline fuel turbine test article. This paper describes the method and results used to validate the ATD fuel turbine aerodynamic design. The results are being used to determine the ATD high pressure fuel turbopump (HPFTP) turbine performance over its operating range, anchor the SSME ATD steady-state performance model, and validate various prediction and design analyses.
Reliable and valid assessment of Lichtenstein hernia repair skills.
Carlsen, C G; Lindorff-Larsen, K; Funch-Jensen, P; Lund, L; Charles, P; Konge, L
2014-08-01
Lichtenstein hernia repair is a common surgical procedure and one of the first procedures performed by a surgical trainee. However, formal assessment tools developed for this procedure are few and sparsely validated. The aim of this study was to determine the reliability and validity of an assessment tool designed to measure surgical skills in Lichtenstein hernia repair. Key issues were identified through a focus group interview. On this basis, an assessment tool with eight items was designed. Ten surgeons and surgical trainees (four experts, three intermediates, and three novices) were video recorded while performing Lichtenstein hernia repair. The videos were blindly and individually assessed by three raters (surgical consultants) using the assessment tool. Based on these assessments, validity and reliability were explored. The internal consistency of the items was high (Cronbach's alpha = 0.97). The inter-rater reliability was very good with an intra-class correlation coefficient (ICC) = 0.93. Generalizability analysis showed a coefficient above 0.8 even with one rater. The coefficient improved to 0.92 if three raters were used. One-way analysis of variance found a significant difference between the three groups, which indicates construct validity (p < 0.001). Lichtenstein hernia repair skills can be assessed blindly by a single rater in a reliable and valid fashion with the new procedure-specific assessment tool. We recommend this tool for future assessment of trainees performing Lichtenstein hernia repair to ensure that the objectives of competency-based surgical training are met.
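As a hedged illustration of the internal-consistency figure reported above, the sketch below computes Cronbach's alpha from an item-score matrix; the scores are randomly generated, so the resulting alpha will be low, unlike the 0.97 obtained from the real, internally consistent ratings:

```python
# Cronbach's alpha for an 8-item assessment scored on 10 performances (scores invented).
import numpy as np

scores = np.random.default_rng(1).integers(1, 6, size=(10, 8)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = performances/subjects, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(scores):.2f}")    # random scores give a low alpha
```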
Individual Differences in Reported Visual Imagery and Memory Performance.
ERIC Educational Resources Information Center
McKelvie, Stuart J.; Demers, Elizabeth G.
1979-01-01
High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)
Offshore Standards and Research Validation | Wind | NREL
Research capabilities: 35 years of wind turbine testing experience; a custom high-speed data acquisition system; turbine testing expertise; instrumentation developed by NREL for high-resolution measurements at sea; and technicians who conduct a wide range of field measurements to verify turbine performance.
Brunelle, Sharon L
2016-01-01
A previously validated method for determination of chondroitin sulfate in raw materials and dietary supplements was submitted to the AOAC Expert Review Panel (ERP) for Stakeholder Panel on Dietary Supplements Set 1 Ingredients (Anthocyanins, Chondroitin, and PDE5 Inhibitors) for consideration of First Action Official Methods(SM) status. The ERP evaluated the single-laboratory validation results against AOAC Standard Method Performance Requirements 2014.009. With recoveries of 100.8-101.6% in raw materials and 105.4-105.8% in finished products and precision of 0.25-1.8% RSDr within-day and 1.6-4.72% RSDr overall, the ERP adopted the method for First Action Official Methods status and provided recommendations for achieving Final Action status.
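For readers unfamiliar with these figures of merit, here is a small worked example (with invented replicate values, not the AOAC data) of how spike recovery and within-day repeatability (RSDr) are computed:

```python
# How recovery (%) and within-day repeatability RSDr (%) are typically derived;
# the replicate measurements below are invented for illustration only.
import statistics

spiked_amount = 100.0                              # amount added to the matrix
replicates = [100.9, 101.4, 100.6, 101.8, 101.1]   # amounts found, same day

mean_found = statistics.mean(replicates)
recovery_pct = 100 * mean_found / spiked_amount
rsd_r_pct = 100 * statistics.stdev(replicates) / mean_found   # within-day RSDr

print(f"recovery = {recovery_pct:.1f}%  RSDr = {rsd_r_pct:.2f}%")
```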
Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L
2016-12-01
A precise and accurate high-performance liquid chromatography (HPLC) quantification method for rifampicin in human plasma was developed and validated using ultraviolet detection after an automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer (pH 5.7)-acetonitrile (35:65, v/v) as mobile phase. The compounds were detected at a wavelength of 335 nm with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied in routine practice for therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.
High-Temperature Strain Sensing for Aerospace Applications
NASA Technical Reports Server (NTRS)
Piazza, Anthony; Richards, Lance W.; Hudson, Larry D.
2008-01-01
Thermal protection systems (TPS) and hot structures are utilizing advanced materials that operate at temperatures beyond the current ability to measure structural performance. Robust strain sensors that operate accurately and reliably beyond 1800 F are needed but do not exist. These shortcomings hinder the ability to validate analysis and modeling techniques and to optimize structural designs. This presentation examines high-temperature strain sensing for aerospace applications and, more specifically, seeks to provide strain data for validating finite element models and thermal-structural analyses. Efforts have been made to develop sensor attachment techniques for relevant structural materials at the small-test-specimen level and to perform laboratory tests to characterize sensors and generate corrections to apply to indicated strains. Areas highlighted in this presentation include sensors, sensor attachment techniques, laboratory evaluation and characterization of strain measurement, and sensor use in large-scale structures.
High performance thin layer chromatography fingerprint analysis of guava (Psidium guajava) leaves
NASA Astrophysics Data System (ADS)
Astuti, M.; Darusman, L. K.; Rafi, M.
2017-05-01
High-performance thin layer chromatography (HPTLC) fingerprint analysis is commonly used for quality control of medicinal plants in terms of identification and authentication. In this study, we developed an HPTLC fingerprint analysis for the identification of guava (Psidium guajava) leaf raw material. A mixture of chloroform, acetone, and formic acid in the ratio 10:2:1 was used as the optimum mobile phase on an HPTLC silica plate, and 13 bands were detected. As reference markers we chose gallic acid (Rf = 0.21) and catechin (Rf = 0.11). The two compounds were detected as pale black bands at 366 nm after derivatization with 10% v/v sulfuric acid (in methanol) reagent. The method met the validation criteria, so it can be used for quality control of guava leaves.
Coran, Silvia A; Mulas, Stefano
2012-11-01
A novel HPTLC-densitometric method was developed for separation and quantitation of primulasaponin I and II in different matrices. HPTLC silica gel 60 F254(S), 20 cm × 10 cm, plates with ethyl acetate:water:formic acid (5:1:1 v/v) as the mobile phase were used. Densitometric determinations were performed in reflectance mode at 540 nm after derivatization with vanillin reagent. The method was validated giving rise to a dependable and high throughput procedure well suited to routine applications. Primulasaponins were quantified in the range of 150-450 ng with RSD of repeatability and intermediate precision between 0.8 and 1.4% and accuracy within the acceptance limits. The method was tested on commercial herbal medicinal preparations claiming to contain primula root extract. Copyright © 2012 Elsevier B.V. All rights reserved.
Sun, Zhi; Kong, Xiangzhen; Zuo, Lihua; Kang, Jian; Hou, Lei; Zhang, Xiaojian
2016-02-01
A novel and rapid microwave extraction and ultra high performance liquid chromatography with tandem mass spectrometry method was developed and validated for the simultaneous determination of 25 bioactive constituents (including two new constituents) in Fructus Alpinia oxyphylla. The optimized conditions of the microwave extraction were a microwave power of 300 W, an extraction temperature of 80°C, a solvent-to-solid ratio of 30 mL/g and an extraction time of 8 min. Separation was achieved on a Waters ACQUITY UPLC(®) HSS C18 column (2.1 mm × 50 mm, 1.8 μm) using gradient elution with a mobile phase consisting of acetonitrile and 1 mM ammonium acetate at a flow rate of 0.2 mL/min. This is the first report of the simultaneous determination of 25 bioactive constituents in Fructus Alpinia oxyphylla by ultra high performance liquid chromatography with tandem mass spectrometry. The method was validated with good linearity and acceptable precision and accuracy. The validated method was successfully applied to determine the contents of the 25 bioactive constituents in Fructus Alpinia oxyphylla from different sources, and the analysis results were classified by hierarchical cluster analysis, which indicated the effect of different cultivation regions on the contents of constituents. This study provides powerful and practical guidance for the quality control of Alpinia oxyphylla and lays the foundation for further research on Alpinia oxyphylla. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Schwartz, C E; Vollmer, T; Lee, H
1999-01-01
To describe the results of a multicenter study that validated two new patient-reported measures of neurologic impairment and disability for use in MS clinical research. Self-reported data can provide a cost-effective means to assess patient functioning, and can be useful for screening patients who require additional evaluation. Thirteen MS centers from the United States and Canada implemented a cross-sectional validation study of two new measures of neurologic function. The Symptom Inventory is a measure of neurologic impairment with six subscales designed to correlate with localization of brain lesion. The Performance Scales measure disability in eight domains of function: mobility, hand function, vision, fatigue, cognition, bladder/bowel, sensory, and spasticity. Measures given for comparison included a neurologic examination (Expanded Disability Status Scale, Ambulation Index, Disease Steps) as well as the patient-reported Health Status Questionnaire and the Quality of Well-being Index. Participants included 274 MS patients and 296 healthy control subjects who were matched to patients on age, gender, and education. Both the Symptom Inventory and the Performance Scales showed high test-retest and internal consistency reliability. Correlational analyses supported the construct validity of both measures. Discriminant function analysis reduced the Symptom Inventory to 29 items without sacrificing reliability and increased its discriminant validity. The Performance Scales explained more variance in clinical outcomes and global quality of life than the Symptom Inventory, and there was some evidence that the two measures complemented each other in predicting Quality of Well-being Index scores. The Symptom Inventory and the Performance Scales are reliable and valid measures.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and the beat classification algorithm is optimized with k-nearest neighbors (k-NN). To support high-performance beat classification on the system, the classification algorithm was parallelized with CUDA to execute on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database was used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while executing 2.5 times faster than a CPU-only detection algorithm.
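The beat classifier itself is conceptually simple; below is a minimal CPU-only sketch (not the authors' CUDA implementation, and using purely hypothetical beat feature vectors) of k-NN classification by majority vote among the nearest annotated beats:

```python
# CPU sketch of k-NN beat classification on hypothetical feature vectors
# extracted around each detected QRS complex.
import numpy as np

def knn_classify(train_X, train_y, query, k=5):
    """Label a query beat by majority vote among its k nearest training beats."""
    dists = np.linalg.norm(train_X - query, axis=1)        # Euclidean distances
    nearest = train_y[np.argsort(dists)[:k]]               # labels of the k closest beats
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
train_X = rng.normal(size=(1000, 16))        # 16 features per annotated beat (toy data)
train_y = rng.integers(0, 2, size=1000)      # 0 = normal, 1 = arrhythmic (toy labels)
print(knn_classify(train_X, train_y, rng.normal(size=16)))
```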
Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza
2015-01-01
Introduction: The importance of content validity in instrument psychometrics and its relevance to reliability have made it an essential step in instrument development. This article attempts to give an overview of the content validity process and to explain its complexity by introducing an example. Methods: We carried out a methodological study to examine the content validity of a patient-centered communication instrument through a two-step process (development and judgment). The first step comprised domain determination, sampling (item generation) and instrument formation; in the second step, the content validity ratio, content validity index and modified kappa statistic were computed. Suggestions of the expert panel and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, the instrument can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines their use during the design and psychometric evaluation of a patient-centered communication instrument. PMID:26161370
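A small worked example of the indices discussed above (I-CVI per item, then the S-CVI averaged across items and the S-CVI under the universal agreement approach), using invented expert ratings on a 1-4 relevance scale:

```python
# Content validity indices from hypothetical expert ratings (1-4 relevance scale).
# Each row is one item; each value is one expert's rating.
ratings = [
    [4, 4, 3, 4, 3],
    [3, 4, 4, 4, 4],
    [2, 3, 4, 3, 4],
]

i_cvis = [sum(r >= 3 for r in item) / len(item) for item in ratings]  # I-CVI per item
s_cvi_ave = sum(i_cvis) / len(i_cvis)                                 # average approach
s_cvi_ua = sum(i == 1.0 for i in i_cvis) / len(i_cvis)                # universal agreement

print("I-CVIs:", i_cvis)
print(f"S-CVI/Ave = {s_cvi_ave:.2f}   S-CVI/UA = {s_cvi_ua:.2f}")
```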
Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya
2018-03-08
A literature review of finite element analysis (FEA) studies of dental implants with their model validation process was performed to establish the criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation" and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.
Reliability and validity of two isometric squat tests.
Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U
2002-05-01
The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficient [ICC](IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p < 0.01), but a weak relation between squat and FHS test performances (r < 0.55). There was also no difference between observed 1-RM values and those predicted by our regression equations. Errors in predicting 1-RM performance were in the order of 8.5% (standard error of the estimate [SEE] = 13.8 kg) and 7.3% (SEE = 19.4 kg) for IS and IFHS, respectively. Correlations between isometric and 1-RM tests were not of sufficient size to indicate high validity of the isometric tests. Together the results suggest that IS and IFHS tests could detect small differences in multijoint isometric strength between subjects, or performance changes over time, and that the scores in the isometric tests are well related to 1-RM performance. However, there was a small error when predicting 1-RM performance from isometric performance, and these tests have not been shown to discriminate between small changes in dynamic strength. The weak relation between squat and FHS test performance can be attributed to differences in the movement patterns of the tests.
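For illustration, here is how a 1-RM prediction equation and its standard error of the estimate (SEE), of the kind quoted above, can be derived; the force and load values below are invented and are not the study's data:

```python
# Simple linear regression predicting 1-RM load from peak isometric force,
# with the standard error of the estimate (SEE). All values are hypothetical.
import numpy as np

iso_force = np.array([1800., 2100., 2400., 2600., 2900., 3100.])   # N
one_rm   = np.array([120., 140., 155., 165., 185., 195.])          # kg

slope, intercept = np.polyfit(iso_force, one_rm, 1)
pred = slope * iso_force + intercept
see = np.sqrt(np.sum((one_rm - pred) ** 2) / (len(one_rm) - 2))     # kg

print(f"1-RM = {slope:.3f} * isometric force + {intercept:.1f};  SEE = {see:.1f} kg")
```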
Individual Passive Chemical Sampler Testing Continued Chemical Agent and TIC Performance Validation
2002-04-01
Figure 4.4: Day 0 Adsorption and Recovery Comparison Between Gore Low-Level and Gore High-Level Samplers at Varying Temperatures. Figure 4.5: Day 0 Adsorption and Recovery Comparison Between SKC High-Level and Gore High-Level Samplers.
Dulan, Genevieve; Rege, Robert V; Hogg, Deborah C; Gilberg-Fisher, Kristine K; Tesfay, Seifu T; Scott, Daniel J
2012-04-01
The authors previously developed a comprehensive, proficiency-based robotic training curriculum that aimed to address 23 unique skills identified via task deconstruction of robotic operations. The purpose of this study was to determine the content and face validity of this curriculum. Expert robotic surgeons (n = 12) rated each deconstructed skill regarding relevance to robotic operations, were oriented to the curricular components, performed 3 to 5 repetitions on the 9 exercises, and rated each exercise. In terms of content validity, experts rated all 23 deconstructed skills as highly relevant (4.5 on a 5-point scale). Ratings for the 9 inanimate exercises indicated moderate to thorough measurement of designated skills. For face validity, experts indicated that each exercise effectively measured relevant skills (100% agreement) and was highly effective for training and assessment (4.5 on a 5-point scale). These data indicate that the 23 deconstructed skills accurately represent the appropriate content for robotic skills training and strongly support content and face validity for this curriculum. Copyright © 2012. Published by Elsevier Inc.
Traceability validation of a high speed short-pulse testing method used in LED production
NASA Astrophysics Data System (ADS)
Revtova, Elena; Vuelban, Edgar Moreno; Zhao, Dongsheng; Brenkman, Jacques; Ulden, Henk
2017-12-01
Industrial processes for LED (light-emitting diode) production include testing of LED light output performance. Most of these processes are monitored and controlled by measuring LEDs optically, electrically and thermally with high-speed short-pulse measurement methods. However, these methods are not standardized, and much of the information is proprietary, making it impossible for third parties, such as NMIs, to trace and validate them. These techniques are known to have traceability issues and metrological inadequacies. Often, because of these, the claimed performance specifications of LEDs are overstated, which results in manufacturers experiencing customer dissatisfaction and a large percentage of failures in daily use of LEDs. In this research a traceable setup is developed to validate one of the high-speed testing techniques, investigate inadequacies and work out the traceability issues. A well-characterised short square pulse of 25 ms is applied to chip-on-board (CoB) LED modules to investigate the light output and colour content. We conclude that the short-pulse method is very efficient provided that a well-defined electrical current pulse is applied and the stabilization time of the device is accurately determined a priori. No colour shift is observed. The largest contributors to the measurement uncertainty include a badly-defined current pulse and an inaccurate calibration factor.
Modeling the Space Debris Environment with MASTER-2009 and ORDEM2010
NASA Technical Reports Server (NTRS)
Flegel, S.; Gelhaus, J.; Wiedemann, C.; Mockel, M.; Vorsmann, P.; Krisko, P.; Xu, Y. -L.; Horstman, M. F.; Opiela, J. N.; Matney, M.;
2010-01-01
Spacecraft analysis with ORDEM2010 uses a high-fidelity population model to compute risk to on-orbit assets. The ORDEM2010 GUI allows visualization of spacecraft flux in 2-D and 1-D. The population was produced using a Bayesian statistical approach with measured and modeled environment data. Validation for sizes < 1 mm was performed using Shuttle window and radiator impact measurements. Validation for sizes > 1 mm is ongoing.
Clinical prognostic rules for severe acute respiratory syndrome in low- and high-resource settings.
Cowling, Benjamin J; Muller, Matthew P; Wong, Irene O L; Ho, Lai-Ming; Lo, Su-Vui; Tsang, Thomas; Lam, Tai Hing; Louie, Marie; Leung, Gabriel M
2006-07-24
An accurate prognostic model for patients with severe acute respiratory syndrome (SARS) could provide a practical clinical decision aid. We developed and validated prognostic rules for both high- and low-resource settings based on data available at the time of admission. We analyzed data on all 1755 and 291 patients with SARS in Hong Kong (derivation cohort) and Toronto (validation cohort), respectively, using a multivariable logistic scoring method with internal and external validation. Scores were assigned on the basis of patient history in a basic model, and a full model additionally incorporated radiological and laboratory results. The main outcome measure was death. Predictors for mortality in the basic model included older age, male sex, and the presence of comorbid conditions. Additional predictors in the full model included haziness or infiltrates on chest radiography, less than 95% oxygen saturation on room air, high lactate dehydrogenase level, and high neutrophil and low platelet counts. The basic model had an area under the receiver operating characteristic (ROC) curve of 0.860 in the derivation cohort, which was maintained on external validation with an area under the ROC curve of 0.882. The full model improved discrimination with areas under the ROC curve of 0.877 and 0.892 in the derivation and validation cohorts, respectively. The model performs well and could be useful in assessing prognosis for patients who are infected with re-emergent SARS.
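To make the methodology concrete, the sketch below fits a logistic scoring model on a derivation cohort and checks discrimination on a separate validation cohort via the area under the ROC curve. The patients are simulated and the predictor set is deliberately simplified; only the cohort sizes are taken from the abstract:

```python
# Derivation/validation of a logistic prognostic model with ROC AUC (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(n):
    X = np.column_stack([rng.normal(60, 15, n),      # age
                         rng.integers(0, 2, n),      # male sex
                         rng.integers(0, 2, n)])     # comorbid condition
    logit = -6 + 0.05 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))     # death indicator
    return X, y.astype(int)

X_dev, y_dev = simulate(1755)    # derivation-cohort size from the abstract
X_val, y_val = simulate(291)     # validation-cohort size from the abstract
model = LogisticRegression().fit(X_dev, y_dev)
print("derivation AUC:", round(roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]), 3))
print("validation AUC:", round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 3))
```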
Loureiro, Luiz de França Bahia; de Freitas, Paulo Barbosa
2016-04-01
Badminton requires open and fast actions toward the shuttlecock, yet no agility test specific to badminton movements exists. To develop an agility test that simultaneously assesses perception and motor capacity and examine the test's concurrent and construct validity and its test-retest reliability. The Badcamp agility test consists of running as fast as possible to 6 targets placed on the corners and middle points of a rectangular area (5.6 × 4.2 m) from the start position located in the center of it, following visual stimuli presented in a luminous panel. The authors recruited 43 badminton players (17-32 y old) to evaluate concurrent (with shuttle-run agility test--SRAT) and construct validity and test-retest reliability. Results revealed that Badcamp presents concurrent and construct validity, as its performance is strongly related to SRAT (ρ = 0.83, P < .001), with performance of experts being better than that of nonexpert players (P < .01). In addition, Badcamp is reliable, as no difference (P = .07) and a high intraclass correlation (ICC = .93) were found in the performance of the players on 2 different occasions. The findings indicate that Badcamp is an effective, valid, and reliable tool to measure agility, allowing coaches and athletic trainers to evaluate players' athletic condition and training effectiveness and possibly detect talented individuals in this sport.
Aguirre-Gamboa, Raul; Gomez-Rueda, Hugo; Martínez-Ledesma, Emmanuel; Martínez-Torteya, Antonio; Chacolla-Huaringa, Rafael; Rodriguez-Barrientos, Alberto; Tamez-Peña, José G.; Treviño, Victor
2013-01-01
Validation of multi-gene biomarkers for clinical outcomes is one of the most important issues for cancer prognosis. An important source of information for virtual validation is the high number of available cancer datasets. Nevertheless, assessing the prognostic performance of a gene expression signature along datasets is a difficult task for Biologists and Physicians and also time-consuming for Statisticians and Bioinformaticians. Therefore, to facilitate performance comparisons and validations of survival biomarkers for cancer outcomes, we developed SurvExpress, a cancer-wide gene expression database with clinical outcomes and a web-based tool that provides survival analysis and risk assessment of cancer datasets. The main input of SurvExpress is only the biomarker gene list. We generated a cancer database collecting more than 20,000 samples and 130 datasets with censored clinical information covering tumors over 20 tissues. We implemented a web interface to perform biomarker validation and comparisons in this database, where a multivariate survival analysis can be accomplished in about one minute. We show the utility and simplicity of SurvExpress in two biomarker applications for breast and lung cancer. Compared to other tools, SurvExpress is the largest, most versatile, and quickest free tool available. SurvExpress web can be accessed in http://bioinformatica.mty.itesm.mx/SurvExpress (a tutorial is included). The website was implemented in JSP, JavaScript, MySQL, and R. PMID:24066126
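As a rough, hedged illustration of the kind of analysis SurvExpress automates (not its actual implementation, which runs R behind a JSP web front end), the sketch below fits a Cox model to simulated expression and survival data with the lifelines Python library, derives a per-patient risk score, and compares high- and low-risk groups with a log-rank test:

```python
# Multi-gene risk assessment sketch on simulated data (not the SurvExpress code).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gene1": rng.normal(size=n), "gene2": rng.normal(size=n),   # biomarker expression
    "time": rng.exponential(24, size=n),                        # follow-up time, months
    "event": rng.integers(0, 2, size=n),                        # 1 = death observed
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)                 # per-patient risk score
high = risk > risk.median()                           # split into high/low risk groups
res = logrank_test(df.time[high], df.time[~high],
                   event_observed_A=df.event[high], event_observed_B=df.event[~high])
print(cph.summary[["coef", "p"]])
print("log-rank p =", round(res.p_value, 3))
```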
Papageorgiou, Charalabos; Rabavilas, Andreas D; Stachtea, Xanthy; Giannakakis, Giorgos A; Kyprianou, Miltiades; Papadimitriou, George N; Stefanis, Costas N
2012-04-01
The objective of this study was to investigate the link between the Eysenck Personality Questionnaire (EPQ) scores and depressive symptomatology with reasoning performance induced by a task including valid and invalid Aristotelian syllogisms. The EPQ and the Zung Depressive Scale (ZDS) were completed by 48 healthy subjects (27 male, 21 female) aged 33.5 ± 9.0 years. Additionally, the subjects engaged in two reasoning tasks (valid vs. invalid syllogisms). Analysis showed that the judgment of invalid syllogisms is a more difficult task than that of valid syllogisms (65.1% vs. 74.6% correct judgments, respectively, p < 0.01). In both conditions, the subjects' degree of confidence is significantly higher when they make a correct judgment than when they make an incorrect judgment (83.8 ± 11.2 vs. 75.3 ± 17.3, p < 0.01). Subjects with extraversion as measured by the EPQ and high sexual desire as rated by the corresponding ZDS subscale are more prone to make incorrect judgments on the valid syllogisms, while, at the same time, they are more confident in their responses. The effects of extraversion/introversion and sexual desire on the outcome measures of the valid condition are not commutative but additive. These findings indicate that extraversion/introversion and sexual desire variations may have a detrimental effect on reasoning performance.
Thermomechanical simulations and experimental validation for high speed incremental forming
NASA Astrophysics Data System (ADS)
Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia
2016-10-01
Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of an FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional-speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.
Validation of a short-term memory test for the recognition of people and faces.
Leyk, D; Sievert, A; Heiss, A; Gorges, W; Ridder, D; Alexander, T; Wunderlich, M; Ruther, T
2008-08-01
Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant and high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, margins of error are narrow and inadequate performance may have grave consequences. However, conventional short-term memory tests frequently use abstract settings with little relevance to working situations. They may thus be unable to capture task-specific decrements. The aim of the study was to devise and validate a new test, better reflecting job specifics and employing appropriate stimuli. After 1.5 s (short) or 4.5 s (long) presentation, a set of seven portraits of faces had to be memorised for comparison with two control stimuli. Stimulus appearance followed 2 s (first item) and 8 s (second item) after set presentation. Twenty eight subjects (12 male, 16 female) were tested at seven different times of day, 3 h apart. Recognition rates were above 60% even for the least favourable condition. Recognition was significantly better in the 'long' condition (+10%) and for the first item (+18%). Recognition time showed significant differences (10%) between items. Minor effects of learning were found for response latencies only. Based on occupationally relevant metrics, the test displayed internal and external validity, consistency and suitability for further use in test/retest scenarios. In public security, especially where access to restricted areas is monitored, margins of error are narrow and operator performance must remain high and level. Appropriate schedules for personnel, based on valid test results, are required. However, task-specific data and performance tests, permitting the description of task specific decrements, are not available. Commonly used tests may be unsuitable due to undue abstraction and insufficient reference to real-world conditions. Thus, tests are required that account for task-specific conditions and neurophysiological characteristics.
Confidence in outcome estimates from systematic reviews used in informed consent.
Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette
2016-12-01
Evidence-based dentistry now guides informed consent in which clinicians are obliged to provide patients with the most current, best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet, clinicians require that the estimates provided from systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology either by scoring instruments (Timmer analysis) or by checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal oral health care treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.
Spatio-temporal modeling of chronic PM 10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
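As a simplified, hedged stand-in for the evaluation reported above (not the authors' GAM-based spatio-temporal model), the sketch below estimates cross-validated R-squared, bias, and precision for predicted concentrations using generic covariates, a generic regressor, and simulated data:

```python
# Cross-validated R^2, bias, and precision for a generic exposure-prediction model
# on simulated monthly concentrations (stand-in for the authors' smoothing model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                  # GIS + meteorological covariates
y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 6, 500)     # simulated monthly PM10

pred = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=10)
resid = pred - y
cv_r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(f"CV R^2 = {cv_r2:.2f}  bias = {resid.mean():.2f}  precision (SD) = {resid.std(ddof=1):.2f}")
```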
[An instrument in Spanish to evaluate the performance of clinical teachers by students].
Bitran, Marcela; Mena, Beltrán; Riquelme, Arnoldo; Padilla, Oslando; Sánchez, Ignacio; Moreno, Rodrigo
2010-06-01
The modernization of clinical teaching has called for the creation of faculty development programs, and the design of suitable instruments to evaluate clinical teachers' performance. To report the development and validation of an instrument in Spanish designed to measure the students' perceptions of their clinical teachers' performance and to provide them with feedback to improve their teaching practices. In a process that included the active participation of authorities, professors in charge of courses and internships, clinical teachers, students and medical education experts, we developed a 30-item questionnaire called MEDUC30 to evaluate the performance of clinical teachers by their students. The internal validity was assessed by factor analysis of 5214 evaluations of 265 teachers, gathered from 2004 to 2007. The reliability was measured with the Cronbach's alpha coefficient and the generalizability coefficient (g). MEDUC30 had good content and construct validity. Its internal structure was compatible with four factors: patient-centered teaching, teaching skills, assessment skills and learning climate, and it proved to be consistent with the structure anticipated by the theory. The scores were highly reliable (Cronbach's alpha: 0.97); five evaluations per teacher were sufficient to reach a reliability coefficient (g) of 0.8. MEDUC30 is a valid, reliable and useful instrument to evaluate the performance of clinical teachers. To our knowledge, this is the first instrument in Spanish for which solid validity and reliability evidences have been reported. We hope that MEDUC30 will be used to improve medical education in Spanish-speaking medical schools, providing teachers a specific feedback upon which to improve their pedagogical practice, and authorities with valuable information for the assessment of their faculty.
The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)
NASA Technical Reports Server (NTRS)
2005-01-01
Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3-25.8 mg m-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate a pigment is validated, and the measurements that must be made during sample analysis to confirm a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily-measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establish a) the overall performance capability of the method, and b) whether or not the capability is consistent with the required accuracy objectives.
Brett, Benjamin L; Solomon, Gary S
2017-04-01
Research findings to date on the stability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) Composite scores have been inconsistent, requiring further investigation. The use of test validity criteria across these studies also has been inconsistent. Using multiple measures of stability, we examined test-retest reliability of repeated ImPACT baseline assessments in high school athletes across various validity criteria reported in previous studies. A total of 1146 high school athletes completed baseline cognitive testing using the online ImPACT test battery at two time periods of approximately two-year intervals. No participant sustained a concussion between assessments. Five forms of validity criteria used in previous test-retest studies were applied to the data, and differences in reliability were compared. Intraclass correlation coefficients (ICCs) ranged in composite scores from .47 (95% confidence interval, CI [.38, .54]) to .83 (95% CI [.81, .85]) and showed little change across a two-year interval for all five sets of validity criteria. Regression based methods (RBMs) examining the test-retest stability demonstrated a lack of significant change in composite scores across the two-year interval for all forms of validity criteria, with no cases falling outside the expected range of 90% confidence intervals. The application of more stringent validity criteria does not alter test-retest reliability, nor does it account for some of the variation observed across previously performed studies. As such, use of the ImPACT manual validity criteria should be utilized in the determination of test validity and in the individualized approach to concussion management. Potential future efforts to improve test-retest reliability are discussed.
Pitchford, Nicola J; Outhwaite, Laura A
2016-01-01
Assessment of cognitive and motor functions is fundamental for developmental and neuropsychological profiling. Assessments are usually conducted on an individual basis, with a trained examiner, using standardized paper and pencil tests, and can take up to an hour or more to complete, depending on the nature of the test. This makes traditional standardized assessments of child development largely unsuitable for use in low-income countries. Touch screen tablets afford the opportunity to assess cognitive functions in groups of participants, with untrained administrators, with precision recording of responses, thus automating the assessment process. In turn, this enables cognitive profiling to be conducted in contexts where access to qualified examiners and standardized assessments are rarely available. As such, touch screen assessments could provide a means of assessing child development in both low- and high-income countries, which would afford cross-cultural comparisons to be made with the same assessment tool. However, before touch screen tablet assessments can be used for cognitive profiling in low-to-high-income countries they need to be shown to provide reliable and valid measures of performance. We report the development of a new touch screen tablet assessment of basic cognitive and motor functions for use with early years primary school children in low- and high-income countries. Measures of spatial intelligence, visual attention, short-term memory, working memory, manual processing speed, and manual coordination are included as well as mathematical knowledge. To investigate if this new touch screen assessment tool can be used for cross-cultural comparisons we administered it to a sample of children ( N = 283) spanning standards 1-3 in a low-income country, Malawi, and a smaller sample of children ( N = 70) from first year of formal schooling from a high-income country, the UK. Split-half reliability, test-retest reliability, face validity, convergent construct validity, predictive criterion validity, and concurrent criterion validity were investigated. Results demonstrate "proof of concept" that touch screen tablet technology can provide reliable and valid psychometric measures of performance in the early years, highlighting its potential to be used in cross-cultural comparisons and research.
Predicting stillbirth in a low resource setting.
Kayode, Gbenga A; Grobbee, Diederick E; Amoakoh-Coleman, Mary; Adeleke, Ibrahim Taiwo; Ansah, Evelyn; de Groot, Joris A H; Klipstein-Grobusch, Kerstin
2016-09-20
Stillbirth is a major contributor to perinatal mortality and it is particularly common in low- and middle-income countries, where annually about three million stillbirths occur in the third trimester. This study aims to develop a prediction model for early detection of pregnancies at high risk of stillbirth. This retrospective cohort study examined 6,573 pregnant women who delivered at Federal Medical Centre Bida, a tertiary level of healthcare in Nigeria from January 2010 to December 2013. Descriptive statistics were performed and missing data imputed. Multivariable logistic regression was applied to examine the associations between selected candidate predictors and stillbirth. Discrimination and calibration were used to assess the model's performance. The prediction model was validated internally and over-optimism was corrected. We developed a prediction model for stillbirth that comprised maternal comorbidity, place of residence, maternal occupation, parity, bleeding in pregnancy, and fetal presentation. As a secondary analysis, we extended the model by including fetal growth rate as a predictor, to examine how beneficial ultrasound parameters would be for the predictive performance of the model. After internal validation, both calibration and discriminative performance of both the basic and extended model were excellent (i.e. C-statistic basic model = 0.80 (95 % CI 0.78-0.83) and extended model = 0.82 (95 % CI 0.80-0.83)). We developed a simple but informative prediction model for early detection of pregnancies with a high risk of stillbirth for early intervention in a low resource setting. Future research should focus on external validation of the performance of this promising model.
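The internal-validation step mentioned above (correcting the c-statistic for over-optimism) is commonly carried out by bootstrapping. The sketch below mirrors only the procedure, on simulated data with generic predictors, not the study's cohort:

```python
# Bootstrap optimism correction of the c-statistic (AUC) for a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                     # six candidate predictors (simulated)
y = (rng.random(2000) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2)))).astype(int)

def auc_of_fit(X_fit, y_fit, X_eval, y_eval):
    m = LogisticRegression().fit(X_fit, y_fit)
    return roc_auc_score(y_eval, m.predict_proba(X_eval)[:, 1])

apparent = auc_of_fit(X, y, X, y)                  # apparent (optimistic) c-statistic
optimism = []
for _ in range(100):                               # bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    optimism.append(auc_of_fit(X[idx], y[idx], X[idx], y[idx])
                    - auc_of_fit(X[idx], y[idx], X, y))
print("optimism-corrected c-statistic:", round(apparent - np.mean(optimism), 3))
```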
High Performance Structures and Materials
Advanced simulation and optimization methods that can be used during the early design stages; development of a simulation model validation framework for RBDO, sponsored by U.S. Army TARDEC.
[Selection of medical students : Measurement of cognitive abilities and psychosocial competencies].
Schwibbe, Anja; Lackamp, Janina; Knorr, Mirjana; Hissbach, Johanna; Kadmon, Martina; Hampe, Wolfgang
2018-02-01
The German Constitutional Court is currently reviewing whether the current study admission process in medicine is compatible with the constitutional right of freedom of profession, since applicants without an excellent GPA usually have to wait for seven years. If the admission system is changed, politicians would like to increase the influence of psychosocial criteria on selection as specified by the Masterplan Medizinstudium 2020. What experience has been gained with the current selection procedures? How could Situational Judgement Tests contribute to the validity of future selection procedures at German medical schools? High school GPA is the best predictor of study performance, but is more and more under discussion due to the lack of comparability between states and schools and the growing number of applicants with top grades. Aptitude and knowledge tests, especially in the natural sciences, show incremental validity in predicting study performance. The measurement of psychosocial competencies with traditional interviews shows rather low reliability and validity. The more reliable multiple mini-interviews are superior in predicting practical study performance. Situational judgement tests (SJTs) used abroad are regarded as reliable and valid; the correlation of a German SJT piloted in Hamburg with the multiple mini-interview is cautiously encouraging. A model proposed by the Medizinischer Fakultätentag and the Bundesvertretung der Medizinstudierenden considers these results. Student selection is proposed to be based on a combination of high school GPA (40%) and a cognitive test (40%) as well as an SJT (10%) and job experience (10%). Furthermore, the faculties still have the option to carry out specific selection procedures.
PASTIS: Bayesian extrasolar planet validation - I. General framework, models, and performance
NASA Astrophysics Data System (ADS)
Díaz, R. F.; Almenara, J. M.; Santerne, A.; Moutou, C.; Lethuillier, A.; Deleuil, M.
2014-06-01
A large fraction of the smallest transiting planet candidates discovered by the Kepler and CoRoT space missions cannot be confirmed by a dynamical measurement of the mass using currently available observing facilities. To establish their planetary nature, the concept of planet validation has been advanced. This technique compares the probability of the planetary hypothesis against that of all reasonably conceivable alternative false positive (FP) hypotheses. The candidate is considered as validated if the posterior probability of the planetary hypothesis is sufficiently larger than the sum of the probabilities of all FP scenarios. In this paper, we present PASTIS, the Planet Analysis and Small Transit Investigation Software, a tool designed to perform a rigorous model comparison of the hypotheses involved in the problem of planet validation, and to fully exploit the information available in the candidate light curves. PASTIS self-consistently models the transit light curves and follow-up observations. Its object-oriented structure offers a large flexibility for defining the scenarios to be compared. The performance is explored using artificial transit light curves of planets and FPs with a realistic error distribution obtained from a Kepler light curve. We find that data support the correct hypothesis strongly only when the signal is high enough (transit signal-to-noise ratio above 50 for the planet case) and remain inconclusive otherwise. PLAnetary Transits and Oscillations of stars (PLATO) shall provide transits with high enough signal-to-noise ratio, but to establish the true nature of the vast majority of Kepler and CoRoT transit candidates additional data or strong reliance on hypotheses priors is needed.
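A toy numerical illustration of the model-comparison step that PASTIS performs: the posterior probability of the planet hypothesis is its prior-weighted evidence divided by the sum over all hypotheses. The evidences and priors below are invented for illustration; PASTIS derives them from the light curves, follow-up observations, and scenario priors:

```python
# Posterior probability of the planet hypothesis from hypothesis priors and
# (log) marginal likelihoods. All numbers here are invented.
import numpy as np

log_evidence = {"planet": -1000.0, "blended EB": -1006.0, "background EB": -1009.0}
prior =        {"planet": 1e-3,    "blended EB": 1e-4,     "background EB": 5e-4}

log_weights = {h: np.log(prior[h]) + log_evidence[h] for h in prior}
m = max(log_weights.values())                         # subtract the max to avoid underflow
weights = {h: np.exp(lw - m) for h, lw in log_weights.items()}
posterior_planet = weights["planet"] / sum(weights.values())
print(f"P(planet | data) = {posterior_planet:.3f}")
```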
Sico, Jason J; Yaggi, H Klar; Ofner, Susan; Concato, John; Austin, Charles; Ferguson, Jared; Qin, Li; Tobias, Lauren; Taylor, Stanley; Vaz Fragoso, Carlos A; McLain, Vincent; Williams, Linda S; Bravata, Dawn M
2017-08-01
Screening instruments for obstructive sleep apnea (OSA), as used routinely to guide clinicians regarding patient referral for polysomnography (PSG), rely heavily on symptomatology. We sought to develop and validate a cerebrovascular disease-specific OSA prediction model less reliant on symptomatology, and to compare its performance with commonly used screening instruments within a population with ischemic stroke or transient ischemic attack (TIA). Using data on demographic factors, anthropometric measurements, medical history, stroke severity, sleep questionnaires, and PSG from 2 independently derived, multisite, randomized trials that enrolled patients with stroke or TIA, we developed and validated a model to predict the presence of OSA (i.e., Apnea-Hypopnea Index ≥5 events per hour). Model performance was compared with that of the Berlin Questionnaire, Epworth Sleepiness Scale (ESS), the Snoring, Tiredness, Observed apnea, high blood Pressure, Body mass index, Age, Neck circumference, and Gender instrument, and the Sleep Apnea Clinical Score. The new SLEEP Inventory (Sex, Left heart failure, ESS, Enlarged neck, weight [in Pounds], Insulin resistance/diabetes, and National Institutes of Health Stroke Scale) performed modestly better than other instruments in identifying patients with OSA, showing reasonable discrimination in the development (c-statistic .732) and validation (c-statistic .731) study populations, and having the highest negative predictive value of all instruments. Clinicians should be aware of these limitations in OSA screening instruments when making decisions about referral for PSG. The high negative predictive value of the SLEEP Inventory may be useful in determining and prioritizing patients with stroke or TIA least in need of overnight PSG. Published by Elsevier Inc.
Amniotic fluid: the use of high-dimensional biology to understand fetal well-being.
Kamath-Rayne, Beena D; Smith, Heather C; Muglia, Louis J; Morrow, Ardythe L
2014-01-01
Our aim was to review the use of high-dimensional biology techniques, specifically transcriptomics, proteomics, and metabolomics, in amniotic fluid to elucidate the mechanisms behind preterm birth or assessment of fetal development. We performed a comprehensive MEDLINE literature search on the use of transcriptomic, proteomic, and metabolomic technologies for amniotic fluid analysis. All abstracts were reviewed for pertinence to preterm birth or fetal maturation in human subjects. Nineteen articles qualified for inclusion. Most articles described the discovery of biomarker candidates, but few larger, multicenter replication or validation studies have been done. We conclude that the use of high-dimensional systems biology techniques to analyze amniotic fluid has significant potential to elucidate the mechanisms of preterm birth and fetal maturation. However, further multicenter collaborative efforts are needed to replicate and validate candidate biomarkers before they can become useful tools for clinical practice. Ideally, amniotic fluid biomarkers should be translated to a noninvasive test performed in maternal serum or urine.
Field validation of protocols developed to evaluate in-line mastitis detection systems.
Kamphuis, C; Dela Rue, B T; Eastwood, C R
2016-02-01
This paper reports on a field validation of previously developed protocols for evaluating the performance of in-line mastitis-detection systems. The protocols outlined 2 requirements of these systems: (1) to detect cows with clinical mastitis (CM) promptly and accurately to enable timely and appropriate treatment and (2) to identify cows with high somatic cell count (SCC) to manage bulk milk SCC levels. Gold standard measures, evaluation tests, performance measures, and performance targets were proposed. The current study validated the protocols on commercial dairy farms with automated in-line mastitis-detection systems using both electrical conductivity (EC) and SCC sensor systems that both monitor at whole-udder level. The protocol for requirement 1 was applied on 3 commercial farms. For requirement 2, the protocol was applied on 6 farms; 3 of them had low bulk milk SCC (128×10(3) cells/mL) and were the same farms as used for field evaluation of requirement 1. Three farms with high bulk milk SCC (270×10(3) cells/mL) were additionally enrolled. The field evaluation methodology and results were presented at a workshop including representation from 7 international suppliers of in-line mastitis-detection systems. Feedback was sought on the acceptance of standardized performance evaluation protocols and recommended refinements to the protocols. Although the methodology for requirement 1 was relatively labor intensive and required organizational skills over an extended period, no major issues were encountered during the field validation of both protocols. The validation, thus, proved the protocols to be practical. Also, no changes to the data collection process were recommended by the technology supplier representatives. However, 4 recommendations were made to refine the protocols: inclusion of an additional analysis that ignores small (low-density) clot observations in the definition of CM, extension of the time window from 4 to 5 milkings for timely alerts for CM, setting a maximum number of 10 milkings for the time window to detect a CM episode, and presentation of sensitivity for a larger range of false alerts per 1,000 milkings replacing minimum performance targets. The recommended refinements are discussed with suggested changes to the original protocols. The information presented is intended to inform further debate toward achieving international agreement on standard protocols to evaluate performance of in-line mastitis-detection systems. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
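One pitfall of this kind arises when a data-dependent step such as feature selection is performed outside the resampling loop. The sketch below, on synthetic data, contrasts that flawed setup with a pipeline that refits the selection inside each fold; it illustrates the general principle rather than reproducing the authors' Monte Carlo experiments.

```python
# Minimal illustration of a cross-validation pitfall: selecting features on
# ALL data before CV (optimistically biased) vs. selecting them inside each
# fold via a Pipeline (unbiased). Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=100, n_features=500, n_informative=5,
                           random_state=0)

# Flawed: feature selection sees the test folds' labels -> optimistic bias.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=10)

# Correct: selection is refit within each training fold only.
pipe = Pipeline([("select", SelectKBest(f_classif, k=10)),
                 ("clf", LogisticRegression(max_iter=1000))])
unbiased = cross_val_score(pipe, X, y, cv=10)

print(f"biased CV accuracy:   {biased.mean():.2f}")
print(f"unbiased CV accuracy: {unbiased.mean():.2f}")
```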
Alberta infant motor scale: reliability and validity when used on preterm infants in Taiwan.
Jeng, S F; Yau, K I; Chen, L C; Hsiao, S F
2000-02-01
The goal of this study was to examine the reliability and validity of measurements obtained with the Alberta Infant Motor Scale (AIMS) for evaluation of preterm infants in Taiwan. Two independent groups of preterm infants were used to investigate the reliability (n=45) and validity (n=41) for the AIMS. In the reliability study, the AIMS was administered to the infants by a physical therapist, and infant performance was videotaped. The performance was then rescored by the same therapist and by 2 other therapists to examine the intrarater and interrater reliability. In the validity study, the AIMS and the Bayley Motor Scale were administered to the infants at 6 and 12 months of age to examine criterion-related validity. Intraclass correlation coefficients (ICCs) for intrarater and interrater reliability of measurements obtained with the AIMS were high (ICC=.97-.99). The AIMS scores correlated with the Bayley Motor Scale scores at 6 and 12 months (r=.78 and .90), although the AIMS scores at 6 months were only moderately predictive of the motor function at 12 months (r=.56). The results suggest that measurements obtained with the AIMS have acceptable reliability and concurrent validity but limited predictive value for evaluating preterm Taiwanese infants.
Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).
Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando
2014-07-02
Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
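The agreement statistic reported here, an intraclass correlation from a two-way ANOVA with absolute agreement, corresponds to ICC(2,1) in the Shrout-Fleiss scheme and can be computed directly from the ANOVA mean squares. The ratings below are fabricated; the sketch is an illustration, not a reanalysis of the study data.

```python
# ICC(2,1): two-way random effects, absolute agreement, single rating
# (Shrout & Fleiss). `scores` is subjects x raters; values are made up.
import numpy as np

def icc_2_1(scores):
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    ss_err = (np.sum((scores - grand) ** 2)
              - (n - 1) * ms_rows - (k - 1) * ms_cols)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return ((ms_rows - ms_err)
            / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n))

ratings = [[9, 8, 9], [5, 6, 5], [7, 7, 8], [3, 2, 3], [6, 6, 7]]
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```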
Reliability and validity of the closed kinetic chain upper extremity stability test.
Lee, Dong-Rour; Kim, Laurentius Jongsoon
2015-04-01
[Purpose] The purpose of this study was to examine the reliability and validity of the Closed Kinetic Chain Upper Extremity Stability (CKCUES) test. [Subjects and Methods] A sample of 40 subjects (20 males, 20 females) with and without pain in the upper limbs was recruited. The subjects were tested twice, three days apart to assess the reliability of the CKCUES test. The CKCUES test was performed four times, and the average was calculated using the data of the last 3 tests. In order to test the validity of the CKCUES test, peak torque of internal/external shoulder rotation was measured using an isokinetic dynamometer, and maximum grip strength was measured using a hand dynamometer, and their Pearson correlation coefficients with the average values of the CKCUES test were calculated. [Results] The reliability of the CKCUES test was very high (ICC=0.97). The correlations between the CKCUES test and maximum grip strength (r=0.78-0.79), and the peak torque of internal/external shoulder rotation (r=0.87-0.94) were high indicating its validity. [Conclusion] The reliability and validity of the CKCUES test were high. The CKCUES test is expected to be used for clinical tests on upper limb stability at low price.
Onboard FPGA-based SAR processing for future spaceborne systems
NASA Technical Reports Server (NTRS)
Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce;
2004-01-01
We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne system. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.
ERIC Educational Resources Information Center
Foster, Erin R.; Black, Kevin J.; Antenor-Dorsey, Jo Ann V.; Perlmutter, Joel S.; Hershey, Tamara
2008-01-01
Studies suggest motor deficit asymmetry may help predict the pattern of cognitive impairment in individuals with Parkinson disease (PD). We tested this hypothesis using a highly validated and sensitive spatial memory task, spatial delayed response (SDR), and clinical and neuroimaging measures of PD asymmetry. We predicted SDR performance would be…
Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke
2017-11-01
It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10⁻⁹²), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with AUC of 0.886. Combining both training and validation data resulted in a predictive model with AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
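The training/validation AUC comparison reported here follows the standard pattern sketched below. A plain logistic regression on synthetic data stands in for the paper's object-oriented regression on HLA class II genotypes; only the workflow, not the model or data, reflects the study.

```python
# Generic train/validation AUC workflow (synthetic stand-in; logistic
# regression replaces the paper's object-oriented regression purely for
# illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            stratify=y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_train = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
auc_valid = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"training AUC = {auc_train:.3f}, validation AUC = {auc_valid:.3f}")
```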
Tozzoli, Rosangela; Maugliani, Antonella; Michelacci, Valeria; Minelli, Fabio; Caprioli, Alfredo; Morabito, Stefano
2018-05-08
In 2006, the European Committee for standardisation (CEN)/Technical Committee 275 - Food analysis - Horizontal methods/Working Group 6 - Microbiology of the food chain (TC275/WG6), launched the project of validating the method ISO 16654:2001 for the detection of Escherichia coli O157 in foodstuff by the evaluation of its performance, in terms of sensitivity and specificity, through collaborative studies. Previously, a validation study had been conducted to assess the performance of the Method No 164 developed by the Nordic Committee for Food Analysis (NMKL), which aims at detecting E. coli O157 in food as well, and is based on a procedure equivalent to that of the ISO 16654:2001 standard. Therefore, CEN established that the validation data obtained for the NMKL Method 164 could be exploited for the ISO 16654:2001 validation project, integrated with new data obtained through two additional interlaboratory studies on milk and sprouts, run in the framework of the CEN mandate No. M381. The ISO 16654:2001 validation project was led by the European Union Reference Laboratory for Escherichia coli including VTEC (EURL-VTEC), which organized the collaborative validation study on milk in 2012 with 15 participating laboratories and that on sprouts in 2014, with 14 participating laboratories. In both studies, a total of 24 samples were tested by each laboratory. Test materials were spiked with different concentration of E. coli O157 and the 24 samples corresponded to eight replicates of three levels of contamination: zero, low and high spiking level. The results submitted by the participating laboratories were analyzed to evaluate the sensitivity and specificity of the ISO 16654:2001 method when applied to milk and sprouts. The performance characteristics calculated on the data of the collaborative validation studies run under the CEN mandate No. M381 returned sensitivity and specificity of 100% and 94.4%, respectively for the milk study. As for sprouts matrix, the sensitivity resulted in 75.9% in the low level of contamination samples and 96.4% in samples spiked with high level of E. coli O157 and specificity was calculated as 99.1%. Copyright © 2018 Elsevier B.V. All rights reserved.
Hijazi, Ziad; Oldgren, Jonas; Lindbäck, Johan; Alexander, John H; Connolly, Stuart J; Eikelboom, John W; Ezekowitz, Michael D; Held, Claes; Hylek, Elaine M; Lopes, Renato D; Yusuf, Salim; Granger, Christopher B; Siegbahn, Agneta; Wallentin, Lars
2018-01-01
Abstract Aims In atrial fibrillation (AF), mortality remains high despite effective anticoagulation. A model predicting the risk of death in these patients is currently not available. We developed and validated a risk score for death in anticoagulated patients with AF including both clinical information and biomarkers. Methods and results The new risk score was developed and internally validated in 14 611 patients with AF randomized to apixaban vs. warfarin for a median of 1.9 years. External validation was performed in 8548 patients with AF randomized to dabigatran vs. warfarin for 2.0 years. Biomarker samples were obtained at study entry. Variables significantly contributing to the prediction of all-cause mortality were assessed by Cox-regression. Each variable obtained a weight proportional to the model coefficients. There were 1047 all-cause deaths in the derivation and 594 in the validation cohort. The most important predictors of death were N-terminal pro B-type natriuretic peptide, troponin-T, growth differentiation factor-15, age, and heart failure, and these were included in the ABC (Age, Biomarkers, Clinical history)-death risk score. The score was well-calibrated and yielded higher c-indices than a model based on all clinical variables in both the derivation (0.74 vs. 0.68) and validation cohorts (0.74 vs. 0.67). The reduction in mortality with apixaban was most pronounced in patients with a high ABC-death score. Conclusion A new biomarker-based score for predicting risk of death in anticoagulated AF patients was developed, internally and externally validated, and well-calibrated in two large cohorts. The ABC-death risk score performed well and may contribute to overall risk assessment in AF. ClinicalTrials.gov identifier NCT00412984 and NCT00262600 PMID:29069359
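A biomarker-based score of this type is typically built by fitting a Cox model, assigning each variable a weight proportional to its coefficient, and summarizing discrimination with the c-index. The sketch below assumes the lifelines API and uses synthetic data with placeholder variable names; it does not reproduce the published ABC-death coefficients.

```python
# Sketch of a biomarker-based risk score: fit a Cox model, derive integer
# weights proportional to the coefficients, and report the c-index.
# Data, variable names, and the scaling factor are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(72, 8, n),
    "log_ntprobnp": rng.normal(6, 1, n),       # placeholder biomarker
    "log_troponin": rng.normal(2, 0.8, n),     # placeholder biomarker
    "heart_failure": rng.integers(0, 2, n),
})
risk = 0.04 * df["age"] + 0.5 * df["log_ntprobnp"] + 0.4 * df["heart_failure"]
df["time"] = rng.exponential(np.exp(-(risk - risk.mean())) * 3)  # years
df["death"] = rng.integers(0, 2, n)            # synthetic event indicator

cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.params_)                                   # log-hazard ratios
weights = (10 * cph.params_ / cph.params_.abs().max()).round()  # score weights
print("integer score weights:\n", weights)
print(f"c-index = {cph.concordance_index_:.2f}")
```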
Development and validation of a prognostic nomogram for terminally ill cancer patients.
Feliu, Jaime; Jiménez-Gordo, Ana María; Madero, Rosario; Rodríguez-Aizcorbe, José Ramón; Espinosa, Enrique; Castro, Javier; Acedo, Jesús Domingo; Martínez, Beatriz; Alonso-Babarro, Alberto; Molina, Raquel; Cámara, Juan Carlos; García-Paredes, María Luisa; González-Barón, Manuel
2011-11-02
Determining life expectancy in terminally ill cancer patients is a difficult task. We aimed to develop and validate a nomogram to predict the length of survival in patients with terminal disease. From February 1, 2003, to December 31, 2005, 406 consecutive terminally ill patients were entered into the study. We analyzed 38 features prognostic of life expectancy among terminally ill patients by multivariable Cox regression and identified the most accurate and parsimonious model by backward variable elimination according to the Akaike information criterion. Five clinical and laboratory variables were built into a nomogram to estimate the probability of patient survival at 15, 30, and 60 days. We validated and calibrated the nomogram with an external validation cohort of 474 patients who were treated from June 1, 2006, through December 31, 2007. The median overall survival was 29.1 days for the training set and 18.3 days for the validation set. Eastern Cooperative Oncology Group performance status, lactate dehydrogenase levels, lymphocyte levels, albumin levels, and time from initial diagnosis to diagnosis of terminal disease were retained in the multivariable Cox proportional hazards model as independent prognostic factors of survival and formed the basis of the nomogram. The nomogram had high predictive performance, with a bootstrapped corrected concordance index of 0.70, and it showed good calibration. External independent validation revealed 68% predictive accuracy. We developed a highly accurate tool that uses basic clinical and analytical information to predict the probability of survival at 15, 30, and 60 days in terminally ill cancer patients. This tool can help physicians making decisions on clinical care at the end of life.
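The bootstrapped, optimism-corrected concordance index mentioned here is usually obtained with Harrell's procedure: compute the apparent c-index, then subtract the average optimism estimated from bootstrap refits. The sketch below assumes the lifelines API, uses synthetic data and placeholder variables, and lets a plain Cox fit stand in for the nomogram.

```python
# Harrell-style optimism-corrected c-index for a Cox model (sketch on
# synthetic data; a Cox fit stands in for the published nomogram).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"ecog": rng.integers(0, 4, n),
                   "ldh": rng.normal(300, 80, n),
                   "albumin": rng.normal(3.5, 0.5, n)})
lin = 0.5 * df["ecog"] + 0.004 * df["ldh"] - 0.6 * df["albumin"]
df["time"] = rng.exponential(np.exp(-(lin - lin.mean())) * 30)  # days
df["event"] = 1  # assume all deaths observed, for simplicity

def cindex(model, data):
    # higher partial hazard = shorter survival, hence the minus sign
    return concordance_index(data["time"],
                             -model.predict_partial_hazard(data),
                             data["event"])

full = CoxPHFitter().fit(df, "time", "event")
apparent = cindex(full, df)

optimism = []
for _ in range(200):                       # 200 bootstrap resamples (assumed)
    boot = df.sample(n=len(df), replace=True,
                     random_state=int(rng.integers(1_000_000)))
    boot = boot.reset_index(drop=True)
    m = CoxPHFitter().fit(boot, "time", "event")
    optimism.append(cindex(m, boot) - cindex(m, df))

print(f"apparent c-index  = {apparent:.3f}")
print(f"corrected c-index = {apparent - np.mean(optimism):.3f}")
```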
Jalil, Rozh; Soukup, Tayana; Akhter, Waseem; Sevdalis, Nick; Green, James S A
2018-03-03
High-quality leadership and chairing skills are vital for good performance in multidisciplinary tumor boards (MTBs), but no instruments currently exist for assessing and improving these skills. To construct and validate a robust instrument for assessment of MTB leading and chairing skills. We developed an observational MTB leadership assessment instrument (ATLAS). ATLAS includes 12 domains that assess the leadership and chairing skills of the MTB chairperson. ATLAS has gone through a rigorous process of refinement and content validation prior to use to assess the MTB lead by two urological surgeons (blinded to each other) in 7 real-live (n = 286 cases) and 10 video-recorded (n = 131 cases) MTBs. ATLAS domains were analyzed via descriptive statistics. Instrument content was evaluated for validity using the content validation index (CVI). Intraclass correlation coefficients (ICCs) were used to assess inter-observer reliability. Instrument refining resulted in ATLAS including the following 12 domains: time management, communication, encouraging contribution, ability to summarize, ensuring all patients have treatment plan, case prioritization, keeping meeting focused, facilitate discussion, conflict management, leadership, creating good working atmosphere, and recruitment for clinical trials. CVI was acceptable and inter-rater agreement adequate to high for all domains. Agreement was somewhat higher in real-time MTBs compared to video ratings. Concurrent validation evidence was derived via positive and significant correlations between ATLAS and an established validated brief MTB leadership assessment scale. ATLAS is an observational assessment instrument that can be reliably used for assessing leadership and chairing skills in cancer MTBs (both live and video-recorded). The ability to assess and feedback on team leader performance provides the ground for promotion of good practice and continuing professional development of tumor board leaders.
Specialty-specific multi-source feedback: assuring validity, informing training.
Davies, Helena; Archer, Julian; Bateman, Adrian; Dewar, Sandra; Crossley, Jim; Grant, Janet; Southgate, Lesley
2008-10-01
The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. Pearson correlation of MSF scores with OSPE performances was 0.48 (P = 0.001) and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = - 2.76, d.f. = 89, P < 0.01). Trainees scored least highly in relation to ability to use histopathology to solve clinical problems (mean = 4.39) and provision of good reports (mean = 4.39). Three of six doctors whose means were < 4.0 received free text comments about report writing. There were 83 forms with aggregate scores of < 4. Of these, 19.2% included comments about report writing. Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context as, in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
Validity and reliability of an occupational exposure questionnaire for parkinsonism in welders.
Hobson, Angela J; Sterling, David A; Emo, Brett; Evanoff, Bradley A; Sterling, Callen S; Good, Laura; Seixas, Noah; Checkoway, Harvey; Racette, Brad A
2009-06-01
This study assessed the validity and test-retest reliability of a medical and occupational history questionnaire for workers performing welding in the shipyard industry. This self-report questionnaire was developed for an epidemiologic study of the risk of parkinsonism in welders. Validity participants recruited from three similar shipyards were asked to give consent for access to personnel files and complete the questionnaire. Responses on the questionnaire were compared with information extracted from personnel records. Reliability participants were recruited from the same shipyards and were asked to complete the questionnaire at two different times approximately 4 weeks apart. Percent agreement, kappa, intraclass correlation coefficient (ICC), and sensitivity and specificity were used as measures of validity and/or reliability. Personnel files were obtained for 101 of 143 participants (70%) in the validity study, and 56 of the 95 (58.9%) participants in the reliability study completed the retest of the questionnaire. Validity scores for items extracted from personnel files were high. Percent agreement for employment dates and job titles ranged from 83-100%, while ICC for start and stop dates ranged from 0.93-0.99. Sensitivity and specificity for current job title ranged from 0.5-1.0. Reliability scores for demographic, medical and health behavior items were mainly moderate or high, but ranged from 0.19 to 1.0. Most recent job/title items such as title, types of welding performed, and material used showed substantial to perfect agreement. Certain determinants of exposure such as days and hours per week exposed to welding fumes demonstrated mainly moderate agreement (kappa= 0.42-0.47, percent agreement 63-77%); however, mean days and hours reported did not differ between test and retest. The results of this study suggest that participants' self-report for job title and dates employed are valid compared with employer records. While kappa scores were low for some medical conditions and for caffeine consumption, high kappa scores for job title, dates worked, types of welding, and materials welded suggest participants generated reproducible answers important for occupational exposure assessment.
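Percent agreement and Cohen's kappa, the test-retest statistics used for the categorical items, can be computed as in the following sketch; the welding-type responses are invented for illustration.

```python
# Test-retest agreement for a categorical questionnaire item:
# percent agreement and Cohen's kappa (invented example responses).
import numpy as np
from sklearn.metrics import cohen_kappa_score

test   = ["MIG", "TIG", "stick", "MIG", "TIG", "MIG", "stick", "MIG"]
retest = ["MIG", "TIG", "stick", "TIG", "TIG", "MIG", "stick", "MIG"]

percent_agreement = np.mean(np.array(test) == np.array(retest))
kappa = cohen_kappa_score(test, retest)
print(f"percent agreement = {percent_agreement:.0%}, kappa = {kappa:.2f}")
```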
Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin
2016-08-01
Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544, of which 766 with melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of which 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
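Combining a model-based relative risk with population incidence and competing mortality to obtain an absolute risk, as described here, is commonly done with a discrete-time cumulative-incidence calculation. The rates and relative risk in the sketch below are placeholder values, not the Australian figures used in the study.

```python
# Discrete-time (annual) absolute-risk sketch: cumulative melanoma risk
# over 10 years given a relative risk, a baseline incidence and a competing
# mortality rate. All rates and the RR are placeholder values.
import numpy as np

rr = 2.5                                   # model-based relative risk (assumed)
incidence = np.full(10, 60e-5)             # annual baseline melanoma incidence
mortality = np.full(10, 800e-5)            # annual competing mortality

surv, abs_risk = 1.0, 0.0
for h_mel, h_death in zip(incidence * rr, mortality):
    abs_risk += surv * h_mel               # probability of melanoma this year
    surv *= (1.0 - h_mel - h_death)        # still alive and melanoma-free
print(f"10-year absolute melanoma risk ≈ {abs_risk:.2%}")
```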
NASA Astrophysics Data System (ADS)
Handhika, J.; Cari, C.; Suparmi, A.; Sunarno, W.; Purwandari, P.
2018-03-01
The purpose of this research was to develop a diagnostic test instrument to reveal students' conceptions in kinematics and dynamics. The diagnostic test was developed from content indicators covering the concepts of (1) displacement and distance, (2) instantaneous and average velocity, (3) zero and constant acceleration, (4) gravitational acceleration, (5) Newton's first law, and (6) Newton's third law. The development model included diagnostic test requirement analysis, formulation of test objectives, test development, checking of content validity and reliability, and test application. The Content Validation Index (CVI) fell in the highly relevant category, with a value of 0.85. Three questions initially received a negative Content Validation Ratio (CVR) of -0.6; after the distractors were revised and the visual presentation clarified, their CVR became 1 (highly relevant). Application of the test yielded 16 valid items, with a Cronbach's alpha of 0.80. It can be concluded that the diagnostic test can be used to reveal the level of students' conceptions in kinematics and dynamics.
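For reference, Lawshe's content validity ratio and the item-level content validity index are computed as in this sketch; the panel ratings are fabricated and the 4-point relevance scale is assumed.

```python
# Content validity indices for one item (fabricated expert ratings).
# CVR (Lawshe): (n_essential - N/2) / (N/2); ranges from -1 to 1.
# I-CVI: proportion of experts rating the item 3 or 4 on a 4-point scale.
import numpy as np

relevance_ratings = np.array([4, 3, 4, 2, 4, 3, 4])   # 4-point relevance scale
essential_votes = np.array([1, 1, 1, 0, 1, 1, 1])     # 1 = rated "essential"

N = len(essential_votes)
cvr = (essential_votes.sum() - N / 2) / (N / 2)
i_cvi = np.mean(relevance_ratings >= 3)
print(f"CVR = {cvr:.2f}, I-CVI = {i_cvi:.2f}")
```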
ERIC Educational Resources Information Center
Dodrill, Carl B.; Clemmons, David
1984-01-01
Examined the validity of intellectual, neuropsychological, and emotional adjustment measures administered in high school in predicting vocational adjustment of 39 young adults with epilepsy. Results showed neuropsychological tests were the best predictors of later adjustment. Abilities were more related to final adjustment than variables…
Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing
NASA Technical Reports Server (NTRS)
Gradl, Paul R.; Schmidt, Tim
2016-01-01
Hot fire testing of rocket engine components and rocket engine systems is a critical aspect of the development process to understand performance, reliability and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic and extreme-temperature environment of hot fire testing. Digital Image Correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing further data for additional performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full-scale hot fire testing, which incorporated a pair of high-speed cameras, installed and operated under the extreme environments present on the test stand, to measure three-dimensional, real-time displacements and strains. The development process, setup and calibrations, hot fire test data collection, and post-test analysis and results are presented in this paper.
NASA Technical Reports Server (NTRS)
Kharisov, Evgeny; Gregory, Irene M.; Cao, Chengyu; Hovakimyan, Naira
2008-01-01
This paper explores application of the L1 adaptive control architecture to a generic flexible Crew Launch Vehicle (CLV). Adaptive control has the potential to improve performance and enhance safety of space vehicles that often operate in very unforgiving and occasionally highly uncertain environments. NASA's development of the next generation space launch vehicles presents an opportunity for adaptive control to contribute to improved performance of this statically unstable vehicle with low damping and low bending frequency flexible dynamics. In this paper, we consider the L1 adaptive output feedback controller to control the low frequency structural modes and propose steps to validate the adaptive controller performance utilizing one of the experimental test flights for the CLV Ares-I Program.
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of a NASA In-Space Technology Experiments Program based on a concept of inflatable deployable structures is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit, and gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
Testing and Validation of Computational Methods for Mass Spectrometry.
Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas
2016-03-04
High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.
Hoffman, Justin T; Rossi, Steven S; Espina-Quinto, Rowena; Letendre, Scott; Capparelli, Edmund V
2013-04-01
Previously published methods for determination of efavirenz (EFV) in human dried blood spots (DBS) use costly and complex liquid chromatography/mass spectrometry. We describe the validation and evaluation of a simple and inexpensive high-performance liquid chromatography method for EFV quantification in human DBS and dried plasma spots (DPS), using ultraviolet detection appropriate for resource-limited settings. One hundred microliters of heparinized whole blood or plasma were spotted onto blood collection cards, dried, punched, and eluted. Eluates are injected onto a C-18 reversed phase high-performance liquid chromatography column. EFV is separated isocratically using a potassium phosphate and acetonitrile mobile phase. Ultraviolet detection is at 245 nm. Quantitation is by use of external calibration standards. Following validation, the method was evaluated using whole blood and plasma from HIV-positive patients undergoing EFV therapy. Mean recovery of drug from DBS is 91.5%. The method is linear over the validated concentration range of 0.3125-20.0 μg/mL. A good correlation (Spearman r = 0.96) between paired plasma and DBS EFV concentrations from the clinical samples was observed, and hematocrit level was not found to be a significant determinant of the EFV DBS level. The mean observed C_DBS/C_plasma ratio was 0.68. A good correlation (Spearman r = 0.96) between paired plasma and DPS EFV concentrations from the clinical samples was observed. The mean percent deviation of DPS samples from plasma samples is 1.68%. Dried whole blood spot or dried plasma spot sampling is well suited for monitoring EFV therapy in resource-limited settings, particularly when high sensitivity is not essential.
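Quantitation against external calibration standards and the DBS-versus-plasma comparison reduce to a linear calibration fit and a Spearman correlation; the peak areas and concentrations in the sketch below are invented, not the study's data.

```python
# External-calibration quantitation and DBS-vs-plasma agreement (sketch
# with invented peak areas and concentrations).
import numpy as np
from scipy.stats import spearmanr

std_conc = np.array([0.3125, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])  # ug/mL
std_area = 1200 * std_conc + 50 + np.random.default_rng(2).normal(0, 60, 7)

slope, intercept = np.polyfit(std_conc, std_area, deg=1)
unknown_area = np.array([2600.0, 9800.0])
unknown_conc = (unknown_area - intercept) / slope       # back-calculate
print("estimated EFV concentrations (ug/mL):", unknown_conc.round(2))

dbs_conc    = np.array([1.1, 2.4, 3.0, 4.8, 6.5])       # paired samples
plasma_conc = np.array([1.6, 3.4, 4.5, 7.2, 9.4])
rho, p = spearmanr(dbs_conc, plasma_conc)
print(f"Spearman r = {rho:.2f} (p = {p:.3f}), "
      f"mean C_DBS/C_plasma = {np.mean(dbs_conc / plasma_conc):.2f}")
```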
NASA Astrophysics Data System (ADS)
Araújo, J. P. C.; DA Silva, L. M.; Dourado, F. A. D.; Fernandes, N.
2015-12-01
Landslides are the most damaging natural hazard in the mountainous region of Rio de Janeiro State in Brazil, responsible for thousands of deaths and important financial and environmental losses. However, this region currently has few landslide susceptibility maps implemented on an adequate scale. Identification of landslide susceptibility areas is fundamental for successful land use planning and management practices to reduce risk. This paper applied Bayes' theorem in the form of weight of evidence (WoE), using 8 landslide-related factors in a geographic information system (GIS), for landslide susceptibility mapping. A total of 378 landslide locations, triggered by the January 2011 rainfall event, were identified and mapped in a selected basin in the city of Nova Friburgo. The landslide scars were divided into two subsets: training and validation subsets. A chi-square test was performed on the 8 WoE-weighted landslide-related factors to indicate which variables are conditionally independent of each other and could be used in the final map. Finally, the maps of weighted factors were summed to construct the landslide susceptibility map, which was validated against the validation landslide subset. According to the results, slope, aspect and contribution area showed the highest positive spatial correlations with landslides. In the landslide susceptibility map, 21% of the area presented very low and low susceptibilities with 3% of the validation scars, 41% presented medium susceptibility with 22% of the validation scars and 38% presented high and very high susceptibilities with 75% of the validation scars. The very high susceptibility class accounts for 16% of the basin area and contains 54% of all scars. The approach used in this study can be considered very useful since 75% of the area affected by landslides was included in the high and very high susceptibility classes.
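The weights behind such a susceptibility map are log ratios of the conditional probabilities of a factor class given landslide presence and absence. The sketch below is a generic weight-of-evidence calculation on a synthetic raster, not the authors' GIS workflow.

```python
# Generic weight-of-evidence sketch for one binary factor class
# (e.g. "slope > 30 deg") against a landslide inventory raster.
# Both rasters here are synthetic boolean arrays.
import numpy as np

rng = np.random.default_rng(3)
factor = rng.random((200, 200)) < 0.3                         # class present
landslide = rng.random((200, 200)) < (0.02 + 0.05 * factor)   # scar cells

def woe_weights(factor, landslide):
    b_l   = np.sum(factor & landslide)         # class present, landslide
    b_nl  = np.sum(factor & ~landslide)        # class present, no landslide
    nb_l  = np.sum(~factor & landslide)
    nb_nl = np.sum(~factor & ~landslide)
    w_plus  = np.log((b_l / (b_l + nb_l)) / (b_nl / (b_nl + nb_nl)))
    w_minus = np.log((nb_l / (b_l + nb_l)) / (nb_nl / (b_nl + nb_nl)))
    return w_plus, w_minus, w_plus - w_minus   # contrast = overall association

w_plus, w_minus, contrast = woe_weights(factor, landslide)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {contrast:.2f}")
```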
Aeroservoelastic Modeling and Validation of a Thrust-Vectoring F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1996-01-01
An F/A-18 aircraft was modified to perform flight research at high angles of attack (AOA) using thrust vectoring and advanced control law concepts for agility and performance enhancement and to provide a testbed for the computational fluid dynamics community. Aeroservoelastic (ASE) characteristics had changed considerably from the baseline F/A-18 aircraft because of structural and flight control system amendments, so analyses and flight tests were performed to verify structural stability at high AOA. Detailed actuator models that consider the physical, electrical, and mechanical elements of actuation and its installation on the airframe were employed in the analysis to accurately model the coupled dynamics of the airframe, actuators, and control surfaces. This report describes the ASE modeling procedure, ground test validation, flight test clearance, and test data analysis for the reconfigured F/A-18 aircraft. Multivariable ASE stability margins are calculated from flight data and compared to analytical margins. Because this thrust-vectoring configuration uses exhaust vanes to vector the thrust, the modeling issues are nearly identical for modern multi-axis nozzle configurations. This report correlates analysis results with flight test data and makes observations concerning the application of the linear predictions to thrust-vectoring and high-AOA flight.
Thomas, Emily; Murphy, Mary; Pitt, Rebecca; Rivers, Angela; Leavens, David A
2008-11-01
Povinelli, Bierschwale, and Cech (1999) reported that when tested on a visual attention task, the behavior of juvenile chimpanzees did not support a high-level understanding of visual attention. This study replicates their research using adult humans and aims to investigate the validity of their experimental design. Participants were trained to respond to pointing cues given by an experimenter, and then tested on their ability to locate hidden objects from visual cues. Povinelli et al.'s assertion that the generalization of pointing to gaze is indicative of a high-level framework was not supported by our findings: Training improved performance only on initial probe trials when the experimenter's gaze was not directed at the baited cup. Furthermore, participants performed above chance on such trials, the same result exhibited by chimpanzees and used as evidence by Povinelli et al. to support a low-level framework. These findings, together with the high performance of participants in an incongruent condition, in which the experimenter pointed to or gazed at an unbaited container, challenge the validity of their experimental design. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Validated HPLC Determination of 4-Dimethylaminoantipyrine in Different Suppository Bases
Kalmár, É; Kormányos, B.; Szakonyi, G.; Dombi, G.
2014-01-01
Suppositories are important tools for individual therapy, especially in paediatrics, and an instrumental assay method has become necessary for the quality control of dosage units. The aim of this work was to develop a rapid, effective high-performance liquid chromatography method to assay aminophenazone in extemporaneous suppositories prepared with two different suppository bases, adeps solidus and massa macrogoli. With a novel sample preparation method developed by the authors, 4-dimethylaminoantipyrine was determined in these suppository bases with 95-105% recovery. The measurements were carried out on a Shimadzu Prominence ultra high-performance liquid chromatography system equipped with a 20 μl sample loop. The separation was achieved on a Hypersil ODS column, with methanol, sodium acetate buffer (pH 5.5±0.05, 0.05 M, 60:40, v/v) as the mobile phase at a flow rate of 1.5 ml/min. The chromatograms were acquired at 253 nm. The chromatographic method was fully validated in accordance with current guidelines. The presented data demonstrate the successful development of a rapid, efficient and robust sample preparation and high-performance liquid chromatography method for the routine quality control of the dosage units of suppositories containing 4-dimethylaminoantipyrine. PMID:24799736
Performance Analysis and Electronics Packaging of the Optical Communications Demonstrator
NASA Technical Reports Server (NTRS)
Jeganathan, M.; Monacos, S.
1998-01-01
The Optical Communications Demonstrator (OCD), under development at the Jet Propulsion Laboratory (JPL), is a laboratory-based lasercomm terminal designed to validate several key technologies, primarily precision beam pointing, high bandwidth tracking, and beacon acquisition.
Ye, Guangming; Cai, Xuejian; Wang, Biao; Zhou, Zhongxian; Yu, Xiaohua; Wang, Weibin; Zhang, Jiandong; Wang, Yuhai; Dong, Jierong; Jiang, Yunyun
2008-11-04
A simple, accurate and rapid method for simultaneous analysis of vancomycin and ceftazidime in cerebrospinal fluid (CSF), utilizing high-performance liquid chromatography (HPLC), has been developed and thoroughly validated to satisfy strict FDA guidelines for bioanalytical methods. Protein precipitation was used as the sample pretreatment method. In order to increase the accuracy, tinidazole was chosen as the internal standard. Separation was achieved on a Diamonsil C18 column (200 mm × 4.6 mm I.D., 5 μm) using a mobile phase composed of acetonitrile and acetate buffer (pH 3.5) (8:92, v/v) at room temperature (25 °C), and the detection wavelength was 240 nm. All the validation data, such as accuracy, precision, and inter-day repeatability, were within the required limits. The method was applied to determine vancomycin and ceftazidime concentrations in CSF in five craniotomy patients.
Leston, Sara; Freitas, Andreia; Rosa, João; Barbosa, Jorge; Lemos, Marco F L; Pardal, Miguel Ângelo; Ramos, Fernando
2016-10-15
Together with fish, algae reared in aquaculture systems have gained importance in the last years, for many purposes. Besides their use as biofilters of effluents, macroalgae's rich nutritional profiles have increased their inclusion in human diets but also in animal feeds as sources of fatty acids, especially important for the fish industry. Nonetheless, algae are continuously exposed to environmental contaminants including antibiotics and possess the ability for bioaccumulation of such compounds. Therefore, the present paper describes the development and validation of an ultra-high performance liquid chromatography with tandem mass spectrometry (UPLC-MS/MS) method for the simultaneous quantification of antibiotics in the green macroalgae Ulva lactuca. This multi-residue method enables the determination of 38 compounds distributed between seven classes and was fully validated according to EU Decision 2002/657/EC. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Mei-Ying; Chang, Yun-Ju; Weng, Yung-Chien
2009-08-01
With the structural change of global supply chains, the relationship between manufacturers and suppliers has transformed into a long-term partnership. Thus, this study aims to explore the partnership between manufacturers and suppliers in Taiwan's high-tech industry. Four constructs derived from the previous literature, including partner characteristics, partnership quality, partnership closeness, and cooperative performance, are used to construct the research framework and hypotheses. A questionnaire survey is then performed on executives and staff involved in the high-tech industry. The proposed framework and hypotheses are empirically validated through confirmatory factor analysis and structural equation modeling. It is expected that the research findings can serve as a reference for Taiwan's high-tech industry on building partnerships.
Titration Calorimetry Standards and the Precision of Isothermal Titration Calorimetry Data
Baranauskienė, Lina; Petrikaitė, Vilma; Matulienė, Jurgita; Matulis, Daumantas
2009-01-01
Current Isothermal Titration Calorimetry (ITC) data in the literature have relatively high errors in the measured enthalpies of protein-ligand binding reactions. There is a need for universal validation standards for titration calorimeters. Several inorganic salt co-precipitation and buffer protonation reactions have been suggested as possible enthalpy standards. The performances of several commercial calorimeters, including the VP-ITC, ITC200, and Nano ITC-III, were validated using these suggested standard reactions. PMID:19582227
Development of Officer Selection Battery Forms 3 and 4
1986-03-01
the development, standardization, and validation of two parallel forms of a test to be used for assessing young men and women applying to ROTC. Fairly... appropriate difficulty, high reliability, and state-of-the-art validity and fairness for minorities and women. EDGAR M. JOHNSON Technical Director... administrable, test for use in assessing young men and women applying to Advanced Army ROTC. Procedure: Earlier research had performed an analysis of the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
Ovesen, C; Christensen, A; Nielsen, J K; Christensen, H
2013-11-01
Easy-to-perform and valid assessment scales for the effect of thrombolysis are essential in hyperacute stroke settings. Because of this we performed an external validation of the DRAGON scale proposed by Strbian et al. in a Danish cohort. All patients treated with intravenous recombinant plasminogen activator between 2009 and 2011 were included. Upon admission all patients underwent physical and neurological examination using the National Institutes of Health Stroke Scale along with non-contrast CT scans and CT angiography. Patients were followed up through the Outpatient Clinic and their modified Rankin Scale (mRS) was assessed after 3 months. Three hundred and three patients were included in the analysis. The DRAGON scale proved to have a good discriminative ability for predicting highly unfavourable outcome (mRS 5-6) (area under the curve-receiver operating characteristic [AUC-ROC]: 0.89; 95% confidence interval [CI] 0.81-0.96; p<0.001) and good outcome (mRS 0-2) (AUC-ROC: 0.79; 95% CI 0.73-0.85; p<0.001). When only patients with M1 occlusions were selected the DRAGON scale provided good discriminative capability (AUC-ROC: 0.89; 95% CI 0.78-1.0; p=0.003) for highly unfavourable outcome. We confirmed the validity of the DRAGON scale in predicting outcome after thrombolysis treatment. Copyright © 2013 Elsevier Ltd. All rights reserved.
Comparison of scoring approaches for the NEI VFQ-25 in low vision.
Dougherty, Bradley E; Bullimore, Mark A
2010-08-01
The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate Rasch person measure. Subscale validity and use of a 7-item short form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the subscales VFQ-25 should be reconsidered.
Validation of High Frequency (HF) Propagation Prediction Models in the Arctic region
NASA Astrophysics Data System (ADS)
Athieno, R.; Jayachandran, P. T.
2014-12-01
Despite the emergence of modern techniques for long distance communication, Ionospheric communication in the high frequency (HF) band (3-30 MHz) remains significant to both civilian and military users. However, the efficient use of the ever-varying ionosphere as a propagation medium is dependent on the reliability of ionospheric and HF propagation prediction models. Most available models are empirical implying that data collection has to be sufficiently large to provide good intended results. The models we present were developed with little data from the high latitudes which necessitates their validation. This paper presents the validation of three long term High Frequency (HF) propagation prediction models over a path within the Arctic region. Measurements of the Maximum Usable Frequency for a 3000 km range (MUF (3000) F2) for Resolute, Canada (74.75° N, 265.00° E), are obtained from hand-scaled ionograms generated by the Canadian Advanced Digital Ionosonde (CADI). The observations have been compared with predictions obtained from the Ionospheric Communication Enhanced Profile Analysis Program (ICEPAC), Voice of America Coverage Analysis Program (VOACAP) and International Telecommunication Union Recommendation 533 (ITU-REC533) for 2009, 2011, 2012 and 2013. A statistical analysis shows that the monthly predictions seem to reproduce the general features of the observations throughout the year though it is more evident in the winter and equinox months. Both predictions and observations show a diurnal and seasonal variation. The analysed models did not show large differences in their performances. However, there are noticeable differences across seasons for the entire period analysed: REC533 gives a better performance in winter months while VOACAP has a better performance for both equinox and summer months. VOACAP gives a better performance in the daily predictions compared to ICEPAC though, in general, the monthly predictions seem to agree more with the observations compared to the daily predictions.
Shewiyo, D H; Kaale, E; Risha, P G; Dejaegher, B; Smeyers-Verbeke, J; Vander Heyden, Y
2009-10-16
Pneumocystis carinii pneumonia (PCP) is often the ultimate mortal cause for immunocompromised individuals, such as HIV/AIDS patients. Currently, the most effective medicine for treatment and prophylaxis is co-trimoxazole, a synergistic combination of sulfamethoxazole (SMX) and trimethoprim (TMP). In order to ensure a continued availability of high quality co-trimoxazole tablets within resource-limited countries, Medicines Regulatory Authorities must perform quality control of these products. However, most pharmacopoeial methods are based on high-performance liquid chromatographic (HPLC) methods. Because of the lack of equipment, the Tanzania Food and Drugs Authority (TFDA) laboratory decided to develop and validate an alternative method of analysis based on the TLC technique with densitometric detection, for the routine quality control of co-trimoxazole tablets. SMX and TMP were separated on glass-backed silica gel 60 F(254) plates in a high-performance thin layer chromatograph (HPTLC). The mobile phase was comprised of toluene, ethylacetate and methanol (50:28.5:21.5, v:v:v). Detection wavelength was 254 nm. The R(f) values were 0.30 and 0.61 for TMP and SMX, respectively. This method was validated for linearity, precision, trueness, specificity and robustness. Cochran's criterion test indicated homoscedasticity of variances for the calibration data. The F-tests for lack-of-fit indicated that straight lines were adequate to describe the relationship between spot areas and concentrations for each compound. The percentage relative standard deviations for repeatability and time-different precisions were 0.98 and 1.32, and 0.83 and 1.64 for SMX and TMP, respectively. Percentage recovery values were 99.00%+/-1.83 and 99.66%+/-1.21 for SMX and TMP, respectively. The method was found to be robust and was then successfully applied to analyze co-trimoxazole tablet samples.
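The lack-of-fit F-test used to justify the straight-line calibration partitions the residual sum of squares into pure error (from replicates) and lack of fit. The sketch below uses invented replicate spot areas, not the TFDA validation data.

```python
# Lack-of-fit F-test for a straight-line calibration with replicates
# (invented data). SS_residual splits into pure error + lack of fit.
import numpy as np
from scipy import stats

conc = np.repeat([50, 75, 100, 125, 150], 3)            # % of target, 3 reps
area = 10.0 * conc + np.random.default_rng(4).normal(0, 15, conc.size)

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
ss_resid = np.sum(resid ** 2)

levels = np.unique(conc)
ss_pe = sum(np.sum((area[conc == c] - area[conc == c].mean()) ** 2)
            for c in levels)                             # pure error
df_pe = conc.size - levels.size
ss_lof = ss_resid - ss_pe                                # lack of fit
df_lof = levels.size - 2

F = (ss_lof / df_lof) / (ss_pe / df_pe)
p = stats.f.sf(F, df_lof, df_pe)
print(f"lack-of-fit F({df_lof},{df_pe}) = {F:.2f}, p = {p:.3f}")
```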
Development and Validation of the Appearance and Performance Enhancing Drug Use Schedule
Langenbucher, James W.; Lai, Justine Karmin; Loeb, Katharine L.; Hollander, Eric
2011-01-01
Appearance-and-performance enhancing drug (APED) use is a form of drug use that includes use of a wide range of substances such as anabolic-androgenic steroids (AASs) and associated behaviors including intense exercise and dietary control. To date, there are no reliable or valid measures of the core features of APED use. The present study describes the development and psychometric evaluation of the Appearance and Performance Enhancing Drug Use Schedule (APEDUS) which is a semi-structured interview designed to assess the spectrum of drug use and related features of APED use. Eighty-five current APED using men and women (having used an illicit APED in the past year and planning to use an illicit APED in the future) completed the APEDUS and measures of convergent and divergent validity. Inter-rater agreement, scale reliability, one-week test-retest reliability, convergent and divergent validity, and construct validity were evaluated for each of the APEDUS scales. The APEDUS is a modular interview with 10 sections designed to assess the core drug and non-drug phenomena associated with APED use. All scales and individual items demonstrated high inter-rater agreement and reliability. Individual scales significantly correlated with convergent measures (DSM-IV diagnoses, aggression, impulsivity, eating disorder pathology) and were uncorrelated with a measure of social desirability. APEDUS subscale scores were also accurate measures of AAS dependence. The APEDUS is a reliable and valid measure of APED phenomena and an accurate measure of the core pathology associated with APED use. Issues with assessing APED use are considered and future research considered. PMID:21640487
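Scale (internal-consistency) reliability of the kind reported for the APEDUS sections is commonly summarized with Cronbach's alpha, sketched below on fabricated item responses.

```python
# Cronbach's alpha for one subscale (rows = respondents, columns = items;
# responses are fabricated for illustration).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

responses = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5],
             [1, 2, 1, 2], [3, 3, 4, 3], [5, 4, 5, 5]]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```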
Gómez, José Fernando; Curcio, Carmen-Lucía; Alvarado, Beatriz; Zunzunegui, María Victoria; Guralnik, Jack
2013-07-01
To assess the validity (convergent and construct) and reliability of the Short Physical Performance Battery (SPPB) among non-disabled adults between 65 to 74 years of age residing in the Andes Mountains of Colombia. Design Validation study; 150 subjects aged 65 to 74 years recruited from elderly associations (day-centers) in Manizales, Colombia. The SPPB tests of balance, including time to walk 4 meters and time required to stand from a chair 5 times were administered to all participants. Reliability was analyzed with a 7-day interval between assessments and use of repeated ANOVA testing. Construct validity was assessed using factor analysis and by testing the relationship between SPPB and depressive symptoms, cognitive function, and self rated health (SRH), while the concurrent validity was measured through relationships with mobility limitations and disability in Activities of Daily Living (ADL). ANOVA tests were used to establish these associations. Test-retest reliability of the SPPB was high: 0.87 (CI95%: 0.77-0.96). A one factor solution was found with three SPPB tests. SPPB was related to self-rated health, limitations in walking and climbing steps and to indicators of disability, as well as to cognitive function and depression. There was a graded decrease in the mean SPPB score with increasing disability and poor health. The Spanish version of SPPB is reliable and valid to assess physical performance among older adults from our region. Future studies should establish their clinical applications and explore usage in population studies.
Validity and reliability of the Short Physical Performance Battery (SPPB)
Curcio, Carmen-Lucía; Alvarado, Beatriz; Zunzunegui, María Victoria; Guralnik, Jack
2013-01-01
Objectives: To assess the validity (convergent and construct) and reliability of the Short Physical Performance Battery (SPPB) among non-disabled adults between 65 and 74 years of age residing in the Andes Mountains of Colombia. Methods: Design: validation study. Participants: 150 subjects aged 65 to 74 years recruited from elderly associations (day-centers) in Manizales, Colombia. Measurements: The SPPB tests of balance, including time to walk 4 meters and time required to stand from a chair 5 times, were administered to all participants. Reliability was analyzed with a 7-day interval between assessments and use of repeated-measures ANOVA testing. Construct validity was assessed using factor analysis and by testing the relationship between SPPB and depressive symptoms, cognitive function, and self-rated health (SRH), while concurrent validity was measured through relationships with mobility limitations and disability in Activities of Daily Living (ADL). ANOVA tests were used to establish these associations. Results: Test-retest reliability of the SPPB was high: 0.87 (95% CI: 0.77-0.96). A one-factor solution was found with three SPPB tests. SPPB was related to self-rated health, limitations in walking and climbing steps, and to indicators of disability, as well as to cognitive function and depression. There was a graded decrease in the mean SPPB score with increasing disability and poor health. Conclusion: The Spanish version of the SPPB is reliable and valid for assessing physical performance among older adults from our region. Future studies should establish its clinical applications and explore usage in population studies. PMID:24892614
Diagnostic methods for CW laser damage testing
NASA Astrophysics Data System (ADS)
Stewart, Alan F.; Shah, Rashmi S.
2004-06-01
High-performance optical coatings are an enabling technology for many applications: navigation systems, telecom, fusion, advanced measurement systems of many types, as well as directed energy weapons. The results of recent testing of superior optical coatings conducted at high flux levels will be presented. The diagnostics used in this type of nondestructive testing and the analysis of the data demonstrate the evolution of test methodology. Comparison of performance data under load to the predictions of thermal and optical models shows excellent agreement. These tests serve to anchor the models and validate the performance of the materials and coatings.
Nuclide Depletion Capabilities in the Shift Monte Carlo Code
Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...
2017-12-21
A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
Validating Visual Cues In Flight Simulator Visual Displays
NASA Astrophysics Data System (ADS)
Aronson, Moses
1987-09-01
Currently, evaluation of visual simulators is performed either by pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display to performance on the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive value for pilot performance were used to compare two samples of existing pilot performance data to show that the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of 3 pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, leading to the conclusion that the visual cues provided in the simulator were identical to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and the aircraft's have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used for validating the adequacy of the visual display for training.
Development and initial validation of the Classroom Motivational Climate Questionnaire (CMCQ).
Alonso Tapia, Jesús; Fernández Heredia, Blanca
2008-11-01
Research on classroom goal structures (CGS) has shown the usefulness of assessing the classroom motivational climate to evaluate educational interventions and to promote changes in teachers' activity. Therefore, the Classroom Motivational Climate Questionnaire (CMCQ) for secondary and high-school students was developed. To validate it, confirmatory factor analysis and correlation and regression analyses were performed. Results showed that the CMCQ is a highly reliable instrument that covers many of the types of teaching patterns that favour motivation to learn, correlates as expected with other measures of CGS, predicts satisfaction with the teacher's work well, and allows the detection of teachers who should revise their teaching.
Kaur, Jaspreet; Srinivasan, K. K.; Joseph, Alex; Gupta, Abhishek; Singh, Yogendra; Srinivas, Kona S.; Jain, Garima
2010-01-01
Objective: Venlafaxine hydrochloride is a structurally novel phenethyl bicyclic antidepressant, usually categorized as a serotonin–norepinephrine reuptake inhibitor (SNRI), although it has also been referred to as a serotonin–norepinephrine–dopamine reuptake inhibitor because it inhibits the reuptake of dopamine. Venlafaxine HCl is widely prescribed in the form of sustained-release formulations. In the current article, we report the development and validation of a fast and simple stability-indicating, isocratic high-performance liquid chromatographic (HPLC) method for the determination of venlafaxine hydrochloride in sustained-release formulations. Materials and Methods: The quantitative determination of venlafaxine hydrochloride was performed on a Kromasil C18 analytical column (250 × 4.6 mm i.d., 5 μm particle size) with 0.01 M phosphate buffer (pH 4.5):methanol (40:60) as the mobile phase, at a flow rate of 1.0 ml/min. UV detection was performed at 225 nm. Results: During method validation, parameters such as precision, linearity, accuracy, stability, limits of quantification and detection, and specificity were evaluated and remained within acceptable limits. Conclusions: The method has been successfully applied for the quantification and dissolution profiling of venlafaxine HCl in a sustained-release formulation. The method presents a simple and reliable solution for the routine quantitative analysis of venlafaxine HCl. PMID:21814426
NASA Astrophysics Data System (ADS)
Lily; Laila, L.; Prasetyo, B. E.
2018-03-01
A selective, reproducible, effective, sensitive, simple and fast High-Performance Liquid Chromatography (HPLC) method was developed, optimized and validated to analyze 25-desacetyl rifampicin (25-DR) in urine from tuberculosis patients. The separation was performed on an HPLC Agilent Technologies system with an Agilent Eclipse XDB-C18 column and a mobile phase of 65:35 v/v methanol:0.01 M sodium phosphate buffer pH 5.2, with detection at 254 nm and a flow rate of 0.8 ml/min. The mean retention time was 3.016 minutes. The method was linear from 2–10 μg/ml 25-DR with a correlation coefficient of 0.9978. The standard deviation, relative standard deviation and coefficient of variation for 2, 6 and 10 μg/ml 25-DR were 0–0.0829, 0–3.1752 and 0–0.0317%, respectively. The recovery of 5, 7 and 9 μg/ml 25-DR was 80.8661, 91.3480 and 111.1457%, respectively. The limits of detection (LoD) and quantification (LoQ) were 0.51 and 1.7 μg/ml, respectively. The method fulfils the validity guidelines of the International Conference on Harmonization (ICH) for bioanalytical methods, which include specificity, linearity, precision, accuracy, LoD, and LoQ. The developed method is suitable for pharmacokinetic analysis of various concentrations of 25-DR in human urine.
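As an aside on the limit figures quoted above, the following minimal Python sketch shows the standard ICH-style calculation of LoD and LoQ from a calibration line (3.3·σ/S and 10·σ/S); the concentrations, peak areas and variable names are illustrative assumptions, not the study's data.

```python
# Minimal sketch: LoD/LoQ from a calibration line (ICH-style 3.3*sigma/S and 10*sigma/S).
# The concentrations and peak areas below are illustrative, not the study's data.
import numpy as np

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])             # ug/mL (hypothetical points)
area = np.array([105.0, 212.0, 309.0, 420.0, 515.0])    # detector response (arbitrary units)

slope, intercept = np.polyfit(conc, area, 1)             # least-squares calibration line
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                            # residual SD (n - 2 degrees of freedom)

lod = 3.3 * sigma / slope                                # limit of detection
loq = 10.0 * sigma / slope                               # limit of quantification
r = np.corrcoef(conc, area)[0, 1]
print(f"r^2 = {r**2:.4f}, LoD = {lod:.2f} ug/mL, LoQ = {loq:.2f} ug/mL")
```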
The application of neural networks to the SSME startup transient
NASA Technical Reports Server (NTRS)
Meyer, Claudia M.; Maul, William A.
1991-01-01
Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine startup transient. The three parameters were the main combustion chamber pressure, a controlled parameter; the high-pressure oxidizer turbine discharge temperature, a redlined parameter; and the high-pressure fuel pump discharge pressure, a failure-indicating performance parameter. Network inputs consisted of time windows of data from engine measurements that correlated highly with the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
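To illustrate the general idea of modeling an engine parameter from time windows of correlated measurements, here is a minimal sketch using a small scikit-learn multilayer perceptron on synthetic sensor traces; it is not the authors' network, and the window length, signals and architecture are assumptions.

```python
# Sketch: predicting one parameter from time windows of a correlated measurement.
# Synthetic traces and a small scikit-learn MLP stand in for the paper's firings/backprop setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 1000)                                     # 5 s startup transient (synthetic)
sensor = np.tanh(t) + 0.01 * rng.normal(size=t.size)                # correlated input measurement
target = 0.8 * np.tanh(t - 0.2) + 0.005 * rng.normal(size=t.size)   # modeled parameter

window = 10                                              # samples per input time window
X = np.array([sensor[i - window:i] for i in range(window, t.size)])
y = target[window:]

split = int(0.7 * len(X))                                # train on early portion, validate on the rest
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("validation R^2:", round(model.score(X[split:], y[split:]), 3))
```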
Validation of Land Surface Temperature from Sentinel-3
NASA Astrophysics Data System (ADS)
Ghent, D.
2017-12-01
One of the main objectives of the Sentinel-3 mission is to measure sea- and land-surface temperature with high-end accuracy and reliability in support of environmental and climate monitoring in an operational context. Calibration and validation are thus key criteria for operationalization within the framework of the Sentinel-3 Mission Performance Centre (S3MPC). Land surface temperature (LST) has a long heritage of satellite observations which have facilitated our understanding of land surface and climate change processes, such as desertification, urbanization, deforestation and land/atmosphere coupling. These observations have been acquired from a variety of satellite instruments on platforms in both low-earth orbit and in geostationary orbit. Retrieval accuracy can be a challenge though; surface emissivities can be highly variable owing to the heterogeneity of the land, and atmospheric effects caused by the presence of aerosols and by water vapour absorption can bias the underlying LST. As such, a rigorous validation is critical in order to assess the quality of the data and the associated uncertainties. Validation of the level-2 SL_2_LST product, which became freely available on an operational basis from 5th July 2017, builds on an established validation protocol for satellite-based LST. This set of guidelines provides a standardized framework for structuring LST validation activities. The protocol introduces a four-pronged approach which can be summarised thus: i) in situ validation where ground-based observations are available; ii) radiance-based validation over sites that are homogeneous in emissivity; iii) intercomparison with retrievals from other satellite sensors; iv) time-series analysis to identify artefacts on an interannual time-scale. This multi-dimensional approach is a necessary requirement for assessing the performance of the LST algorithm for the Sea and Land Surface Temperature Radiometer (SLSTR), which is designed around biome-based coefficients, thus emphasizing the importance of non-traditional forms of validation such as radiance-based techniques. Here we present examples of the ongoing routine application of the protocol to operational Sentinel-3 LST data.
Advanced Lithium-ion Batteries with High Specific Energy and Improved Safety for Nasa's Missions
NASA Technical Reports Server (NTRS)
West, William; Smart, Marshall; Soler, Jess; Krause, Charlie; Hwang, Constanza; Bugga, Ratnakumar
2012-01-01
High-energy materials (cathodes, anodes) and high-voltage, safe electrolytes are required to meet the needs of future space missions. A. Cathodes: The layered-layered composites of Li2MnO3 and LiMO2 are promising. The power capability of the materials, however, requires further improvement. Suitable morphology is critical for good performance and high tap (packing) density. Surface coatings help with the interfacial kinetics and stability. B. Electrolytes: Small additions of flame-retardant additives improve flammability characteristics without affecting performance (rate and cycle life). 1.0 M in EC+EMC+TPP was shown to have good performance against the high-voltage cathode; performance was demonstrated in large-capacity prototype MCMB-LiNiCoO2 cells. Formulations with higher proportions are looking promising. Further validation through abuse tests (e.g., on 18650 cells) is still required.
Badr, Jihan M.
2013-01-01
Background: Yohimbine is an indole alkaloid used as a promising therapy for erectile dysfunction. A number of methods have been reported for the analysis of yohimbine in the bark or in pharmaceutical preparations. Materials and Methods: In the present work, a simple and sensitive high-performance thin-layer chromatographic method is developed for the determination of yohimbine (occurring as yohimbine hydrochloride) in pharmaceutical preparations and validated according to International Conference on Harmonization (ICH) guidelines. The method employed thin-layer chromatography aluminum sheets precoated with silica gel as the stationary phase, and the mobile phase consisted of chloroform:methanol:ammonia (97:3:0.2), which gave compact bands of yohimbine hydrochloride. Results: Linear regression data for the calibration curves of standard yohimbine hydrochloride showed a good linear relationship over a concentration range of 80–1000 ng/spot with respect to the area, and the correlation coefficient (R2) was 0.9965. The method was evaluated regarding accuracy, precision, selectivity, and robustness. Limits of detection and quantitation were recorded as 5 and 40 ng/spot, respectively. The proposed method efficiently separated yohimbine hydrochloride from other components even in complex mixtures containing powdered plant material. The amount of yohimbine hydrochloride ranged from 2.3 to 5.2 mg/tablet or capsule in preparations containing the pure alkaloid, while it varied from zero to 1.5–1.8 mg/capsule in dietary supplements containing powdered yohimbe bark. Conclusion: We conclude that this method employing high-performance thin-layer chromatography (HPTLC) for the quantitative determination of yohimbine hydrochloride in pharmaceutical preparations is efficient, simple, accurate, and validated. PMID:23661986
Badr, Jihan M
2013-01-01
Yohimbine is an indole alkaloid used as a promising therapy for erectile dysfunction. A number of methods have been reported for the analysis of yohimbine in the bark or in pharmaceutical preparations. In the present work, a simple and sensitive high-performance thin-layer chromatographic method is developed for the determination of yohimbine (occurring as yohimbine hydrochloride) in pharmaceutical preparations and validated according to International Conference on Harmonization (ICH) guidelines. The method employed thin-layer chromatography aluminum sheets precoated with silica gel as the stationary phase, and the mobile phase consisted of chloroform:methanol:ammonia (97:3:0.2), which gave compact bands of yohimbine hydrochloride. Linear regression data for the calibration curves of standard yohimbine hydrochloride showed a good linear relationship over a concentration range of 80-1000 ng/spot with respect to the area, and the correlation coefficient (R2) was 0.9965. The method was evaluated regarding accuracy, precision, selectivity, and robustness. Limits of detection and quantitation were recorded as 5 and 40 ng/spot, respectively. The proposed method efficiently separated yohimbine hydrochloride from other components even in complex mixtures containing powdered plant material. The amount of yohimbine hydrochloride ranged from 2.3 to 5.2 mg/tablet or capsule in preparations containing the pure alkaloid, while it varied from zero to 1.5-1.8 mg/capsule in dietary supplements containing powdered yohimbe bark. We conclude that this method employing high-performance thin-layer chromatography (HPTLC) for the quantitative determination of yohimbine hydrochloride in pharmaceutical preparations is efficient, simple, accurate, and validated.
El Yazbi, Fawzy A.; Hassan, Ekram M.; Khamis, Essam F.; Ragab, Marwa A.A.; Hamdy, Mohamed M.A.
2016-01-01
A validated and highly selective high-performance thin-layer chromatography (HPTLC) method was developed for the determination of ketorolac tromethamine (KTC) with phenylephrine hydrochloride (PHE) (Mixture 1) and with febuxostat (FBX) (Mixture 2) in bulk drug and in combined dosage forms. The proposed method was based on HPTLC separation of the drugs followed by densitometric measurements of their spots at 273 and 320 nm for Mixtures 1 and 2, respectively. The separation was carried out on Merck HPTLC aluminum sheets of silica gel 60 F254 using chloroform–methanol–ammonia (7:3:0.1, v/v) and (7.5:2.5:0.1, v/v) as the mobile phase for the KTC/PHE and KTC/FBX mixtures, respectively. Linear regression lines were obtained over the concentration ranges 0.20–0.60 and 0.60–1.95 µg/band for KTC and PHE (Mixture 1), respectively, and 0.10–1.00 and 0.25–2.50 µg/band for KTC and FBX (Mixture 2), respectively, with correlation coefficients higher than 0.999. The method was successfully applied to the analysis of the two drugs in their synthetic mixtures and in their dosage forms. The mean percentage recoveries were in the range of 98–102%, and the RSD did not exceed 2%. The method was validated according to ICH guidelines and showed good performance in terms of linearity, sensitivity, precision, accuracy and stability. PMID:26847918
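As a small illustration of the accuracy/precision summary above, the sketch below computes percentage recovery and relative standard deviation for a set of spiked replicates; the spike level and measured values are hypothetical.

```python
# Sketch: percentage recovery and RSD for spiked samples, the kind of accuracy/precision
# summary reported above. Values are illustrative, not the study's measurements.
import numpy as np

spiked = 0.40                                            # ug/band added (hypothetical level)
found = np.array([0.398, 0.405, 0.401, 0.396, 0.403])    # measured amounts (hypothetical)

recovery = 100.0 * found / spiked                        # % recovery per replicate
rsd = 100.0 * found.std(ddof=1) / found.mean()           # relative standard deviation

print(f"mean recovery = {recovery.mean():.1f}%, RSD = {rsd:.2f}%")
```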
Teglia, Carla M; Gil García, María D; Galera, María Martínez; Goicoechea, Héctor C
2014-08-01
When determining endogenous compounds in biological samples, the lack of blank or analyte-free matrix samples involves the use of alternative strategies for calibration and quantitation. This article deals with the development, optimization and validation of a high performance liquid chromatography method for the determination of retinoic acid in plasma, obtaining at the same time information about its isomers and taking into account the basal concentration of these endobiotica. An experimental design was used for the optimization of three variables (mobile phase composition, flow rate and column temperature) through a central composite design. Four responses were selected for optimization purposes (area under the peaks, quantity of peaks, analysis time and resolution between the first principal peak and the following one). The optimum conditions resulted in a mobile phase consisting of methanol 83.4% (v/v), acetonitrile 0.6% (v/v) and acid aqueous solution 16.0% (v/v); a flow rate of 0.68 mL/min and a column temperature of 37.10 °C. Detection was performed at 350 nm by a diode array detector. The method was validated following a holistic approach that included not only the classical parameters related to method performance but also the robustness and the expected proportion of acceptable results lying inside predefined acceptability intervals, i.e., the uncertainty of measurements. The method validation results indicated high selectivity and good precision characteristics, studied at four concentration levels, with RSD less than 5.0% for retinoic acid (less than 7.5% at the LOQ concentration level) in intra- and inter-assay precision studies. Linearity was proved for a range from 0.00489 to 15.109 ng/mL of retinoic acid, and the recovery, which was studied at four different fortification levels in human plasma samples, varied from 99.5% to 106.5% for retinoic acid. The applicability of the method was demonstrated by determining retinoic acid and obtaining information about its isomers in human and frog plasma samples from different origins. Copyright © 2014 Elsevier B.V. All rights reserved.
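The optimization step above can be illustrated, in spirit, by fitting a quadratic response surface to three chromatographic factors and locating its optimum on a grid; this is a generic sketch with synthetic factor ranges and responses, not the study's central composite design.

```python
# Sketch: fit a quadratic response surface to three chromatographic factors and pick the
# optimum on a grid. Factor ranges and the synthetic "response" are illustrative only.
import numpy as np
from itertools import product
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# hypothetical design points: methanol %, flow rate (mL/min), column temperature (deg C)
X = np.array(list(product([75, 80, 85], [0.5, 0.7, 0.9], [30, 35, 40])), dtype=float)
# synthetic response (e.g. a desirability score) peaking near 83%, 0.7 mL/min, 37 deg C
y = -(X[:, 0] - 83) ** 2 - 50 * (X[:, 1] - 0.7) ** 2 - 0.5 * (X[:, 2] - 37) ** 2
y = y + rng.normal(scale=0.5, size=len(X))

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# dense grid search for the fitted optimum
grid = np.array(list(product(np.linspace(75, 85, 41),
                             np.linspace(0.5, 0.9, 41),
                             np.linspace(30, 40, 41))))
pred = model.predict(poly.transform(grid))
print("predicted optimum (MeOH %, flow, temp):", grid[np.argmax(pred)])
```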
Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability
NASA Technical Reports Server (NTRS)
Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian
2011-01-01
The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission-relevant problems: (1) shock/turbulent boundary layer interaction on a compression corner, (2) shock/turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high-speed return to Earth. A validation-based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well-established correlations is also included in this effort. A thorough review of the literature in search of validation experiments was performed, which identified a scarcity of ground-based validation experiments at hypersonic conditions. In particular, a shortage of useable experimental data at flight-like enthalpies and Reynolds numbers was found. The uncertainty was quantified using metrics that measured the discrepancy between model predictions and experimental data. The discrepancy data were statistically analyzed and investigated for physics-based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission-relevant problem is found in the four companion papers.
Validation study of the in vitro skin irritation test with the LabCyte EPI-MODEL24.
Kojima, Hajime; Ando, Yoko; Idehara, Kenji; Katoh, Masakazu; Kosaka, Tadashi; Miyaoka, Etsuyoshi; Shinoda, Shinsuke; Suzuki, Tamie; Yamaguchi, Yoshihiro; Yoshimura, Isao; Yuasa, Atsuko; Watanabe, Yukihiko; Omori, Takashi
2012-03-01
A validation study on an in vitro skin irritation assay was performed with the reconstructed human epidermis (RhE) LabCyte EPI-MODEL24, developed by Japan Tissue Engineering Co. Ltd (Gamagori, Japan). The protocol that was followed in the current study was an optimised version of the EpiSkin protocol (LabCyte assay). According to the United Nations Globally Harmonised System (UN GHS) of classification for assessing the skin irritation potential of a chemical, 12 irritants and 13 non-irritants were validated by a minimum of six laboratories from the Japanese Society for Alternatives to Animal Experiments (JSAAE) skin irritation assay validation study management team (VMT). The 25 chemicals were listed in the European Centre for the Validation of Alternative Methods (ECVAM) performance standards. The reconstructed tissues were exposed to the chemicals for 15 minutes and incubated for 42 hours in fresh culture medium. Subsequently, the level of interleukin-1 alpha (IL-1 α) present in the conditioned medium was measured, and tissue viability was assessed by using the MTT assay. The results of the MTT assay obtained with the LabCyte EPI-MODEL24 (LabCyte MTT assay) demonstrated high within-laboratory and between-laboratory reproducibility, as well as high accuracy for use as a stand-alone assay to distinguish skin irritants from non-irritants. In addition, the IL-1α release measurements in the LabCyte assay were clearly unnecessary for the success of this model in the classification of chemicals for skin irritation potential. 2012 FRAME.
Kim, Joseph; Flick, Jeanette; Reimer, Michael T; Rodila, Ramona; Wang, Perry G; Zhang, Jun; Ji, Qin C; El-Shourbagy, Tawakol A
2007-11-01
As an effective DPP-IV inhibitor, 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279) is an investigational drug candidate under development at Abbott Laboratories for the potential treatment of type 2 diabetes. In order to support the development of ABT-279, multiple analytical methods for accurate, precise and selective concentration determination of ABT-279 in different matrices were developed and validated in accordance with the US Food and Drug Administration Guidance on Bioanalytical Method Validation. The analytical method for ABT-279 in dog plasma was validated in parallel with other validations for ABT-279 determination in different matrices. In order to shorten the sample preparation time and increase method precision, an automated multi-channel liquid handler was used to perform high-throughput protein precipitation and all other liquid transfers. The separation was performed on a Waters YMC ODS-AQ column (2.0 x 150 mm, 5 μm, 120 Å) with a mobile phase of 20 mM ammonium acetate in 20% acetonitrile at a flow rate of 0.3 mL/min. Data collection started at 2.2 min and continued for 2.0 min. The validated linear dynamic range in dog plasma was between 3.05 and 2033.64 ng/mL using a 50 μL sample volume. The r2 coefficient of determination achieved from three consecutive runs was between 0.998625 and 0.999085. The mean bias was between -4.1 and 4.3% for all calibration standards, including the lower limit of quantitation. The mean bias was between -8.0 and 0.4% for the quality control samples. The precision, expressed as a coefficient of variation (CV), was ≤4.1% for all levels of quality control samples. The validation results demonstrated that the high-throughput method was accurate, precise and selective for the determination of ABT-279 in dog plasma. The validated method was also employed to support two toxicology studies. The passing rate was 100% for all 49 runs from one validation study and two toxicology studies. Copyright (c) 2007 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yugatama, A.; Rohmani, S.; Dewangga, A.
2018-03-01
Atorvastatin is the primary choice for dyslipidemia treatment. Due to the patent expiration of atorvastatin, the pharmaceutical industry makes copies of the drug. Therefore, methods for tablet quality testing involving the determination of atorvastatin content in tablets need to be developed. The purpose of this research was to develop and validate a simple analytical method for atorvastatin tablets by HPLC. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a mixture of methanol-water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method showed good validation results for selectivity, linearity, accuracy, precision, LOD, and LOQ for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, and the linearity range was 20-120 ng/mL.
NASA Astrophysics Data System (ADS)
Susilaningsih, E.; Khotimah, K.; Nurhayati, S.
2018-04-01
In general, the assessment of laboratory skills lacks specific guidelines, and individual students' performance and skills during laboratory work are still not observed and measured properly. Performance assessment is an alternative form of assessment that can be used to measure students' laboratory skills. The purpose of this study was to determine whether the performance assessment instrument resulting from this research can be used to assess students' basic laboratory skills. This research followed a Research and Development (R&D) design. The data analysis showed that the developed performance assessment instrument is feasible to implement, with a validation score of 62.5 (very good category) for the laboratory skills observation sheets and very good ratings for all components. The procedure consisted of a preliminary research stage and a development stage. The preliminary stage was divided into two parts, namely field studies and literature studies. The development stage was divided into several parts, namely 1) development of the instrument type, 2) validation by experts, 3) a limited-scale trial, 4) large-scale trials and 5) implementation of the product. The instrument was categorized as effective because 26 of 29 students demonstrated very high or high laboratory skills. The resulting performance assessment instrument meets the standard and can be used to assess students' basic laboratory skills.
Jank, Louise; Martins, Magda Targa; Arsand, Juliana Bazzan; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara
2015-01-01
This study describes the development and validation procedures for the scope extension of a method for the determination of β-lactam antibiotic residues (ampicillin, amoxicillin, penicillin G, penicillin V, oxacillin, cloxacillin, dicloxacillin, nafcillin, ceftiofur, cefquinome, cefoperazone, cephapirine, cefalexin and cephalonium) in bovine milk. Sample preparation was performed by liquid-liquid extraction (LLE) followed by two clean-up steps, including low-temperature purification (LTP) and a solid-phase dispersion clean-up. Extracts were analysed using a liquid chromatography-electrospray-tandem mass spectrometry system (LC-ESI-MS/MS). Chromatographic separation was performed on a C18 column, using methanol and water (both with 0.1% formic acid) as the mobile phase. Method validation was performed according to the criteria of Commission Decision 2002/657/EC. The main validation parameters, such as linearity, limit of detection, decision limit (CCα), detection capability (CCβ), accuracy, and repeatability, were determined and shown to be adequate. The method was applied to real samples (more than 250), and two milk samples had levels above the maximum residue limits (MRLs) for cloxacillin (CLX) and cefapirin (CFAP).
Validation of an ultra-fast UPLC-UV method for the separation of antituberculosis tablets.
Nguyen, Dao T-T; Guillarme, Davy; Rudaz, Serge; Veuthey, Jean-Luc
2008-04-01
A simple method using ultra-performance LC (UPLC) coupled with UV detection was developed and validated for the determination of antituberculosis drugs in combined dosage form, i.e. isoniazid (ISN), pyrazinamide (PYR) and rifampicin (RIF). Drugs were separated on a short column (2.1 mm x 50 mm) packed with 1.7 µm particles, using an elution gradient procedure. At 30 °C, less than 2 min was necessary for the complete separation of the three antituberculosis drugs, while the original USP method was performed in 15 min. Further improvements were obtained with the combination of UPLC and high temperature (up to 90 °C), namely HT-UPLC, which allows the application of higher mobile phase flow rates. Therefore, the separation of ISN, PYR and RIF was performed in less than 1 min. After validation (selectivity, trueness, precision and accuracy), both methods (UPLC and HT-UPLC) have proven suitable for the routine quality control analysis of antituberculosis drugs in combined dosage form. Additionally, a large number of samples per day can be analysed due to the short analysis times.
Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane
2018-01-01
Introduction: Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific for the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. Material and methods: The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results: A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire. 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion: The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635
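For readers unfamiliar with the reliability statistic quoted above, the following sketch computes a two-way random-effects ICC(2,1) in the Shrout and Fleiss sense from an n-subjects by k-raters score matrix; the scores are hypothetical.

```python
# Sketch: two-way random-effects ICC(2,1) for inter-rater agreement (Shrout & Fleiss),
# computed from an n-subjects x k-raters score matrix. The scores are illustrative.
import numpy as np

scores = np.array([[133, 130, 136],    # rows: video recordings, columns: raters
                   [155, 158, 152],    # (hypothetical H-OSATS-style totals)
                   [178, 175, 180],
                   [140, 142, 139],
                   [160, 163, 158]], dtype=float)

n, k = scores.shape
grand = scores.mean()
row_means = scores.mean(axis=1)
col_means = scores.mean(axis=0)

msb = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between-subjects mean square
msc = n * ((col_means - grand) ** 2).sum() / (k - 1)          # between-raters mean square
sse = ((scores - grand) ** 2).sum() - (n - 1) * msb - (k - 1) * msc
mse = sse / ((n - 1) * (k - 1))                               # residual mean square

icc_2_1 = (msb - mse) / (msb + (k - 1) * mse + k * (msc - mse) / n)
print("ICC(2,1) =", round(icc_2_1, 3))
```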
Esbenshade, Adam J; Zhao, Zhiguo; Aftandilian, Catherine; Saab, Raya; Wattier, Rachel L; Beauchemin, Melissa; Miller, Tamara P; Wilkes, Jennifer J; Kelly, Michael J; Fernbach, Alison; Jeng, Michael; Schwartz, Cindy L; Dvorak, Christopher C; Shyr, Yu; Moons, Karl G M; Sulis, Maria-Luisa; Friedman, Debra L
2017-10-01
Pediatric oncology patients are at an increased risk of invasive bacterial infection due to immunosuppression. The risk of such infection in the absence of severe neutropenia (absolute neutrophil count ≥ 500/μL) is not well established, and a validated prediction model for bloodstream infection (BSI) risk offers clinical usefulness. A 6-site retrospective external validation was conducted using a previously published risk prediction model for BSI in febrile pediatric oncology patients without severe neutropenia: the Esbenshade/Vanderbilt (EsVan) model. A reduced model (EsVan2) excluding 2 less clinically reliable variables also was created using the initial EsVan model derivation cohort, and was validated using all 5 external validation cohorts. One data set was used only in sensitivity analyses because some variables were missing. From the 5 primary data sets, there were a total of 1197 febrile episodes and 76 episodes of bacteremia. The overall C statistic for predicting bacteremia was 0.695, with a calibration slope of 0.50 for the original model and a calibration slope of 1.0 when recalibration was applied to the model. The model performed better in predicting high-risk bacteremia (gram-negative or Staphylococcus aureus infection) versus BSI alone, with a C statistic of 0.801 and a calibration slope of 0.65. The EsVan2 model outperformed the EsVan model across data sets, with a C statistic of 0.733 for predicting BSI and a C statistic of 0.841 for high-risk BSI. The results of this external validation demonstrated that the EsVan and EsVan2 models are able to predict BSI across multiple performance sites and, once validated and implemented prospectively, could assist in decision making in clinical practice. Cancer 2017;123:3781-3790. © 2017 American Cancer Society.
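The two validation metrics reported above can be reproduced on any external cohort roughly as follows: the C statistic is the ROC area under the curve of the predicted risks, and the calibration slope is the coefficient from a logistic regression of the outcome on the model's predicted log-odds. The sketch uses a synthetic cohort and assumed variable names.

```python
# Sketch: C statistic (ROC AUC of predicted risks) and calibration slope (coefficient of a
# logistic regression of the outcome on the predicted log-odds) for a hypothetical cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1200
true_logit = rng.normal(-3.0, 1.0, n)                    # latent risk (synthetic cohort)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))       # observed bacteremia (0/1)
pred_logit = 0.5 * true_logit - 1.0                      # an imperfect, mis-calibrated model
pred_risk = 1 / (1 + np.exp(-pred_logit))

c_statistic = roc_auc_score(y, pred_risk)

# large C disables regularization so the slope is an ordinary maximum-likelihood estimate
cal = LogisticRegression(C=1e6).fit(pred_logit.reshape(-1, 1), y)
calibration_slope = cal.coef_[0][0]
print(f"C statistic = {c_statistic:.3f}, calibration slope = {calibration_slope:.2f}")
```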
ERIC Educational Resources Information Center
Demir, Tazegul
2013-01-01
The main purpose of this study is to demonstrate students' attitudes towards project and performance tasks in Turkish lessons and to develop a reliable and valid measurement tool. A total of 461 junior high school students participated in this study. In this study, firstly, the items were prepared and specialists were consulted (content…
CZT Detector and HXI Development at CASS/UCSD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rothschild, Richard E.; Tomsick, John A.; Matteson, James L.
2006-06-09
The scientific goals and concept design of the Hard X-ray Imager (HXI) for MIRAX are presented to set the context for a discussion of the status of the HXI development. Emphasis is placed upon the RENA ASIC performance, the detector module upgrades, and a planned high altitude balloon flight to validate the HXI design and performance in a near-space environment.
[Computerized system validation of clinical researches].
Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel
2015-11-01
Validation is a documented process that provides a high degree of assurance that the computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition and continues through application and maintenance until system retirement and retention of the e-records in accordance with regulatory rules. The objective is to clearly demonstrate that each application of information technology fulfils its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, ensuring that the product meets its pre-determined specifications and attributes of quality, safety and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in the light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for CSV remains on the shoulders of the business process owner (sponsor). In order to show that compliance in system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.
Validation of China-wide interpolated daily climate variables from 1960 to 2011
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang
2015-02-01
Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83%. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95% of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77%. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58% of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
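A minimal sketch of the interpolation-plus-validation workflow described above: fit a thin-plate spline to training stations and score it at held-out validation sites with RMSE and R2. The station coordinates and values are synthetic, and the real method also uses elevation and additional predictors.

```python
# Sketch: thin-plate-spline interpolation of a station variable onto held-out validation
# sites, with RMSE and R^2 as skill scores. Stations and values are synthetic stand-ins.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
lon = rng.uniform(75, 130, 600)                              # hypothetical station longitudes
lat = rng.uniform(20, 50, 600)                               # hypothetical station latitudes
temp = 30 - 0.6 * (lat - 20) + rng.normal(0, 0.5, 600)       # synthetic daily mean temperature

train, valid = slice(0, 450), slice(450, 600)                # independent validation sites
tps = Rbf(lon[train], lat[train], temp[train], function='thin_plate')

pred = tps(lon[valid], lat[valid])
rmse = np.sqrt(np.mean((pred - temp[valid]) ** 2))
r2 = np.corrcoef(pred, temp[valid])[0, 1] ** 2
print(f"validation RMSE = {rmse:.2f}, R^2 = {r2:.2f}")
```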
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Cross, Kevin P.
Control and minimization of human exposure to potential genotoxic impurities found in drug substances and products is an important part of preclinical safety assessments of new drug products. The FDA's 2008 draft guidance on genotoxic and carcinogenic impurities in drug substances and products allows use of computational quantitative structure–activity relationships (QSAR) to identify structural alerts for known and expected impurities present at levels below qualified thresholds. This study provides the information necessary to establish the practical use of a new in silico toxicology model for predicting Salmonella t. mutagenicity (Ames assay outcome) of drug impurities and other chemicals. We describe the model's chemical content and toxicity fingerprint in terms of compound space, molecular and structural toxicophores, and have rigorously tested its predictive power using both cross-validation and external validation experiments, as well as case studies. Consistent with the desired regulatory use, the model performs with high sensitivity (81%) and high negative predictivity (81%) based on external validation with 2368 compounds foreign to the model and having known mutagenicity. A database of drug impurities was created from proprietary FDA submissions and the public literature, which found significant overlap between the structural features of drug impurities and training set chemicals in the QSAR model. Overall, the model's predictive performance was found to be acceptable for screening drug impurities for Salmonella mutagenicity. Highlights: We characterize a new in silico model to predict mutagenicity of drug impurities. The model predicts Salmonella mutagenicity and will be useful for safety assessment. We examine toxicity fingerprints and toxicophores of this Ames assay model. We compare these attributes to those found in drug impurities known to FDA/CDER. We validate the model and find it has a desired predictive performance.
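The headline performance figures above (sensitivity and negative predictivity) come from an external-validation confusion matrix; the sketch below shows the arithmetic with made-up counts chosen only to illustrate the calculation.

```python
# Sketch: sensitivity and negative predictivity from an external-validation confusion
# matrix. The counts are made up for illustration; they are not the study's data.
tp, fn = 810, 190      # mutagens predicted positive / missed (hypothetical counts)
tn, fp = 810, 240      # non-mutagens predicted negative / false alarms (hypothetical)

sensitivity = tp / (tp + fn)               # fraction of true mutagens flagged
negative_predictivity = tn / (tn + fn)     # fraction of negative calls that are correct
print(f"sensitivity = {sensitivity:.0%}, negative predictivity = {negative_predictivity:.0%}")
```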
NASA Astrophysics Data System (ADS)
Cortesi, N.; Trigo, R.; Gonzalez-Hidalgo, J. C.; Ramos, A. M.
2012-06-01
Precipitation over the Iberian Peninsula (IP) is highly variable and shows large spatial contrasts between wet mountainous regions, to the north, and dry regions in the inland plains and southern areas. In this work, a high-density monthly precipitation dataset for the IP was coupled with a set of 26 atmospheric circulation weather types (Trigo and DaCamara, 2000) to reconstruct Iberian monthly precipitation from October to May at a very high resolution of 3030 precipitation series (overall mean density of one station per 200 km2). A stepwise linear regression model with forward selection was used to develop monthly reconstructed precipitation series, calibrated and validated over the 1948-2003 period. Validation was conducted by means of a leave-one-out cross-validation over the calibration period. The results show good model performance for the selected months, with a mean coefficient of variation (CV) around 0.6 for the validation period, being particularly robust over the western and central sectors of the IP, while the predicted values in the Mediterranean and northern coastal areas are less accurate. For three long stations (Lisbon, Madrid and Valencia) we show the comparison between the model and the original data as an example of how these models can be used to obtain monthly precipitation fields since the 1850s over most of the IP for this very high-density network.
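The reconstruction approach described above (forward-selection regression scored by leave-one-out cross-validation) can be sketched as follows; the weather-type frequencies and precipitation series are synthetic, and the stopping rule is an assumption.

```python
# Sketch: forward-selection regression of monthly precipitation on weather-type frequencies,
# scored by leave-one-out cross-validation. Predictors and series here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_years, n_types = 56, 26                        # years x circulation weather types
X = rng.poisson(5, size=(n_years, n_types)).astype(float)            # monthly WT frequencies
precip = 4.0 * X[:, 0] + 2.5 * X[:, 3] + rng.normal(0, 5, n_years)   # synthetic target

selected, remaining = [], list(range(n_types))
best_score = -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], precip,
                                 cv=LeaveOneOut(), scoring='neg_mean_squared_error').mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:             # stop when LOOCV error no longer improves
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected weather types:", selected, "LOOCV MSE:", round(-best_score, 2))
```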
Validation of high throughput sequencing and microbial forensics applications.
Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel
2014-01-01
High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security. PMID:25101166
Grindle, Susan; Garganta, Cheryl; Sheehan, Susan; Gile, Joe; Lapierre, Andree; Whitmore, Harry; Paigen, Beverly; DiPetrillo, Keith
2006-12-01
Chronic kidney disease is a substantial medical and economic burden. Animal models, including mice, are a crucial component of kidney disease research; however, recent studies disprove the ability of autoanalyzer methods to accurately quantify plasma creatinine levels, an established marker of kidney disease, in mice. Therefore, we validated autoanalyzer methods for measuring blood urea nitrogen (BUN) and urinary albumin concentrations, 2 common markers of kidney disease, in samples from mice. We used high-performance liquid chromatography to validate BUN concentrations measured using an autoanalyzer, and we utilized mouse albumin standards to determine the accuracy of the autoanalyzer over a wide range of albumin concentrations. We observed a significant, linear correlation between BUN concentrations measured by autoanalyzer and high-performance liquid chromatography. We also found a linear relationship between known and measured albumin concentrations, although the autoanalyzer method underestimated the known amount of albumin by 3.5- to 4-fold. We confirmed that plasma and urine constituents do not interfere with the autoanalyzer methods for measuring BUN and urinary albumin concentrations. In addition, we verified BUN and albuminuria as useful markers to detect kidney disease in aged mice and mice with 5/6-nephrectomy. We conclude that autoanalyzer methods are suitable for high-throughput analysis of BUN and albumin concentrations in mice. The autoanalyzer accurately quantifies BUN concentrations in mouse plasma samples and is useful for measuring urinary albumin concentrations when used with mouse albumin standards.
Validation of the one pass measure for motivational interviewing competence.
McMaster, Fiona; Resnicow, Ken
2015-04-01
This paper examines the psychometric properties of the OnePass coding system: a new, user-friendly tool for evaluating practitioner competence in motivational interviewing (MI). We provide data on reliability and on validity against the current gold standard, the Motivational Interviewing Treatment Integrity tool (MITI). We compared scores from 27 videotaped MI sessions, performed by student counselors trained in MI with simulated patients, using both OnePass and the MITI, with three different raters for each tool. Reliability was estimated using intra-class coefficients (ICCs), and validity was assessed using Pearson's r. OnePass had high levels of inter-rater reliability, with 19 of 23 items showing substantial to almost perfect agreement. Taking the pair of scores with the highest inter-rater reliability on the MITI, the concurrent validity between the two measures ranged from moderate to high. Validity was highest for evocation, autonomy, direction and empathy. OnePass appears to have good inter-rater reliability while capturing similar dimensions of MI as the MITI. Despite the moderate concurrent validity with the MITI, OnePass shows promise in evaluating both traditional and novel interpretations of MI. OnePass may be a useful tool for developing and improving practitioner competence in MI where access to MITI coders is limited. Copyright © 2015. Published by Elsevier Ireland Ltd.
Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks
NASA Astrophysics Data System (ADS)
Yang, Wenyuan; Liu, Enshan
2016-12-01
This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.
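The internal-consistency figure quoted above is Cronbach's alpha; a minimal sketch of the computation from a cases-by-items score matrix (with hypothetical ratings) is given below.

```python
# Sketch: Cronbach's alpha from a cases x items score matrix, matching the kind of
# internal-consistency figure reported above. Ratings are illustrative.
import numpy as np

# rows: rated tasks (cases), columns: ITAI-style items scored 1-4 (hypothetical data)
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 3],
                   [1, 2, 2, 1],
                   [3, 3, 4, 4],
                   [2, 3, 2, 2]], dtype=float)

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
total_var = scores.sum(axis=1).var(ddof=1)        # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print("Cronbach's alpha =", round(alpha, 3))
```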
Stockman, Ida J; Newkirk-Turner, Brandi L; Swartzlander, Elaina; Morris, Lekeitha R
2016-02-01
This study is a response to the need for evidence-based measures of spontaneous oral language to assess African American children under the age of 4 years. We determined if pass/fail status on a minimal competence core for morphosyntax (MCC-MS) was more highly related to scores on the Index of Productive Syntax (IPSyn)-the measure of convergent criterion validity-than to scores on 3 measures of divergent validity: number of different words (Watkins, Kelly, Harbers, & Hollis, 1995), Percentage of Consonants Correct-Revised (Shriberg, Austin, Lewis, McSweeney, & Wilson, 1997), and the Leiter International Performance Scale-Revised (Roid & Miller, 1997). Archival language samples for 68 African American 3-year-olds were analyzed to determine MCC-MS pass/fail status and the scores on measures of convergent and divergent validity. Higher IPSyn scores were observed for 60 children who passed the MCC-MS than for 8 children who did not. A significant positive correlation, rpb = .73, between MCC-MS pass/fail status and IPSyn scores was observed. This coefficient was higher than MCC-MS correlations with measures of divergent validity: rpb = .13 (Leiter International Performance Scale-Revised), rpb = .42 (number of different words in 100 utterances), and rpb = .46 (Percentage of Consonants Correct-Revised). The MCC-MS has convergent criterion validity with the IPSyn. Although more research is warranted, both measures can be potentially used in oral language assessments of African American 3-year-olds.
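The r_pb statistic used above is a point-biserial correlation between a dichotomous pass/fail status and a continuous score; the sketch below computes it with SciPy on illustrative data.

```python
# Sketch: point-biserial correlation between a pass/fail status (0/1) and a continuous
# score, the statistic (r_pb) used above. Data are illustrative, not the study sample.
import numpy as np
from scipy import stats

pass_fail = np.array([1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])           # MCC-MS-style pass/fail
ipsyn = np.array([72, 68, 75, 70, 51, 66, 48, 73, 69, 55, 71, 74])   # continuous scores

r_pb, p_value = stats.pointbiserialr(pass_fail, ipsyn)
print(f"r_pb = {r_pb:.2f}, p = {p_value:.3f}")
```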
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency uses models that are not validated for ULV insecticide applications, together with exposure assumptions, to perform its human and ecological risk assessments. Currently, there is no validated model that can accurately predict deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian information criterion, and its prediction performance was evaluated using k-fold cross-validation. The density of the formulation and the density-by-CMD interaction coefficients were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher-density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross-validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
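The model-selection and validation steps described above (BIC minimization followed by k-fold cross-validation) can be sketched as follows; the predictors, units and candidate model set are assumptions, not the study's actual variables.

```python
# Sketch: comparing candidate deposition regressions by BIC and checking the winner with
# k-fold cross-validation. Predictors (density, CMD, their interaction) are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 120
density = rng.uniform(0.8, 1.2, n)               # formulation density (hypothetical units)
cmd = rng.uniform(5, 30, n)                      # count median diameter, um (hypothetical)
deposition = 2.0 - 1.5 * density + 0.08 * density * cmd + rng.normal(0, 0.2, n)

def bic(X, y):
    """Gaussian BIC for an OLS fit: n*log(RSS/n) + p*log(n), where p counts the intercept."""
    model = LinearRegression().fit(X, y)
    rss = ((y - model.predict(X)) ** 2).sum()
    p = X.shape[1] + 1
    return n * np.log(rss / n) + p * np.log(n)

candidates = {
    "density only": np.column_stack([density]),
    "density + cmd": np.column_stack([density, cmd]),
    "density + cmd + density:cmd": np.column_stack([density, cmd, density * cmd]),
}
best_name, best_X = min(candidates.items(), key=lambda kv: bic(kv[1], deposition))
cv_mse = -cross_val_score(LinearRegression(), best_X, deposition,
                          cv=KFold(n_splits=10, shuffle=True, random_state=0),
                          scoring='neg_mean_squared_error').mean()
print("selected:", best_name, "| 10-fold CV MSE:", round(cv_mse, 4))
```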
Usefulness of the Patient Health Questionnaire-9 for Korean medical students.
Yoon, Seoyoung; Lee, Yunhwan; Han, Changsu; Pae, Chi-Un; Yoon, Ho-Kyoung; Patkar, Ashwin A; Steffens, David C; Kim, Yong-Ku
2014-12-01
Depression may be highly prevalent among medical students, lowering their functioning and quality of life. Using appropriate extant depression scales to screen for depression and determining factors associated with depression can be helpful in managing it. This study examines the validity and reliability of the Patient Health Questionnaire-9 (PHQ-9) for medical students and the relationship between their scores and sociodemographic variables. This study surveyed 174 medical students using demographic questionnaires, the PHQ-9, the Beck Depression Inventory (BDI), the Patient Heath Questionnaire-15 (PHQ-15), the Beck Anxiety Inventory (BAI), and the Perceived Stress Scale (PSS). It calculated the Cronbach's α for internal consistency and Pearson's correlation coefficients for test-retest reliability and convergent validity of the PHQ-9. In order to examine the relationship between depression and demographic variables, this study performed independent t tests, one-way analysis of variance, chi-square, and binary logistic regressions. The PHQ-9 was reliable (Cronbach's α = 0.837, test-retest reliability, r = 0.650) and valid (r = 0.509-0.807) when employed with medical students. Total scores on the PHQ-9 were significantly higher among low-perceived academic achievers than among high-perceived academic achievers (p < 0.01). Depression was more prevalent in poor-perceived academic achievers than in high-perceived academic achievers. Similarly, poor-perceived academic achievers were at greater risk of depression than were high-perceived academic achievers (odds ratio [95 % confidence interval] 3.686 [1.092-12.439], p < 0.05). The PHQ-9 has satisfactory reliability and validity in medical students in South Korea. Depression is related to poor-perceived academic achievement when measured with the PHQ-9. Early screening for depression with the PHQ-9 in medical students and providing prompt management to high scorers may not only be beneficial to students' mental health but also improve their academic performance.
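Internal consistency (Cronbach's α) and test-retest reliability of a summed scale such as the PHQ-9 can be computed directly from an item-response matrix. The sketch below uses simulated 9-item responses, not the study data:

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
severity = rng.normal(0.0, 1.0, (174, 1))                            # latent severity
phq9_t1 = np.clip(np.round(1.0 + severity + rng.normal(0, 0.7, (174, 9))), 0, 3)
phq9_t2 = np.clip(phq9_t1 + rng.integers(-1, 2, (174, 9)), 0, 3)     # hypothetical retest

print("Cronbach's alpha:", round(cronbach_alpha(phq9_t1), 3))
r, p = pearsonr(phq9_t1.sum(axis=1), phq9_t2.sum(axis=1))
print("test-retest r:", round(r, 3))
```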
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
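Deriving a count threshold for an activity-intensity category from criterion MET values is essentially a receiver operating characteristic exercise; one common choice is the cut-point that maximizes Youden's J. The sketch below illustrates the mechanics on simulated counts and METs; the variables and numbers are placeholders, not the calibration data above:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
# Hypothetical calibration data: activity counts per epoch and criterion METs.
counts = rng.gamma(shape=2.0, scale=1500.0, size=300)
mets = 1.5 + counts / 2000.0 + rng.normal(0, 0.8, size=300)
is_moderate = (mets >= 3.0).astype(int)     # criterion label from indirect calorimetry

fpr, tpr, thresholds = roc_curve(is_moderate, counts)
youden = tpr - fpr
best = thresholds[np.argmax(youden)]
print(f"count threshold maximising Youden's J: {best:.0f}")
```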
Development of an Integrated Nozzle for a Symmetric, RBCC Launch Vehicle Configuration
NASA Technical Reports Server (NTRS)
Smith, Timothy D.; Canabal, Francisco, III; Rice, Tharen; Blaha, Bernard
2000-01-01
The development of rocket based combined cycle (RBCC) engines is highly dependent upon integrating several different modes of operation into a single system. One of the key components to develop acceptable performance levels through each mode of operation is the nozzle. It must be highly integrated to serve the expansion processes of both rocket and air-breathing modes without undue weight, drag, or complexity. The NASA GTX configuration requires a fixed geometry, altitude-compensating nozzle configuration. The initial configuration, used mainly to estimate weight and cooling requirements was a 1 So half-angle cone, which cuts a concave surface from a point within the flowpath to the vehicle trailing edge. Results of 3-D CFD calculations on this geometry are presented. To address the critical issues associated with integrated, fixed geometry, multimode nozzle development, the GTX team has initiated a series of tasks to evolve the nozzle design, and validate performance levels. An overview of these tasks is given. The first element is a design activity to develop tools for integration of efficient expansion surfaces With the existing flowpath and vehicle aft-body, and to develop a second-generation nozzle design. A preliminary result using a "streamline-tracing" technique is presented. As the nozzle design evolves, a combination of 3-D CFD analysis and experimental evaluation will be used to validate the design procedure and determine the installed performance for propulsion cycle modeling. The initial experimental effort will consist of cold-flow experiments designed to validate the general trends of the streamline-tracing methodology and anchor the CFD analysis. Experiments will also be conducted to simulate nozzle performance during each mode of operation. As the design matures, hot-fire tests will be conducted to refine performance estimates and anchor more sophisticated reacting-flow analysis.
Tomasi, Ivan; Marconi, Ombretta; Sileoni, Valeria; Perretti, Giuseppe
2017-01-01
Beer wort β-glucans are high-molecular-weight non-starch polysaccharides of that are great interest to the brewing industries. Because glucans can increase the viscosity of the solutions and form gels, hazes, and precipitates, they are often related to poor lautering performance and beer filtration problems. In this work, a simple and suitable method was developed to determine and characterize β-glucans in beer wort using size exclusion chromatography coupled with a triple-detector array, which is composed of a light scatterer, a viscometer, and a refractive-index detector. The method performances are comparable to the commercial reference method as result from the statistical validation and enable one to obtain interesting parameters of β-glucan in beer wort, such as the molecular weight averages, fraction description, hydrodynamic radius, intrinsic viscosity, polydispersity and Mark-Houwink parameters. This characterization can be useful in brewing science to understand filtration problems, which are not always explained through conventional analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploration of Force Myography and surface Electromyography in hand gesture classification.
Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo
2017-03-01
Whereas pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn both on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as those of the high-quality, commercially available sEMG system on both wrist and forearm positions; specifically, by only using 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). By using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
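Cross-validated classification accuracy of the kind reported above can be estimated with a standard pipeline. The sketch below uses a linear SVM on simulated 8-channel FSR features purely to show the mechanics; the classifier choice and the data are assumptions, not the authors' method:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_gestures, reps, n_sensors = 48, 10, 8           # hypothetical 8-channel FMG band
X = rng.normal(size=(n_gestures * reps, n_sensors))
y = np.repeat(np.arange(n_gestures), reps)
X += y[:, None] * 0.05                            # weak class structure for the demo

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (acc.mean(), acc.std()))
```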
NASA Astrophysics Data System (ADS)
Islam, Md Mahbubul; Strachan, Alejandro
A detailed atomistic-level understanding of the ultrafast chemistry of detonation processes of high energy materials is crucial to understand their performance and safety. Recent advances in laser shocks and ultra-fast spectroscopy are yielding the first direct experimental evidence of chemistry at extreme conditions. At the same time, reactive molecular dynamics (MD) in current high-performance computing platforms enable an atomic description of shock-induced chemistry with length and timescales approaching those of experiments. We use MD simulations with the reactive force field ReaxFF to investigate the shock-induced chemical decomposition mechanisms of polyvinyl nitrate (PVN) and nitromethane (NM). The effect of shock pressure on chemical reaction mechanisms and kinetics of both materials is investigated. For direct comparison of our simulation results with experimentally derived IR absorption data, we performed spectral analysis using atomistic velocities at various shock conditions. The combination of reactive MD simulations and ultrafast spectroscopy both enables the validation of ReaxFF at extreme conditions and contributes to the interpretation of the experimental data relating changes in spectral features to atomic processes. Office of Naval Research MURI program.
Towards a new protocol of scoliosis assessments and monitoring in clinical practice: A pilot study.
Lukovic, Tanja; Cukovic, Sasa; Lukovic, Vanja; Devedzic, Goran; Djordjevic, Dusica
2015-01-01
Although intensively investigated, the procedures for assessment and monitoring of scoliosis are still a subject of controversy. The aim of this study was to assess the validity and reliability of a number of physiotherapeutic measurements that could be used for clinical monitoring of scoliosis. Fifteen healthy (symmetric) subjects underwent a set of measurements twice, performed by two experienced and two inexperienced physiotherapists. Intra-observer and inter-observer reliability of the measurements were determined. The following measurements were performed: body height and weight, chest girth in inspirium and expirium, the length of the legs, the spine translation, the lateral pelvic tilt, the equality of the shoulders, position of the scapulas, the equality of stature triangles, the rib hump, the existence of m. iliopsoas contracture, Fröhner index, the size of lumbar lordosis and the angle of trunk rotation. The intraclass correlation coefficient was high (> 0.8) for the majority of measurements when they were performed by experienced physiotherapists, whereas inexperienced physiotherapists performed only the basic, easy measurements precisely. This pilot study on healthy subjects showed that the majority of basic physiotherapeutic measurements are valid and reliable when performed by a specialized physiotherapist, and the protocol can be expected to prove valuable when measurements are performed on subjects with scoliosis.
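Intra- and inter-observer reliability of this kind is typically summarized with intraclass correlation coefficients. A minimal sketch using the pingouin package (assumed available) on simulated two-rater data could look like this; the subjects, raters, and scores are placeholders:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
n_subjects, n_raters = 15, 2
true_angle = rng.normal(10, 3, n_subjects)                 # hypothetical trunk-rotation angles
ratings = np.column_stack([true_angle + rng.normal(0, 0.8, n_subjects)
                           for _ in range(n_raters)])

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_raters),
    "rater":   np.tile(np.arange(n_raters), n_subjects),
    "score":   ratings.flatten(),
})
icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```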
Luo, Zhiqiang; Chen, Xinjing; Wang, Guopeng; Du, Zhibo; Ma, Xiaoyun; Wang, Hao; Yu, Guohua; Liu, Aoxue; Li, Mengwei; Peng, Wei; Liu, Yang
2018-01-01
Trelagliptin succinate is a dipeptidyl peptidase IV (DPP-4) inhibitor which is used as a new long-acting drug for once-weekly treatment of type 2 diabetes mellitus (DM). In the present study, a rapid, sensitive and accurate high-performance liquid chromatography (HPLC) method was developed and validated for separation and determination of trelagliptin succinate and its eight potential process-related impurities. The chromatographic separation was achieved on a Waters Xselect CSH™ C18 (250 mm × 4.6 mm, 5.0 μm) column. The mobile phases consisted of 0.05% trifluoroacetic acid in water and acetonitrile containing 0.05% trifluoroacetic acid. The compounds of interest were monitored at 224 nm and 275 nm. The stability-indicating capability of this method was evaluated by performing stress test studies. Trelagliptin succinate was found to degrade significantly under acid, base, oxidative and thermal stress conditions and was stable only under the photolytic degradation condition. The degradation products were well resolved from the main peak and its impurities. In addition, the major degradation impurities formed under acid, base, oxidative and thermal stress conditions were characterized by ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap tandem mass spectrometry (UHPLC-LTQ-Orbitrap). The method was validated to fulfill International Conference on Harmonisation (ICH) requirements, and the validation included specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision and robustness. The developed method could be applied for routine quality control analysis of trelagliptin succinate tablets, since there is no official monograph. Copyright © 2017 Elsevier B.V. All rights reserved.
Bhinder, Bhavneet; Antczak, Christophe; Ramirez, Christina N.; Shum, David; Liu-Sullivan, Nancy; Radu, Constantin; Frattini, Mark G.
2013-01-01
RNA interference technology is becoming an integral tool for target discovery and validation. With perhaps the exception of only a few studies published using arrayed short hairpin RNA (shRNA) libraries, most of the reports have been either against pooled siRNA or shRNA, or arrayed siRNA libraries. For this purpose, we have developed a workflow and performed an arrayed genome-scale shRNA lethality screen against the TRC1 library in HeLa cells. The resulting targets would be a valuable resource of candidates toward a better understanding of cellular homeostasis. Using a high-stringency hit nomination method encompassing criteria of at least three active hairpins per gene and filtered for potential off-target effects (OTEs), referred to as the Bhinder–Djaballah analysis method, we identified 1,252 lethal and 6 rescuer gene candidates, knockdown of which resulted in severe cell death or enhanced growth, respectively. Cross-referencing individual hairpins with the TRC1 validated clone database, 239 of the 1,252 candidates were deemed independently validated with at least three validated clones. Through our systematic OTE analysis, we have identified 31 microRNAs (miRNAs) in lethal and 2 in rescuer genes, all having a seed heptamer mimic in the corresponding shRNA hairpins, the likely cause of the OTEs observed in our screen, perhaps unraveling a previously unknown plausible essentiality of these miRNAs in cellular viability. Taken together, we report on a methodology for performing large-scale arrayed shRNA screens, a comprehensive analysis method to nominate high-confidence hits, and a performance assessment of the TRC1 library highlighting the intracellular inefficiencies of shRNA processing in general. PMID:23198867
Characterization of the faulted behavior of digital computers and fault tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Miner, Paul S.
1989-01-01
A development status evaluation is presented for efforts conducted at NASA-Langley since 1977, toward the characterization of the latent fault in digital fault-tolerant systems. Attention is given to the practical, high speed, generalized gate-level logic system simulator developed, as well as to the validation methodology used for the simulator, on the basis of faultable software and hardware simulations employing a prototype MIL-STD-1750A processor. After validation, latency tests will be performed.
Space Technology 5 - A Successful Micro-Satellite Constellation Mission
NASA Technical Reports Server (NTRS)
Carlisle, Candace; Webb, Evan H.
2007-01-01
The Space Technology 5 (ST5) constellation of three micro-satellites was launched March 22, 2006. During the three-month flight demonstration phase, the ST5 team validated key technologies that will make future low-cost micro-sat constellations possible, demonstrated operability concepts for future micro-sat science constellation missions, and demonstrated the utility of a micro-satellite constellation to perform research-quality science. The ST5 mission was successfully completed in June 2006, demonstrating high-quality science and technology validation results.
Hillen, Marij A; Postma, Rosa-May; Verdam, Mathilde G E; Smets, Ellen M A
2017-03-01
The original 18-item, four-dimensional Trust in Oncologist Scale assesses cancer patients' trust in their oncologist. The current aim was to develop and validate a short form version of the scale to enable more efficient assessment of cancer patients' trust. Existing validation data of the full-length Trust in Oncologist Scale were used to create a short form of the Trust in Oncologist Scale. The resulting short form was validated in a new sample of cancer patients (n = 92). Socio-demographics, medical characteristics, trust in the oncologist, satisfaction with communication, trust in healthcare, willingness to recommend the oncologist to others and to contact the oncologist in case of questions were assessed. Internal consistency, reliability, convergent and structural validity were tested. The five-item Trust in Oncologist Scale Short Form was created by selecting the statistically best performing item from each dimension of the original scale, to ensure content validity. Mean trust in the oncologist was high in the validation sample (response rate 86%, M = 4.30, SD = 0.98). Exploratory factor analyses supported one-dimensionality of the short form. Internal consistency was high, and temporal stability was moderate. Initial convergent validity was suggested by moderate correlations between trust scores and associated constructs. The Trust in Oncologist Scale Short Form appears to efficiently, reliably and validly measure cancer patients' trust in their oncologist. It may be used in research and as a quality indicator in clinical practice. More thorough validation of the scale is recommended to confirm this initial evidence of its validity.
Overview of a Proposed Flight Validation of Aerocapture System Technology for Planetary Missions
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Hall, Jeffery L.; Oh, David; Munk, Michelle M.
2006-01-01
Aerocapture System Technology for Planetary Missions is being proposed to NASA's New Millennium Program for flight aboard the Space Technology 9 (ST9) flight opportunity. The proposed ST9 aerocapture mission is a system-level flight validation of the aerocapture maneuver as performed by an instrumented, high-fidelity flight vehicle within a true in-space and atmospheric environment. Successful validation of the aerocapture maneuver will be enabled through the flight validation of an advanced guidance, navigation, and control system as developed by Ball Aerospace and two advanced Thermal Protection System (TPS) materials, Silicon Refined Ablative Material-20 (SRAM-20) and SRAM-14, as developed by Applied Research Associates (ARA) Ablatives Laboratory. The ST9 aerocapture flight validation will be sufficient for immediate infusion of these technologies into NASA science missions being proposed for flight to a variety of Solar System destinations possessing a significant planetary atmosphere.
Brunelli, Matteo; Beccari, Serena; Colombari, Romano; Gobbo, Stefano; Giobelli, Luca; Pellegrini, Andrea; Chilosi, Marco; Lunardi, Maria; Martignoni, Guido; Scarpa, Aldo; Eccher, Albino
2014-01-01
Validation of digital whole slide images is crucial to ensure that diagnostic performance is at least equivalent to that of glass slides and light microscopy. The College of American Pathologists Pathology and Laboratory Quality Center recently developed recommendations for internal digital pathology system validation. Following these guidelines, we sought to validate the performance of a digital approach for routine diagnosis, using an iPad and a digital control widescreen-assisted workstation, through a pilot study. From January 2014, 61 histopathological slides were scanned by a ScanScope Digital Slides Scanner (Aperio, Vista, CA). Two independent pathologists performed diagnosis on virtual slides on a widescreen, using two computer workstations (ImageScope viewing software) located at different health institutions (AOUI Verona) connected by a local network, and on a remote image server using an iPad tablet (Aperio, Vista, CA) after installing the Citrix receiver for iPad. Quality indicators related to image characteristics and to the workflow of the e-health cockpit enterprise system were scored based on subjective (high vs poor) perception. The images were re-evaluated two weeks apart. The glass slides comprised 10 liver biopsies (hepatocarcinoma), 10 renal carcinomas, 10 gastric carcinomas, 10 prostate biopsies (adenocarcinoma), 5 excisional skin biopsies (melanoma) and 5 lymph nodes (lymphoma). Six immunostains and 5 special stains were available for intranet and internet remote viewing. Scan times averaged two minutes and 54 seconds per slide (standard deviation 2 minutes 34 seconds). Storage per slide ranged from 256 to 680 megabytes (mean 390). Reliance on the glass slide, image quality (resolution and color fidelity), slide navigation time and simultaneous viewing by users in geographically remote locations all received high performance scores. Side-by-side comparison of diagnoses made on tissue glass slides versus the widescreen was excellent, showing almost perfect concordance (kappa index 0.81). We validated our institutional digital pathology system for routine diagnosis with whole slide images in a cockpit enterprise digital system or on an iPad tablet. Computer widescreens are better than the iPad for diagnosing scanned glass slides; for urgent requests, the iPad may be used. Legal aspects must soon be addressed to permit the clinical use of this technology in a manner that does not compromise patient care.
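The glass-versus-digital concordance above is quantified with Cohen's kappa. The sketch below shows the computation on hypothetical paired diagnoses; the counts are illustrative, not the study's cases:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired diagnoses (glass slide vs. digital widescreen) for 61 cases.
glass   = ["malignant"] * 40 + ["benign"] * 21
digital = ["malignant"] * 38 + ["benign"] * 2 + ["benign"] * 19 + ["malignant"] * 2

kappa = cohen_kappa_score(glass, digital)
print("Cohen's kappa:", round(kappa, 2))
```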
Stevens, Tom Gerardus Antonia; De Ruiter, Cornelis Johannes; Beek, Peter Jan; Savelsbergh, Geert Jozef Peter
2016-01-01
In order to determine whether small-sided game (SSG) locomotor performance can serve as a fitness indicator, we (1) compared 6-a-side (6v6) SSG-intensity of players varying in fitness and skill, (2) examined the relationship between the 6v6-SSG and the Yo-Yo IR2 and (3) assessed the reliability of the 6v6-SSG. Thirty-three professional senior, 30 professional youth, 62 amateur and 16 professional women football players performed 4 × 7 min 6v6-SSGs recorded by a Local Position Measurement system. A substantial subgroup (N = 113) also performed the Yo-Yo IR2. Forty-seven amateur players performed two or three 6v6-SSGs. No differences in 6v6-SSG time-motion variables were found between professional senior and professional youth players. Amateurs showed lower values than professional seniors on almost all time-motion variables (ES = 0.59-1.19). Women displayed lower high-intensity time-motion variables than all other subgroups. Total distance run during 6v6-SSG was only moderately related to Yo-Yo IR2 distance (r = 0.45), but estimated metabolic power, high speed (>14.4 km · h(-1)), high acceleration (>2 m · s(-2)), high power (>20 W · kg(-1)) and very high (>35 W · kg(-1)) power showed higher correlations (r = 0.59-0.70) with Yo-Yo IR2 distance. Intraclass correlation coefficient values were higher for total distance (0.84) than other time-motion variables (0.74‒0.78). Although total distance and metabolic power during 6v6-SSG showed good reproducibility (coefficient of variation (CV) < 5%), CV was higher (8-14%) for all high-intensity time-motion variables. It was therefore concluded that standardised SSG locomotor performance cannot serve as a valid and reliable fitness indicator for individual players.
Andrade, Susan E.; Harrold, Leslie R.; Tjia, Jennifer; Cutrona, Sarah L.; Saczynski, Jane S.; Dodd, Katherine S.; Goldberg, Robert J.; Gurwitz, Jerry H.
2012-01-01
Purpose To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data. Methods PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria. Results A total of 35 articles met the criteria for evaluation. Of these, 26 articles provided data to evaluate the validity of stroke, 7 reported the validity of TIA, 5 reported the validity of intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 studies reported the validity of algorithms to identify the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to evaluate the presence of stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to evaluate TIAs in adult populations were generally found to have PPVs of 70% or greater. Conclusions The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest. PMID:22262598
Performance Evaluation of a Data Validation System
NASA Technical Reports Server (NTRS)
Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.
2005-01-01
Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.
Azagra, R; Zwart, M; Aguyé, A; Martín-Sánchez, J C; Casado, E; Díaz-Herrera, M A; Moriña, D; Cooper, C; Díez-Pérez, A; Dennison, E M
2016-01-01
The aim was to perform an external validation of the FRAX algorithm thresholds for reporting the level of fracture risk in Spanish women (low < 5%; intermediate ≥ 5% and < 7.5%; high ≥ 7.5%) taken from the prospective cohort "FRIDEX". A retrospective study of 1090 women aged ≥ 40 and ≤ 90 years old obtained from the general population (FROCAT cohort) was performed. FRAX was calculated with data registered in 2002. All fractures were validated in 2012. Sensitivity analysis was performed. When analyzing the cohort (884) excluding current or past anti-osteoporotic medication (AOM), using our nominated thresholds, among the 621 (70.2%) women at low risk of fracture, 5.2% [CI95%: 3.4-7.6] sustained a fragility fracture; among the 99 at intermediate risk, 12.1% [6.4-20.2]; and among the 164 defined as high risk, 15.9% [10.6-24.2]. Sensitivity analysis against the FRIDEX risk-stratification model of FRAX Spain showed no significant difference. When the 206 women with AOM were included, the sensitivity analysis showed no difference in the intermediate- and high-risk groups and only minimal differences in the low-risk group. Our findings support and validate the use of FRIDEX thresholds of FRAX when discussing the risk of fracture and the initiation of therapy with patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
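Reporting the observed fracture proportion with a 95% confidence interval per FRAX risk category is a simple binomial-proportion calculation. The sketch below uses event counts chosen to mirror the percentages quoted above, with Wilson intervals as an assumed method (the interval method used in the paper is not stated here):

```python
from statsmodels.stats.proportion import proportion_confint

# Illustrative event counts per FRAX risk category (cohort without AOM).
groups = {"low (<5%)": (32, 621), "intermediate (5-7.5%)": (12, 99), "high (>=7.5%)": (26, 164)}

for name, (events, n) in groups.items():
    lo, hi = proportion_confint(events, n, alpha=0.05, method="wilson")
    print(f"{name}: {100*events/n:.1f}% fractured [95% CI {100*lo:.1f}-{100*hi:.1f}]")
```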
Gupta, Shweta; Kesarla, Rajesh; Chotai, Narendra; Omri, Abdelwahab
2017-01-01
Efavirenz is an anti-viral agent of the non-nucleoside reverse transcriptase inhibitor category used as part of highly active antiretroviral therapy for the treatment of human immunodeficiency virus type-1 infections. A simple, sensitive and rapid reversed-phase high performance liquid chromatographic gradient method was developed and validated for the determination of efavirenz in plasma. The method was developed with high performance liquid chromatography using a Waters X-Terra Shield RP18 (50 x 4.6 mm, 3.5 μm) column and a mobile phase consisting of phosphate buffer (pH 3.5) and acetonitrile. The eluate was monitored with a UV-Visible detector at 260 nm at a flow rate of 1.5 mL/min. Tenofovir disoproxil fumarate was used as internal standard. The method was validated for linearity, precision, accuracy, specificity and robustness, and the data obtained were statistically analyzed. The calibration curve was linear over the concentration range of 1-300 μg/mL. The retention times of efavirenz and tenofovir disoproxil fumarate (internal standard) were 5.941 min and 4.356 min respectively. The regression coefficient value was found to be 0.999. The limit of detection and the limit of quantification obtained were 0.03 and 0.1 μg/mL respectively. The developed HPLC method can be useful for the quantitative determination of pharmacokinetic parameters of efavirenz in plasma.
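The linearity, LOD, and LOQ figures above follow the usual calibration-curve treatment (ICH-style 3.3·σ/S and 10·σ/S estimates). A minimal sketch on invented calibration points, not the validation data, is:

```python
import numpy as np

# Hypothetical calibration: concentration (ug/mL) vs. peak-area ratio (analyte / IS).
conc = np.array([1, 5, 10, 50, 100, 200, 300], dtype=float)
ratio = 0.012 * conc + np.random.default_rng(5).normal(0, 0.004, conc.size)

slope, intercept = np.polyfit(conc, ratio, 1)
residuals = ratio - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                 # residual standard deviation

r = np.corrcoef(conc, ratio)[0, 1]
print(f"r^2 = {r**2:.4f}, LOD = {3.3*sigma/slope:.2f} ug/mL, LOQ = {10*sigma/slope:.2f} ug/mL")
```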
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.
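Validating a segmentation against a reference mask usually comes down to overlap counts. The sketch below computes sensitivity, specificity, and the Dice coefficient from hypothetical binary tumour masks; the masks and thresholds are placeholders, not the study's images or its UOFC algorithm:

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    """Sensitivity, specificity and Dice for binary tumour masks."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp), 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(6)
truth = rng.random((128, 128)) > 0.9          # hypothetical ground-truth tumour mask
pred = truth.copy()
pred ^= rng.random((128, 128)) > 0.98         # flip a few voxels to mimic segmentation error
print("sensitivity %.2f, specificity %.2f, Dice %.2f" % segmentation_scores(pred, truth))
```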
Müller, Nico; Baumeister, Sarah; Dziobek, Isabel; Banaschewski, Tobias; Poustka, Luise
2016-09-01
Impaired social cognition is one of the core characteristics of autism spectrum disorders (ASD). Appropriate measures of social cognition for high-functioning adolescents with ASD are, however, lacking. The Movie for the Assessment of Social Cognition (MASC) uses dynamic social stimuli, ensuring ecological validity, and has proven to be a sensitive measure in adulthood. In the current study, 33 adolescents with ASD and 23 controls were administered the MASC, while concurrent eye tracking was used to relate gaze behavior to performance levels. The ASD group exhibited reduced MASC scores, with social cognition performance being explained by shorter fixation duration on eyes and decreased pupil dilation. These potential diagnostic markers are discussed as indicators of different processing of social information in ASD.
Assessing the performance of dynamical trajectory estimates
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2014-06-01
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.
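A minimal sketch of a linear error-feedback (nudging) scheme of the kind mentioned above, applied to the Lorenz-63 system, makes the optimism issue concrete: the error measured against the assimilated observations need not equal the error against the true trajectory. The gain, noise level, and integration scheme below are arbitrary illustration choices, not the paper's setup:

```python
import numpy as np

def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

dt, n, gain = 0.01, 5000, 10.0
rng = np.random.default_rng(7)
truth = np.empty((n, 3)); est = np.empty((n, 3))
truth[0] = [1.0, 1.0, 1.0]; est[0] = [5.0, -5.0, 20.0]
obs_noise = rng.normal(0, 1.0, n)

for k in range(n - 1):
    truth[k + 1] = truth[k] + dt * lorenz(truth[k])          # "true" trajectory (Euler step)
    obs = truth[k, 0] + obs_noise[k]                          # noisy observation of x only
    feedback = np.array([gain * (obs - est[k, 0]), 0.0, 0.0]) # linear error feedback on x
    est[k + 1] = est[k] + dt * (lorenz(est[k]) + feedback)

apparent = np.mean((est[1000:, 0] - (truth[1000:, 0] + obs_noise[1000:])) ** 2)
actual = np.mean((est[1000:, 0] - truth[1000:, 0]) ** 2)
print(f"error vs. observations: {apparent:.3f}, error vs. truth: {actual:.3f}")
```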
Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro
2009-06-01
A validation study of an in vitro skin irritation testing method using a reconstructed human skin model has been conducted by the European Centre for the Validation of Alternative Methods (ECVAM), and a protocol using EpiSkin (SkinEthic, France) has been approved. The structural and performance criteria of skin models for testing are defined in the ECVAM Performance Standards announced along with the approval. We have performed several evaluations of the new reconstructed human epidermal model LabCyte EPI-MODEL, and confirmed that it is applicable to skin irritation testing as defined in the ECVAM Performance Standards. We selected 19 materials (nine irritants and ten non-irritants) available in Japan as test chemicals among the 20 reference chemicals described in the ECVAM Performance Standard. A test chemical was applied to the surface of the LabCyte EPI-MODEL for 15 min, after which it was completely removed and the model then post-incubated for 42 hr. Cell viability was measured by MTT assay and skin irritancy of the test chemical evaluated. In addition, interleukin-1 alpha (IL-1alpha) concentration in the culture supernatant after post-incubation was measured to provide a complementary evaluation of skin irritation. Evaluation of the 19 test chemicals resulted in 79% accuracy, 78% sensitivity and 80% specificity, confirming that the in vitro skin irritancy of the LabCyte EPI-MODEL correlates highly with in vivo skin irritation. These results suggest that LabCyte EPI-MODEL is applicable to the skin irritation testing protocol set out in the ECVAM Performance Standards.
Hildebrand, Ainslie M; Iansavichus, Arthur V; Haynes, R Brian; Wilczynski, Nancy L; Mehta, Ravindra L; Parikh, Chirag R; Garg, Amit X
2014-04-01
We frequently fail to identify articles relevant to the subject of acute kidney injury (AKI) when searching the large bibliographic databases such as PubMed, Ovid Medline or Embase. To address this issue, we used computer automation to create information search filters to better identify articles relevant to AKI in these databases. We first manually reviewed a sample of 22 992 full-text articles and used prespecified criteria to determine whether each article contained AKI content or not. In the development phase (two-thirds of the sample), we developed and tested the performance of >1.3-million unique filters. Filters with high sensitivity and high specificity for the identification of AKI articles were then retested in the validation phase (remaining third of the sample). We succeeded in developing and validating high-performance AKI search filters for each bibliographic database with sensitivities and specificities in excess of 90%. Filters optimized for sensitivity reached at least 97.2% sensitivity, and filters optimized for specificity reached at least 99.5% specificity. The filters were complex; for example one PubMed filter included >140 terms used in combination, including 'acute kidney injury', 'tubular necrosis', 'azotemia' and 'ischemic injury'. In proof-of-concept searches, physicians found more articles relevant to topics in AKI with the use of the filters. PubMed, Ovid Medline and Embase can be filtered for articles relevant to AKI in a reliable manner. These high-performance information filters are now available online and can be used to better identify AKI content in large bibliographic databases.
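Evaluating a term-based filter against a hand-labelled reference set reduces to computing sensitivity and specificity over the retrieval decisions. The sketch below uses a toy corpus and a few of the terms quoted above; it illustrates the evaluation only and is not the published filters:

```python
import re

# Hypothetical labelled corpus: (title/abstract text, is_AKI_relevant).
corpus = [
    ("Acute kidney injury after cardiac surgery", True),
    ("Biomarkers of acute tubular necrosis in sepsis", True),
    ("Azotemia and ischemic injury in the transplanted kidney", True),
    ("Long-term dialysis outcomes in chronic kidney disease", False),
    ("Hypertension management in primary care", False),
]

filter_terms = ["acute kidney injury", "tubular necrosis", "azotemia", "ischemic injury"]
pattern = re.compile("|".join(map(re.escape, filter_terms)), re.IGNORECASE)

hits = [(bool(pattern.search(text)), relevant) for text, relevant in corpus]
tp = sum(h and r for h, r in hits); fn = sum((not h) and r for h, r in hits)
tn = sum((not h) and (not r) for h, r in hits); fp = sum(h and (not r) for h, r in hits)
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```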
De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A
2002-06-01
Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequency of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.
Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.
Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T
2018-01-01
The article aims to suggest clinically useful tools in neuropsychological assessment for efficient use of embedded measures of performance validity. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations for use of 1) Cutoffs for embedded performance validity tests including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) Selecting the number of performance validity measures to administer in an assessment; and 3) Hypothetical clinical decision-making models for use of performance validity testing in a neuropsychological assessment, collectively considering behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, compliance with task demands. These data-driven clinical suggestions provide a resource for clinicians, are intended to instigate conversation within the field toward more uniform, testable decisions, and should help guide future research in this area.
Gutiérrez Sánchez, Daniel; Cuesta-Vargas, Antonio I
2018-04-01
Many measurements have been developed to assess the quality of death (QoD). Among these, the Quality of Dying and Death Questionnaire (QODD) is the most widely studied and best validated. Informal carers and health professionals who care for the patient during their last days of life can complete this assessment tool. The aim of the study is to carry out a cross-cultural adaptation and a psychometric analysis of the QODD for the Spanish population. The translation was performed using a double forward and backward method. An expert panel evaluated the content validity. The questionnaire was tested in a sample of 72 Spanish-speaking adult carers of deceased cancer patients. A psychometric analysis was performed to evaluate internal consistency, divergent criterion-related validity with the Mini-Suffering State Examination (MSSE) and concurrent criterion-related validity with the Palliative Outcome Scale (POS). Some items were deleted and modified to create the Spanish version of the QODD (QODD-ESP-26). The instrument was readable and acceptable. The content validity index was 0.96, suggesting that all items are relevant for the measure of the QoD. This questionnaire showed high internal consistency (Cronbach's α coefficient = 0.88). Divergent validity with MSSE (r = -0.64) and convergent validity with POS (r = -0.61) were also demonstrated. The QODD-ESP-26 is a valid and reliable instrument for the assessment of the QoD of deceased cancer patients that can be used in a clinical and research setting. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bigbee, William L.; Gopalakrishnan, Vanathi; Weissfeld, Joel L.; Wilson, David O.; Dacic, Sanja; Lokshin, Anna E.; Siegfried, Jill M.
2012-01-01
Introduction Clinical decision-making in the setting of CT screening could benefit from accessible biomarkers that help predict the level of lung cancer risk in high-risk individuals with indeterminate pulmonary nodules. Methods To identify candidate serum biomarkers, we measured 70 cancer-related proteins by Luminex xMAP® multiplexed immunoassays in a training set of sera from 56 patients with biopsy-proven primary non small cell lung cancer and 56 age-, sex- and smoking-matched CT-screened controls. Results We identified a panel of 10 serum biomarkers – prolactin, transthyretin, thrombospondin-1, E-selectin, C-C motif chemokine 5, macrophage migration inhibitory factor, plasminogen activator inhibitor, receptor tyrosine-protein kinase, Cyfra 21.1, and serum amyloid A – that distinguished lung cancer from controls with an estimated balanced accuracy (average of sensitivity and specificity) of 76.0%±3.8% from 20-fold internal cross-validation. We then iteratively evaluated this model in independent test and verification case/control studies confirming the initial classification performance of the panel. The classification performance of the 10-biomarker panel was also analytically validated using ELISAs in a second independent case/control population further validating the robustness of the panel. Conclusions The performance of this 10-biomarker panel based model was 77.1% sensitivity/76.2% specificity in cross-validation in the expanded training set, 73.3% sensitivity/93.3% specificity (balanced accuracy 83.3%) in the blinded verification set with the best discriminative performance in Stage I/II cases: 85% sensitivity (balanced accuracy 89.2%). Importantly, the rate of misclassification of CT-screened controls was not different in most control subgroups with or without airflow obstruction or emphysema or pulmonary nodules. These biomarkers have potential to aid in the early detection of lung cancer and more accurate interpretation of indeterminate pulmonary nodules detected by screening CT. PMID:22425918
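The cross-validated balanced accuracy reported above can be estimated with a standard classifier pipeline. The sketch below uses logistic regression on a simulated 10-marker case/control matrix purely to show the mechanics; the model choice and the data are assumptions, not the authors' classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_cases, n_controls, n_markers = 56, 56, 10      # hypothetical 10-biomarker panel
X = np.vstack([rng.normal(0.4, 1.0, (n_cases, n_markers)),
               rng.normal(0.0, 1.0, (n_controls, n_markers))])
y = np.array([1] * n_cases + [0] * n_controls)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
print("balanced accuracy: %.1f%% +/- %.1f%%" % (100 * scores.mean(), 100 * scores.std()))
```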
A Database for Comparative Electrochemical Performance of Commercial 18650-Format Lithium-Ion Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barkholtz, Heather M.; Fresquez, Armando; Chalamala, Babu R.
Lithium-ion batteries are a central technology to our daily lives with widespread use in mobile devices and electric vehicles. These batteries are also beginning to be widely used in electric grid infrastructure support applications which have stringent safety and reliability requirements. Typically, electrochemical performance data is not available for modelers to validate their simulations, mechanisms, and algorithms for lithium-ion battery performance and lifetime. In this paper, we report on the electrochemical performance of commercial 18650 cells at a variety of temperatures and discharge currents. We found that LiFePO4 is temperature tolerant for discharge currents at or below 10 A, whereas LiCoO2, LiNixCoyAl1-x-yO2, and LiNi0.80Mn0.15Co0.05O2 exhibited optimal electrochemical performance when the temperature is maintained at 15°C. LiNixCoyAl1-x-yO2 showed signs of lithium plating at lower temperatures, evidenced by irreversible capacity loss and emergence of a high-voltage differential capacity peak. Furthermore, all cells need to be monitored for self-heating, as environment temperature and high discharge currents may elicit an unintended abuse condition. Overall, this study shows that lithium-ion batteries are highly application-specific and electrochemical behavior must be well understood for safe and reliable operation. Additionally, data collected in this study is available for anyone to download for further analysis and model validation.
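The high-voltage differential capacity peak mentioned above comes from a dQ/dV analysis of the charge or discharge curve. A minimal sketch on an invented voltage-capacity record (not data from the reported database) is:

```python
import numpy as np

# Hypothetical constant-current discharge record: voltage (V) and cumulative capacity (Ah).
voltage = np.linspace(4.2, 3.0, 500)
capacity = 2.5 * (1 - (voltage - 3.0) / 1.2) ** 1.3     # made-up monotone capacity curve

dqdv = np.gradient(capacity, voltage)                   # differential capacity, Ah/V
peak_v = voltage[np.argmax(np.abs(dqdv))]
print("largest |dQ/dV| near %.2f V" % peak_v)
```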
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng, S.
2009-01-01
For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high-performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.
Training and Assessment of Hysteroscopic Skills: A Systematic Review.
Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming
2016-01-01
The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Reliability and concurrent validity of the Infant Motor Profile.
Heineman, Kirsten R; Middelburg, Karin J; Bos, Arend F; Eidhof, Lieke; La Bastide-Van Gemert, Sacha; Van Den Heuvel, Edwin R; Hadders-Algra, Mijna
2013-06-01
The Infant Motor Profile (IMP) is a qualitative assessment of motor behaviour in infancy. It consists of five domains: movement variation, variability, fluency, symmetry, and performance. The aim of this study was to assess interobserver reliability and concurrent validity of the IMP with the Alberta Infant Motor Scale (AIMS) and an age-specific neurological examination. Fifty-nine preterm infants (25 females, 34 males; median gestational age 29.7 wks, median birthweight 1285 g) and 146 term infants (74 females, 72 males; median gestational age 40.1 wks, birthweight 3500 g) were included. Assessments were performed at corrected ages of 4, 6, 10, 12, and 18 months and consisted of the IMP, AIMS, and an age-specific neurological examination. Interobserver reliability was investigated on a sample of 25 video recordings. Non-parametric statistics were used to analyse the data. Interobserver reliability was high (intraclass correlation coefficient 0.95). At all ages, AIMS scores correlated weakly to fairly with total IMP scores (Spearman's ρ 0.36-0.55), but moderately to strongly with scores on the performance domain of the IMP (Spearman's ρ 0.47-0.84). A clear relation was found between total IMP score and outcome of the neurological examination (Kruskal-Wallis p<0.001 at all ages). Interobserver reliability of the IMP is good. Concurrent validity with the AIMS is best for the IMP performance domain. Concurrent validity with age-specific neurological examination is very good. © The Authors. Developmental Medicine & Child Neurology © 2013 Mac Keith Press.
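The concurrent-validity statistics above (Spearman correlations and a Kruskal-Wallis test across neurological outcome groups) are straightforward to reproduce on any paired score set. The sketch below runs both on simulated IMP, AIMS, and outcome data; all values are placeholders:

```python
import numpy as np
from scipy.stats import spearmanr, kruskal

rng = np.random.default_rng(9)
imp_total = rng.normal(80, 8, 90)                          # hypothetical total IMP scores
aims = imp_total * 0.4 + rng.normal(0, 6, 90)              # AIMS scores, weakly related
neuro = rng.choice(["normal", "mildly abnormal", "abnormal"], size=90, p=[0.7, 0.2, 0.1])

rho, p = spearmanr(imp_total, aims)
print("Spearman rho IMP vs AIMS: %.2f (p=%.3f)" % (rho, p))

groups = [imp_total[neuro == g] for g in np.unique(neuro)]
h, p_kw = kruskal(*groups)
print("Kruskal-Wallis across neurological outcome: H=%.2f, p=%.3f" % (h, p_kw))
```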
Measuring competence in endoscopic sinus surgery.
Syme-Grant, J; White, P S; McAleer, J P G
2008-02-01
Competence based education is currently being introduced into higher surgical training in the UK. Valid and reliable performance assessment tools are essential to ensure competencies are achieved. No such tools have yet been reported in the UK literature. We sought to develop and pilot test an Endoscopic Sinus Surgery Competence Assessment Tool (ESSCAT). The ESSCAT was designed for in-theatre assessment of higher surgical trainees in the UK. The ESSCAT rating matrix was developed through task analysis of ESS procedures. All otolaryngology consultants and specialist registrars in Scotland were given the opportunity to contribute to its refinement. Two cycles of in-theatre testing were used to ensure utility and gather quantitative data on validity and reliability. Videos of trainees performing surgery were used in establishing inter-rater reliability. National consultation, the consensus derived minimum standard of performance, Cronbach's alpha = 0.89 and demonstration of trainee learning (p = 0.027) during the in vivo application of the ESSCAT suggest a high level of validity. Inter-rater reliability was moderate for competence decisions (Cohen's Kappa = 0.5) and good for total scores (Intra-Class Correlation Co-efficient = 0.63). Intra-rater reliability was good for both competence decisions (Kappa = 0.67) and total scores (Kendall's Tau-b = 0.73). The ESSCAT generates a valid and reliable assessment of trainees' in-theatre performance of endoscopic sinus surgery. In conjunction with ongoing evaluation of the instrument we recommend the use of the ESSCAT in higher specialist training in otolaryngology in the UK.
Gauld, Ian C.; Giaquinto, J. M.; Delashmitt, J. S.; ...
2016-01-01
Destructive radiochemical assay measurements of spent nuclear fuel rod segments from an assembly irradiated in the Three Mile Island unit 1 (TMI-1) pressurized water reactor have been performed at Oak Ridge National Laboratory (ORNL). Assay data are reported for five samples from two fuel rods of the same assembly. The TMI-1 assembly was a 15 × 15 design with an initial enrichment of 4.013 wt% 235U, and the measured samples achieved burnups between 45.5 and 54.5 gigawatt days per metric ton of initial uranium (GWd/t). Measurements were performed mainly using inductively coupled plasma mass spectrometry after elemental separation via high-performance liquid chromatography. High precision measurements were achieved using isotope dilution techniques for many of the lanthanides, uranium, and plutonium isotopes. Measurements are reported for more than 50 different isotopes and 16 elements. One of the two TMI-1 fuel rods measured in this work had been measured previously by Argonne National Laboratory (ANL), and these data have been widely used to support code and nuclear data validation. The recent measurements at ORNL therefore provided an important opportunity to independently cross-check results against the previous measurements performed at ANL. The measured nuclide concentrations are used to validate burnup calculations using the SCALE nuclear systems modeling and simulation code suite. These results show that the new measurements provide reliable benchmark data for computer code validation.
Borovcová, Lucie; Pauk, Volodymyr; Lemr, Karel
2018-05-01
New psychoactive substances represent a serious social and health problem, as tens of new compounds are detected in Europe annually. They often show structural proximity or even isomerism, which complicates their analysis. Two methods based on ultra high performance supercritical fluid chromatography and ultra high performance liquid chromatography with mass spectrometric detection were validated and compared. A simple dilute-filter-and-shoot protocol utilizing propan-2-ol or methanol for supercritical fluid or liquid chromatography, respectively, was proposed to detect and quantify 15 cathinones and phenethylamines in human urine. Both methods offered fast separation (<3 min) and short total analysis time. Precision was well below 15%, with a few exceptions in liquid chromatography. Limits of detection in urine ranged from 0.01 to 2.3 ng/mL, except for cathinone (5 ng/mL) in supercritical fluid chromatography. Nevertheless, this technique distinguished all analytes, including four pairs of isomers, while liquid chromatography was unable to resolve fluoromethcathinone regioisomers. Concerning matrix effects and recoveries, supercritical fluid chromatography produced more uniform results for different compounds and at different concentration levels. This work demonstrates the performance and reliability of supercritical fluid chromatography and corroborates its applicability as an alternative tool for the analysis of new psychoactive substances in biological matrices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
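Detection limits of the kind quoted above are commonly estimated from a calibration curve as roughly 3.3 times the residual standard deviation divided by the slope (and 10 times for the quantification limit). The sketch below shows that calculation on invented concentrations and peak areas; it is not the validation procedure used in the study.

```python
# Minimal calibration-based LOD/LOQ sketch on fabricated data.
import numpy as np
from scipy.stats import linregress

conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])             # ng/mL, spiked urine (assumed)
area = np.array([12.0, 24.5, 119.0, 241.0, 1205.0, 2410.0])   # detector response (assumed)

fit = linregress(conc, area)
residuals = area - (fit.intercept + fit.slope * conc)
residual_sd = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))

lod = 3.3 * residual_sd / fit.slope
loq = 10.0 * residual_sd / fit.slope
print(f"slope={fit.slope:.1f}, R^2={fit.rvalue**2:.4f}, LOD={lod:.3f} ng/mL, LOQ={loq:.3f} ng/mL")
```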
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
Muhamad, Zailani; Ramli, Ayiesah; Amat, Salleh
2015-05-01
The aim of this study was to determine the content validity, internal consistency, test-retest reliability and inter-rater reliability of the Clinical Competency Evaluation Instrument (CCEVI) in assessing the clinical performance of physiotherapy students. This study was carried out between June and September 2013 at University Kebangsaan Malaysia (UKM), Kuala Lumpur, Malaysia. A panel of 10 experts were identified to establish content validity by evaluating and rating each of the items used in the CCEVI with regards to their relevance in measuring students' clinical competency. A total of 50 UKM undergraduate physiotherapy students were assessed throughout their clinical placement to determine the construct validity of these items. The instrument's reliability was determined through a cross-sectional study involving a clinical performance assessment of 14 final-year undergraduate physiotherapy students. The content validity index of the entire CCEVI was 0.91, while the proportion of agreement on the content validity indices ranged from 0.83-1.00. The CCEVI construct validity was established with factor loading of ≥0.6, while internal consistency (Cronbach's alpha) overall was 0.97. Test-retest reliability of the CCEVI was confirmed with a Pearson's correlation range of 0.91-0.97 and an intraclass coefficient correlation range of 0.95-0.98. Inter-rater reliability of the CCEVI domains ranged from 0.59 to 0.97 on initial and subsequent assessments. This pilot study confirmed the content validity of the CCEVI. It showed high internal consistency, thereby providing evidence that the CCEVI has moderate to excellent inter-rater reliability. However, additional refinement in the wording of the CCEVI items, particularly in the domains of safety and documentation, is recommended to further improve the validity and reliability of the instrument.
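For readers unfamiliar with how expert-panel content validity indices are computed, the following minimal Python sketch derives item-level and scale-averaged indices from a fabricated matrix of relevance ratings; the 4-point rating convention and all numbers are assumptions for illustration, not study data.

```python
# Hedged sketch: item-level and scale-level content validity indices.
import numpy as np

ratings = np.array([            # rows = items, columns = 10 hypothetical experts
    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    [3, 4, 4, 4, 4, 3, 4, 4, 4, 2],
    [4, 3, 4, 4, 4, 4, 3, 4, 4, 4],
])

i_cvi = (ratings >= 3).mean(axis=1)   # proportion of experts rating the item 3 or 4
s_cvi_ave = i_cvi.mean()              # scale-level CVI, averaging method
print("I-CVI per item:", i_cvi.round(2))
print("S-CVI/Ave:", round(s_cvi_ave, 2))
```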
Advanced UVOIR Mirror Technology Development for Very Large Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
The objective of this work is to define and initiate a long-term program to mature six inter-linked critical technologies for future UVOIR space telescope mirrors to TRL6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. (1) Large-Aperture, Low Areal Density, High Stiffness Mirrors: 4 to 8 m monolithic & 8 to 16 m segmented primary mirrors require larger, thicker, stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror is critical for producing a high-quality point spread function (PSF) for high-contrast imaging. (4) Segment Edges: Edges impact the PSF for high-contrast imaging applications, contribute to stray light noise, and affect the total collecting aperture. (5) Segment-to-Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is determined by mechanical and thermal stability; future systems require validated performance models. We are pursuing multiple design paths to give the science community the option to enable either a future monolithic or segmented space telescope.
Rekleiti, Maria; Souliotis, Kyriakos; Sarafis, Pavlos; Kyriazis, Ioannis; Tsironi, Maria
2018-06-01
The present study focuses on the validity and reliability of the Greek edition of the DQOL-BCI. The DQOL-BCI includes 15 items rated on a 5-point Likert-type scale, plus two general items. The translation process was conducted in conformity with the guidelines of the EuroQol group. A non-random sample of 65 patients diagnosed with type I and type II diabetes was selected. The questionnaire used to collect the data was the translated version of the DQOL-BCI, supplemented with the demographic characteristics of the interviewees. The content validity of the DQOL-BCI was re-examined by a panel of five experts for qualitative and quantitative performance. The questionnaire was completed via personal interview. The final sample consisted of 58 people (35 men and 23 women, 59.9 ± 10.9 years). The translation of the questionnaire was found appropriate to the peculiarities of the Greek language and culture. The largest deviation of values was observed for QOL1 (1.71) compared with QOL6 (2.98), and the difference between the standard deviations was close to 0.6. The statistical results showed satisfactory content validity and high construct validity, while the high Cronbach's alpha value (0.95) indicates high reliability and internal consistency. The Greek version of the DQOL-BCI has acceptable psychometric properties and appears to demonstrate high internal reliability and satisfactory construct validity, which allows its use as an important tool in evaluating the quality of life of diabetic patients in relation to their health. Copyright © 2018. Published by Elsevier B.V.
Geraets, Daan; Cuzick, Jack; Cadman, Louise; Moore, Catherine; Vanden Broeck, Davy; Padalko, Elisaveta; Quint, Wim; Arbyn, Marc
2016-01-01
The Validation of Human Papillomavirus (HPV) Genotyping Tests (VALGENT) studies offer an opportunity to clinically validate HPV assays for use in primary screening for cervical cancer and also provide a framework for the comparison of analytical and type-specific performance. Through VALGENT, we assessed the performance of the cartridge-based Xpert HPV assay (Xpert HPV), which detects 14 high-risk (HR) types and resolves HPV16 and HPV18/45. Samples from women attending the United Kingdom cervical screening program enriched with cytologically abnormal samples were collated. All had been previously tested by a clinically validated standard comparator test (SCT), the GP5+/6+ enzyme immunoassay (EIA). The clinical sensitivity and specificity of the Xpert HPV for the detection of cervical intraepithelial neoplasia grade 2 or higher (CIN2+) and CIN3+ relative to those of the SCT were assessed as were the inter- and intralaboratory reproducibilities according to international criteria for test validation. Type concordance for HPV16 and HPV18/45 between the Xpert HPV and the SCT was also analyzed. The Xpert HPV detected 94% of CIN2+ and 98% of CIN3+ lesions among all screened women and 90% of CIN2+ and 96% of CIN3+ lesions in women 30 years and older. The specificity for CIN1 or less (≤CIN1) was 83% (95% confidence interval [CI], 80 to 85%) in all women and 88% (95% CI, 86 to 91%) in women 30 years and older. Inter- and intralaboratory agreements for the Xpert HPV were 98% and 97%, respectively. The kappa agreements for HPV16 and HPV18/45 between the clinically validated reference test (GP5+/6+ LMNX) and the Xpert HPV were 0.92 and 0.91, respectively. The clinical performance and reproducibility of the Xpert HPV are comparable to those of well-established HPV assays and fulfill the criteria for use in primary cervical cancer screening. PMID:27385707
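Accuracy measures of the kind reported above (sensitivity and specificity with confidence intervals, kappa for genotype agreement) can be computed as in the sketch below. The counts are assumed denominators chosen only to roughly match the quoted percentages, and a simple Wald interval is used for brevity rather than the methods of the study.

```python
# Rough sketch of clinical accuracy measures on hypothetical counts.
from math import sqrt
from sklearn.metrics import cohen_kappa_score

def proportion_ci(k, n, z=1.96):
    p = k / n
    half = z * sqrt(p * (1 - p) / n)      # simple Wald interval, illustration only
    return p, max(0.0, p - half), min(1.0, p + half)

sens, lo, hi = proportion_ci(188, 200)    # e.g. 188/200 CIN2+ detected (assumed counts)
print(f"sensitivity = {sens:.2%} (95% CI {lo:.2%} to {hi:.2%})")

spec, lo, hi = proportion_ci(830, 1000)   # <=CIN1 samples testing negative (assumed counts)
print(f"specificity = {spec:.2%} (95% CI {lo:.2%} to {hi:.2%})")

test_calls = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # HPV16 calls by the index test (toy data)
ref_calls  = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]   # HPV16 calls by the reference assay (toy data)
print("kappa:", round(cohen_kappa_score(test_calls, ref_calls), 2))
```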
Identification and validation of loss of function variants in clinical contexts.
Lescai, Francesco; Marasco, Elena; Bacchelli, Chiara; Stanier, Philip; Mantovani, Vilma; Beales, Philip
2014-01-01
The choice of an appropriate variant calling pipeline for exome sequencing data is becoming increasingly more important in translational medicine projects and clinical contexts. Within GOSgene, which facilitates genetic analysis as part of a joint effort of the University College London and the Great Ormond Street Hospital, we aimed to optimize a variant calling pipeline suitable for our clinical context. We implemented the GATK/Queue framework and evaluated the performance of its two callers: the classical UnifiedGenotyper and the new variant discovery tool HaplotypeCaller. We performed an experimental validation of the loss-of-function (LoF) variants called by the two methods using Sequenom technology. UnifiedGenotyper showed a total validation rate of 97.6% for LoF single-nucleotide polymorphisms (SNPs) and 92.0% for insertions or deletions (INDELs), whereas HaplotypeCaller was 91.7% for SNPs and 55.9% for INDELs. We confirm that GATK/Queue is a reliable pipeline in translational medicine and clinical context. We conclude that in our working environment, UnifiedGenotyper is the caller of choice, being an accurate method, with a high validation rate of error-prone calls like LoF variants. We finally highlight the importance of experimental validation, especially for INDELs, as part of a standard pipeline in clinical environments.
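One way to formalise the comparison of validation rates between the two callers is a two-proportion test, sketched below. Only the percentage rates come from the abstract; the denominators are invented, so this is purely an illustration of the test, not a reanalysis of the study.

```python
# Quick sketch: comparing two validation rates with a two-proportion z-test.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

validated = np.array([92, 56])     # assumed: confirmed LoF INDEL calls per caller
assayed   = np.array([100, 100])   # assumed: INDEL calls taken to orthogonal validation

z, p = proportions_ztest(count=validated, nobs=assayed)
print(f"two-proportion z = {z:.2f}, p = {p:.3g}")
```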
Eriksen, Anne Haahr Mellergaard; Andersen, Rikke Fredslund; Pallisgaard, Niels; Sørensen, Flemming Brandt; Jakobsen, Anders; Hansen, Torben Frøstrup
2016-01-01
MicroRNAs (miRNAs) play important roles in regulating biological processes at the post-transcriptional level. Deregulation of miRNAs has been observed in cancer, and miRNAs are being investigated as potential biomarkers regarding diagnosis, prognosis and prediction in cancer management. Real-time quantitative polymerase chain reaction (RT-qPCR) is commonly used, when measuring miRNA expression. Appropriate normalisation of RT-qPCR data is important to ensure reliable results. The aim of the present study was to identify stably expressed miRNAs applicable as normaliser candidates in future studies of miRNA expression in rectal cancer. We performed high-throughput miRNA profiling (OpenArray®) on ten pairs of laser micro-dissected rectal cancer tissue and adjacent stroma. A global mean expression normalisation strategy was applied to identify the most stably expressed miRNAs for subsequent validation. In the first validation experiment, a panel of miRNAs were analysed on 25 pairs of micro dissected rectal cancer tissue and adjacent stroma. Subsequently, the same miRNAs were analysed in 28 pairs of rectal cancer tissue and normal rectal mucosa. From the miRNA profiling experiment, miR-645, miR-193a-5p, miR-27a and let-7g were identified as stably expressed, both in malignant and stromal tissue. In addition, NormFinder confirmed high expression stability for the four miRNAs. In the RT-qPCR based validation experiments, no significant difference between tumour and stroma/normal rectal mucosa was detected for the mean of the normaliser candidates miR-27a, miR-193a-5p and let-7g (first validation P = 0.801, second validation P = 0.321). MiR-645 was excluded from the data analysis, because it was undetected in 35 of 50 samples (first validation) and in 24 of 56 samples (second validation), respectively. Significant difference in expression level of RNU6B was observed between tumour and adjacent stromal (first validation), and between tumour and normal rectal mucosa (second validation). We recommend the mean expression of miR-27a, miR-193a-5p and let-7g as normalisation factor, when performing miRNA expression analyses by RT-qPCR on rectal cancer tissue.
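The recommended normalisation amounts to averaging the Cq values of the three reference miRNAs and expressing each target as a delta-Cq (and, if desired, a 2^-dCq relative quantity). The sketch below shows that arithmetic on fabricated Cq values; the numbers are not from the study.

```python
# Sketch of reference-mean (delta-Cq) normalisation on fabricated Cq values.
import numpy as np

# Rows: samples; columns: target miRNA, miR-27a, miR-193a-5p, let-7g (Cq values, invented)
cq = np.array([
    [27.1, 24.9, 25.3, 23.8],
    [29.4, 25.1, 25.6, 24.0],
    [26.8, 24.7, 25.2, 23.7],
])

norm_factor = cq[:, 1:].mean(axis=1)   # mean Cq of the three normaliser miRNAs
delta_cq = cq[:, 0] - norm_factor      # normalised target expression
rel_expr = 2.0 ** (-delta_cq)          # relative quantity (2^-dCq)
print(np.round(rel_expr, 3))
```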
Performance of genomic prediction within and across generations in maritime pine.
Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent
2016-08-11
Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.
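A minimal GBLUP-style sketch of cross-generation validation is given below: a genomic relationship matrix is built from simulated markers, the model is trained on the earlier generations and predictive accuracy is reported on the later one. Marker data, sample sizes, the training/validation split and the variance ratio are all simulated assumptions, not the maritime pine data or the exact models of the study.

```python
# Simulated GBLUP-like prediction across a generation split.
import numpy as np

rng = np.random.default_rng(7)
n, m = 300, 2000
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)    # SNPs coded 0/1/2
p = geno.mean(axis=0) / 2.0
Z = geno - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))               # VanRaden genomic relationship matrix

true_g = Z @ rng.normal(0, 0.05, size=m)                  # simulated breeding values
y = true_g + rng.normal(0, true_g.std(), size=n)          # phenotype, heritability ~ 0.5

train = np.arange(0, 240)      # "G0 + G1" individuals (assumed split)
valid = np.arange(240, n)      # "G2" progeny (assumed split)
lam = 1.0                      # sigma_e^2 / sigma_g^2, assumed

K = G[np.ix_(train, train)] + lam * np.eye(len(train))
gebv_valid = G[np.ix_(valid, train)] @ np.linalg.solve(K, y[train] - y[train].mean())
accuracy = np.corrcoef(gebv_valid, y[valid])[0, 1]
print(f"predictive accuracy (correlation with phenotype): {accuracy:.2f}")
```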
Merolla, Giovanni; Corona, Katia; Zanoli, Gustavo; Cerciello, Simone; Giannotti, Stefano; Porcellini, Giuseppe
2017-12-01
The Kerlan-Jobe Orthopaedic Clinic (KJOC) Shoulder and Elbow score is a reliable and sensitive tool to measure the performance of overhead athletes. The purpose of this study was to carry out a cross-cultural adaptation and validation of the KJOC questionnaire in Italian and to assess its reliability, validity, and responsiveness. Ninety professional athletes with a painful shoulder were included in this study and were assigned to the "injury group" (n = 32) or the "overuse group" (n = 58); 65 were managed conservatively and 25 were treated by arthroscopic surgery. To assess the reliability of the KJOC score, patients were asked to fill in the questionnaire at baseline and after 2 weeks. To test the construct validity, KJOC scores were compared to those obtained with the Italian version of the Disabilities of the Arm, Shoulder, and Hand (DASH) scale, and with the DASH sports/performing arts module. To test KJOC score responsiveness, the follow-up KJOC scores of the participants treated conservatively were compared to those of the patients treated by arthroscopic surgery. Statistical analysis demonstrated that the KJOC questionnaire is reliable in terms of the single items and the overall score (ICC 0.95-0.99); that it has high construct validity (rs = -0.697; p < 0.01); and that it is responsive to clinical differences in shoulder function (p < 0.0001). The Italian version of the KJOC Shoulder and Elbow score performed in a similar way to the English version and demonstrated good validity, reliability, and responsiveness after conservative and surgical treatment. Level of evidence: II.
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.
Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H
2016-06-01
Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.
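To make the idea of a physiologic flow command concrete, the sketch below synthesises a crude pulsatile aortic-like waveform from a few Fourier harmonics; the harmonic amplitudes and phases are made up, and only the order of magnitude of the flow (hundreds of mL/s) follows the text.

```python
# Illustrative pulsatile flow waveform built from a few harmonics (not the pump's controller).
import numpy as np

def aortic_flow(t, hr_bpm=60.0):
    """Crude periodic flow waveform [mL/s] from a short Fourier series (assumed coefficients)."""
    w = 2.0 * np.pi * (hr_bpm / 60.0) * t
    flow = (250.0
            + 300.0 * np.sin(w)
            + 150.0 * np.sin(2 * w - 0.8)
            + 60.0 * np.sin(3 * w - 1.6))
    return np.clip(flow, -50.0, None)    # allow only slight retrograde flow

t = np.linspace(0.0, 2.0, 1000)          # two cardiac cycles at 60 bpm
q = aortic_flow(t)
print(f"peak flow ~ {q.max():.0f} mL/s, mean flow ~ {q.mean():.0f} mL/s")
```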
Chiesa, Luca; Panseri, Sara; Pasquale, Elisa; Malandra, Renato; Pavlovic, Radmila; Arioli, Francesco
2018-08-30
High performance liquid chromatography, coupled with a benchtop Q-Exactive Orbitrap high-resolution mass spectrometer, was successfully applied for the determination of 24 target antibiotics (selected beta-lactams, tetracyclines, fluoroquinolones, sulfonamides, phenicols, macrolides, cephalosporins, lincosamides, diaminopyrimidine) in fish matrices. The Q-Exactive parameters were carefully studied to accomplish the best compromise between a suitable scan speed and selectivity, considering the restrictions associated with generic sample preparation methodology. Retention time, an exact mass with tolerance of 2 ppm and data-dependent MS2 spectra were the main identifiers. The method was validated through specificity, linearity, recovery, intra- and inter-day repeatability, decision limit (CCα) and detection capability (CCβ), according to 2002/657/EC. The values of CCα and CCβ ranged from 29.2 to 36.8 and 32.5 to 48.9, respectively, while overall recovery ranged from 91.1 to 105.6%. Fifty fish samples were analysed, showing the sporadic incidence of enrofloxacin, chlortetracycline, oxytetracycline, amoxicillin and trimethoprim, albeit below the maximum residual levels. Copyright © 2018 Elsevier Ltd. All rights reserved.
A protocol for validating Land Surface Temperature from Sentinel-3
NASA Astrophysics Data System (ADS)
Ghent, D.
2015-12-01
One of the main objectives of the Sentinel-3 mission is to measure sea- and land-surface temperature with high-end accuracy and reliability in support of environmental and climate monitoring in an operational context. Calibration and validation are thus key criteria for operationalization within the framework of the Sentinel-3 Mission Performance Centre (S3MPC). Land surface temperature (LST) has a long heritage of satellite observations which have facilitated our understanding of land surface and climate change processes, such as desertification, urbanization, deforestation and land/atmosphere coupling. These observations have been acquired from a variety of satellite instruments on platforms in both low-earth orbit and in geostationary orbit. Retrieval accuracy can be a challenge though; surface emissivities can be highly variable owing to the heterogeneity of the land, and atmospheric effects caused by the presence of aerosols and by water vapour absorption can give a bias to the underlying LST. As such, a rigorous validation is critical in order to assess the quality of the data and the associated uncertainties. The Sentinel-3 Cal-Val Plan for evaluating the level-2 SL_2_LST product builds on an established validation protocol for satellite-based LST. This set of guidelines provides a standardized framework for structuring LST validation activities, and is rapidly gaining international recognition. The protocol introduces a four-pronged approach which can be summarised thus: i) in situ validation where ground-based observations are available; ii) radiance-based validation over sites that are homogeneous in emissivity; iii) intercomparison with retrievals from other satellite sensors; iv) time-series analysis to identify artefacts on an interannual time-scale. This multi-dimensional approach is a necessary requirement for assessing the performance of the LST algorithm for SLSTR which is designed around biome-based coefficients, thus emphasizing the importance of non-traditional forms of validation such as radiance-based techniques. Here we present examples of the application of the protocol to data produced within the ESA DUE GlobTemperature Project. The lessons learnt here are helping to fine-tune the methodology in preparation for Sentinel-3 commissioning.
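As a concrete illustration of the in situ strand of such a protocol, the sketch below compares synthetic satellite LST retrievals against ground-station matchups and reports accuracy (median bias), precision (a robust spread estimate) and RMSE. All values are simulated and the metric choices are generic assumptions, not the S3MPC procedure itself.

```python
# Hedged matchup-comparison sketch on synthetic LST data.
import numpy as np

rng = np.random.default_rng(3)
insitu = rng.uniform(280.0, 320.0, size=500)               # ground-based LST [K], synthetic
satellite = insitu + rng.normal(0.4, 1.2, size=500)        # retrieval with bias + noise, synthetic

diff = satellite - insitu
accuracy = np.median(diff)                                  # median bias
precision = 1.4826 * np.median(np.abs(diff - accuracy))     # robust sigma from the MAD
rmse = np.sqrt(np.mean(diff ** 2))
print(f"accuracy = {accuracy:.2f} K, precision = {precision:.2f} K, RMSE = {rmse:.2f} K")
```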
ERIC Educational Resources Information Center
Mintrop, Heinrich; Trujillo, Tina
2007-01-01
Based on in-depth data from nine demographically similar schools, the study asks five questions in regard to key aspects of the improvement process and that speak to the consequential validity of accountability indicators: Do schools that differ widely according to system performance criteria also differ on the quality of the educational…
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
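The sketch below is not the NASA tool; it is a generic illustration of one common way to attach an error bar to a network's output, namely a bootstrap ensemble of small regressors whose spread provides a confidence band. The data, network size and ensemble size are all arbitrary assumptions.

```python
# Generic bootstrap-ensemble error bars around a regressor's predictions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, size=400)

preds = []
for seed in range(10):                                   # bootstrap ensemble of 10 networks
    idx = rng.integers(0, len(X), size=len(X))
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    net.fit(X[idx], y[idx])
    preds.append(net.predict(X[:5]))                     # predictions for a few query points

preds = np.array(preds)
mean, err = preds.mean(axis=0), 1.96 * preds.std(axis=0)
for m, e in zip(mean, err):
    print(f"output = {m:.3f} +/- {e:.3f}")
```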
NASA Technical Reports Server (NTRS)
Chen, Y.; Pietrzyk, R. A.; Whitson, P. A.
1997-01-01
A high-performance liquid chromatographic method was developed as an alternative to automated enzymatic analysis of uric acid in human urine preserved with thymol and/or thimerosal. Uric acid (tR = 10 min) and creatinine (tR = 5 min) were separated and quantified during isocratic elution (0.025 M acetate buffer, pH 4.5) from a mu Bondapak C18 column. The uric-acid peak was identified chemically by incubating urine samples with uricase. The thymol/thimerosal peak appeared at 31 min during the washing step and did not interfere with the analysis. We validated the high-performance liquid chromatographic method for linearity, precision and accuracy, and the results were found to be excellent.
Greher, Michael R; Wodushek, Thomas R
2017-03-01
Performance validity testing refers to neuropsychologists' methodology for determining whether neuropsychological test performances completed in the course of an evaluation are valid (ie, the results of true neurocognitive function) or invalid (ie, overly impacted by the patient's effort/engagement in testing). This determination relies upon the use of either standalone tests designed for this sole purpose, or specific scores/indicators embedded within traditional neuropsychological measures that have demonstrated this utility. In response to a greater appreciation for the critical role that performance validity issues play in neuropsychological testing and the need to measure this variable to the best of our ability, the scientific base for performance validity testing has expanded greatly over the last 20 to 30 years. As such, the majority of current day neuropsychologists in the United States use a variety of measures for the purpose of performance validity testing as part of everyday forensic and clinical practice and address this issue directly in their evaluations. The following is the first article of a 2-part series that will address the evolution of performance validity testing in the field of neuropsychology, both in terms of the science as well as the clinical application of this measurement technique. The second article of this series will review performance validity tests in terms of methods for development of these measures, and maximizing of diagnostic accuracy.
Poljak, Mario; Oštrbenk, Anja
2013-01-01
Human papillomavirus (HPV) testing has become an essential part of current clinical practice in the management of cervical cancer and precancerous lesions. We reviewed the most important validation studies of a next-generation real-time polymerase chain reaction-based assay, the RealTime High Risk HPV test (RealTime)(Abbott Molecular, Des Plaines, IL, USA), for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older published in peer-reviewed journals from 2009 to 2013. RealTime is designed to detect 14 high-risk HPV genotypes with concurrent distinction of HPV-16 and HPV-18 from 12 other HPV genotypes. The test was launched on the European market in January 2009 and is currently used in many laboratories worldwide for routine detection of HPV. We concisely reviewed validation studies of a next-generation real-time polymerase chain reaction (PCR)-based assay: the Abbott RealTime High Risk HPV test. Eight validation studies of RealTime in referral settings showed its consistently high absolute clinical sensitivity for both CIN2+ (range 88.3-100%) and CIN3+ (range 93.0-100%), as well as comparative clinical sensitivity relative to the currently most widely used HPV test: the Qiagen/Digene Hybrid Capture 2 HPV DNA Test (HC2). Due to the significantly different composition of the referral populations, RealTime absolute clinical specificity for CIN2+ and CIN3+ varied greatly across studies, but was comparable relative to HC2. Four validation studies of RealTime performance in cervical cancer screening settings showed its consistently high absolute clinical sensitivity for both CIN2+ and CIN3+, as well as comparative clinical sensitivity and specificity relative to HC2 and GP5+/6+ PCR. RealTime has been extensively evaluated in the last 4 years. RealTime can be considered clinically validated for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older.
Psychometric Properties of the Persian Version of the Tinnitus Handicap Inventory (THI-P)
Jalali, Mir Mohammad; Soleimani, Robabeh; Fallahi, Mahnaz; Aghajanpour, Mohammad; Elahi, Masoumeh
2015-01-01
Introduction: Tinnitus can have a significant effect on an individual’s quality of life, and is very difficult to quantify. One of the most popular questionnaires used in this area is the Tinnitus Handicap Inventory (THI). The aim of this study was to determine the reliability and validity of a Persian translation of the Tinnitus Handicap Inventory (THI-P). Materials and Methods: This prospective clinical study was performed in the Otolaryngology Department of Guilan University of Medical Sciences, Iran. A total of 102 patients aged 23–80 years with tinnitus completed the THI-P. The patients were instructed to complete the Beck Depression Inventory (BDI) and the State-Trait Anxiety Inventory (STAI). Audiometry was performed. Eighty-five patients were asked to complete the THI-P for a second time 7–10 days after the initial interview. We assessed test–retest reliability and internal reliability of the THI-P. Validity was assessed by analyzing the THI-P of patients according to their age, tinnitus duration and psychological distress (BDI and STAI). A factor analysis was computed to verify if three subscales (functional, emotional, and catastrophic) represented three distinct variables. Results: Test–retest correlation coefficient scores were highly significant. The THI-P and its subscales showed good internal consistency reliability (α = 0.80 to 0.96). High-to-moderate correlations were observed between THI-P and psychological distress and tinnitus symptom ratings. A confirmatory factor analysis failed to validate the three subscales of THI, and high inter-correlations found between the subscales question whether they represent three distinct factors. Conclusion: The results suggest that the THI-P is a reliable and valid tool which can be used in a clinical setting to quantify the impact of tinnitus on the quality of life of Iranian patients. PMID:25938079
Chin, Kelly M; Gomberg-Maitland, Mardi; Channick, Richard N; Cuttica, Michael J; Fischer, Aryeh; Frantz, Robert P; Hunsche, Elke; Kleinman, Leah; McConnell, John W; McLaughlin, Vallerie V; Miller, Chad E; Zamanian, Roham T; Zastrow, Michael S; Badesch, David B
2018-04-26
Disease-specific patient-reported outcome (PRO) instruments are important in assessing the impact of disease and treatment. PAH-SYMPACT® is the first questionnaire for quantifying pulmonary arterial hypertension (PAH) symptoms and impacts developed following the 2009 FDA PRO guidance; previous qualitative research with PAH patients supported its initial content validity. Content finalization and psychometric validation were conducted using data from SYMPHONY, a single-arm, 16-week study with macitentan 10 mg in US patients with PAH. Item performance, Rasch, and factor analyses were used to select final item content of the PRO and define its domain structure. Internal consistency, test-retest reliability, known-group and construct validity, sensitivity to change, and influence of oxygen on item performance were evaluated. Data from 278 patients (79% female, mean age 60 years) were analyzed. Following removal of redundant/misfitting items, the final questionnaire has 11 symptom items across 2 domains (cardiopulmonary and cardiovascular symptoms) and 11 impact items across 2 domains (physical and cognitive/emotional impacts). Differential item functioning analysis confirmed PRO scoring is unaffected by oxygen use. For all 4 domains, internal consistency reliability was high (Cronbach's alpha >0.80) and scores were highly reproducible in stable patients (intra-class correlation coefficient 0.84-0.94). Correlations with CAMPHOR and SF-36 were moderate-to-high (|r| = 0.34-0.80). The questionnaire differentiated well between patients with different disease severity levels, and was sensitive to improvements in clinician- and patient-reported disease severity. The PAH-SYMPACT® is a brief, disease-specific PRO instrument possessing good psychometric properties which can be administered in clinical practice and clinical studies. Copyright © 2018. Published by Elsevier Inc.
Exploring geo-tagged photos for land cover validation with deep learning
NASA Astrophysics Data System (ADS)
Xing, Hanfa; Meng, Yuan; Wang, Zixuan; Fan, Kaixuan; Hou, Dongyang
2018-07-01
Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, and is usually implemented at high cost through sample interpretation with remotely sensed images or field survey. With an increasing availability of geo-tagged landscape photos, automatic photo-recognition methodologies, e.g. deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employ geo-tagged photos for land cover validation using deep learning. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photo distribution and classification probabilities. The implementations were conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results showed promise for land cover validation, given that GlobeLand30 showed an overall accuracy of 83.80% with classified samples, which was close to the validation result of 80.45% based on visual interpretation. Additionally, the performances of deep learning based on ResNet-50 and AlexNet were also quantified, revealing no substantial differences in final validation results. The proposed approach ensures geo-tagged photo quality, and supports the sample classification strategy by considering photo distribution, with accuracy improvement from 72.07% to 79.33% compared with solely considering the single nearest photo. Consequently, the presented approach demonstrates the feasibility of deep learning for identifying land cover information in geo-tagged photos, and has great potential to support and improve the efficiency of land cover validation.
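A hypothetical sketch of the photo-recognition step is shown below: scoring a single geo-tagged photo with a pretrained VGG-16 backbone (recent torchvision weights API assumed). Mapping ImageNet classes to land cover classes, or fine-tuning the classifier head on labelled landscape photos, is assumed and not shown, and the file name is a placeholder.

```python
# Hedged sketch: classify one geo-tagged photo with a pretrained VGG-16.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

img = Image.open("geo_tagged_photo.jpg").convert("RGB")   # placeholder file name
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)
top_prob, top_class = probs.max(dim=1)
print(f"top class index {top_class.item()} with probability {top_prob.item():.2f}")
```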
NASA Technical Reports Server (NTRS)
Larar, A.; Zhou, D.; Smith, W.
2009-01-01
Advanced satellite sensors are tasked with improving global-scale measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring, and environmental change detection. Validation of the entire measurement system is crucial to achieving this goal and thus maximizing research and operational utility of resultant data. Field campaigns employing satellite under-flights with well-calibrated FTS sensors aboard high-altitude aircraft are an essential part of this validation task. The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) has been a fundamental contributor in this area by providing coincident high spectral/spatial resolution observations of infrared spectral radiances along with independently-retrieved geophysical products for comparison with like products from satellite sensors being validated. This paper focuses on some of the challenges associated with validating advanced atmospheric sounders and the benefits obtained from employing airborne interferometers such as the NAST-I. Select results from underflights of the Aqua Atmospheric InfraRed Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) obtained during recent field campaigns will be presented.
Data Storage Hierarchy Systems for Data Base Computers
1979-08-01
...with very large capacity and small access time. As part of the INFOPLEX research effort, this thesis is focused on the study of high performance, highly ...
Development and validation of a Client Problem Profile and Index for drug treatment.
Joe, George W; Simpson, D Dwayne; Greener, Jack M; Rowan-Szal, Grace A
2004-08-01
The development of the Client Problem Profile and Index are described, and initial concurrent and predictive validity data are presented for a sample of 547 patients in outpatient methadone treatment. Derived from the TCU Brief Intake for drug treatment admissions, the profile covers 14 problem areas related to drug use (particularly cocaine, heroin/opiate, marijuana, other illegal drugs, and multiple drug use), HIV risks, psychosocial-functioning, health, employment, and criminality. Analyses of predictive validity show the profile and its index (number of problem areas) were significantly related to therapeutic engagement, during-treatment performance, and posttreatment follow-up outcomes. Low moderate to high moderate effect sizes were observed in analyses of the index's discrimination.
DOT National Transportation Integrated Search
2017-09-01
Numerous studies have shown that G*/sin δ, the high-temperature specification parameter for the current Performance Graded (PG) asphalt binder system, is not adequate to reflect the rutting characteristics of polymer-modified binders. Consequently, many state De...
Measurement of menadione in urine by HPLC
USDA-ARS?s Scientific Manuscript database
Mammals convert phylloquinone to MK-4, with menadione as a possible intermediate. We developed and validated a method measuring urinary menadione. A high performance liquid chromatography (HPLC) method with a C30 column, fluorescence detection and post-column zinc reduction was developed. The mobile...
Rezk, Naser L.; White, Nicole; Bridges, Arlene S.; Abdel-Megeed, Mohamed F.; Mohamed, Tarek M.; Moselhy, Said S.; Kashuba, Angela D. M.
2010-01-01
Studying the pharmacokinetics of antiretroviral drugs in breast milk has important implications for the health of both the mother and the infant, particularly in resource-poor countries. Breast milk is a highly complex biological matrix, yet it is necessary to develop and validate methods in this matrix, which simultaneously measure multiple analytes, as women may be taking any number of drug combinations to combat human immunodeficiency virus infection. Here, we report a novel extraction method coupled to high-performance liquid chromatography and tandem mass spectrometry for the accurate, precise, and specific measurement of 7 antiretroviral drugs currently prescribed to infected mothers. Using 200 µL of human breast milk, simultaneous quantification of lamivudine (3TC), stavudine (d4T), zidovudine (ZDV), nevirapine (NVP), nelfinavir (NFV), ritonavir, and lopinavir was validated over the range of 10–10,000 ng/mL. Intraday accuracy and precision for all analytes were 99.3% and 5.0 %, respectively. Interday accuracy and precision were 99.4 % and 7.8%, respectively. Cross-assay validation with UV detection was performed using clinical breast milk samples, and the results of the 2 assays were in good agreement (P = 0.0001, r = 0.97). Breast milk to plasma concentration ratios for the different antiretroviral drugs were determined as follows: 3TC = 2.96, d4T = 1.73, ZDV = 1.17, NVP = 0.82, and NFV = 0.21. PMID:18758393
Edelmann, Cathrin; Ghiassi, Ramesh; Vogt, Deborah R; Partridge, Martyn R; Khatami, Ramin; Leuppi, Jörg D; Miedinger, David
2017-01-01
The aim of this study was to evaluate the validity of a new pictorial form of a screening test for obstructive sleep apnea syndrome (OSAS) - the pictorial Sleepiness and Sleep Apnoea Scale (pSSAS). Validation was performed in a sample of patients admitted to sleep clinics in the UK and Switzerland. All study participants were investigated with objective sleep tests such as full-night-attended polysomnography or polygraphy. The pSSAS was validated by taking into account the individual result of the sleep study, sleep-related questionnaires and objective parameters such as body mass index (BMI) or neck circumference. Different scoring schemes of the pSSAS were evaluated, and an internal validation was undertaken. The full data set consisted of 431 individuals (234 patients from the UK, 197 patients from Switzerland). The pSSAS showed good predictive performance for OSAS with an area under the curve between 0.77 and 0.81 depending on which scoring scheme was used. The subscores of the pSSAS had a moderate-to-strong correlation with widely used screening questionnaires for OSAS or excessive daytime sleepiness as well as with BMI and neck circumference. The pSSAS can be used to select patients with a high probability of having OSAS. Due to its simple pictorial design with short questions, it might be suitable for screening in populations with low health literacy and in non-native English or German speakers.
Roberson, David W; Kentala, Erna; Forbes, Peter
2005-12-01
The goals of this project were 1) to develop and validate an objective instrument to measure surgical performance at tonsillectomy, 2) to assess its interobserver and interobservation reliability and construct validity, and 3) to select those items with best reliability and most independent information to design a simplified form suitable for routine use in otolaryngology surgical evaluation. Prospective, observational data collection for an educational quality improvement project. The evaluation instrument was based on previous instruments developed in general surgery with input from attending otolaryngologic surgeons and experts in medical education. It was pilot tested and subjected to iterative improvements. After the instrument was finalized, a total of 55 tonsillectomies were observed and scored during academic year 2002 to 2003: 45 cases by residents at different points during their rotation, 5 by fellows, and 5 by faculty. Results were assessed for interobserver reliability, interobservation reliability, and construct validity. Factor analysis was used to identify items with independent information. Interobserver and interobservation reliability was high. On technical items, faculty substantially outperformed fellows, who in turn outperformed residents (P < .0001 for both comparisons). On the "global" scale (overall assessment), residents improved an average of 1 full point (on a 5 point scale) during a 3 month rotation (P = .01). In the subscale of "patient care," results were less clear cut: fellows outperformed residents, who in turn outperformed faculty, but only the fellows to faculty comparison was statistically significant (P = .04), and residents did not clearly improve over time (P = .36). Factor analysis demonstrated that technical items and patient care items factor separately and thus represent separate skill domains in surgery. It is possible to objectively measure surgical skill at tonsillectomy with high reliability and good construct validity. Factor analysis demonstrated that patient care is a distinct domain in surgical skill. Although the interobserver reliability for some patient care items reached statistical significance, it was not high enough for "high stakes testing" purposes. Using reliability and factor analysis results, we propose a simplified instrument for use in evaluating trainees in otolaryngologic surgery.
Unterseer, Sandra; Bauer, Eva; Haberer, Georg; Seidel, Michael; Knaak, Carsten; Ouzunova, Milena; Meitinger, Thomas; Strom, Tim M; Fries, Ruedi; Pausch, Hubert; Bertani, Christofer; Davassi, Alessandro; Mayer, Klaus Fx; Schön, Chris-Carolin
2014-09-29
High density genotyping data are indispensable for genomic analyses of complex traits in animal and crop species. Maize is one of the most important crop plants worldwide, however a high density SNP genotyping array for analysis of its large and highly dynamic genome was not available so far. We developed a high density maize SNP array composed of 616,201 variants (SNPs and small indels). Initially, 57 M variants were discovered by sequencing 30 representative temperate maize lines and then stringently filtered for sequence quality scores and predicted conversion performance on the array resulting in the selection of 1.2 M polymorphic variants assayed on two screening arrays. To identify high-confidence variants, 285 DNA samples from a broad genetic diversity panel of worldwide maize lines including the samples used for sequencing, important founder lines for European maize breeding, hybrids, and proprietary samples with European, US, semi-tropical, and tropical origin were used for experimental validation. We selected 616 k variants according to their performance during validation, support of genotype calls through sequencing data, and physical distribution for further analysis and for the design of the commercially available Affymetrix® Axiom® Maize Genotyping Array. This array is composed of 609,442 SNPs and 6,759 indels. Among these are 116,224 variants in coding regions and 45,655 SNPs of the Illumina® MaizeSNP50 BeadChip for study comparison. In a subset of 45,974 variants, apart from the target SNP additional off-target variants are detected, which show only a minor bias towards intermediate allele frequencies. We performed principal coordinate and admixture analyses to determine the ability of the array to detect and resolve population structure and investigated the extent of LD within a worldwide validation panel. The high density Affymetrix® Axiom® Maize Genotyping Array is optimized for European and American temperate maize and was developed based on a diverse sample panel by applying stringent quality filter criteria to ensure its suitability for a broad range of applications. With 600 k variants it is the largest currently publically available genotyping array in crop species.
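The population-structure step mentioned above (principal coordinate/component analysis of the genotype matrix) can be illustrated on a toy simulated dataset, as in the sketch below; real arrays would involve hundreds of thousands of variants and the study's own analysis pipeline.

```python
# Toy principal component analysis of a simulated SNP genotype matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
pop_a = rng.binomial(2, 0.2, size=(60, 5000))     # population A, allele frequency 0.2
pop_b = rng.binomial(2, 0.5, size=(60, 5000))     # population B, allele frequency 0.5
geno = np.vstack([pop_a, pop_b]).astype(float)

geno -= geno.mean(axis=0)                         # centre each SNP
coords = PCA(n_components=2).fit_transform(geno)
print("PC1 mean per group:",
      coords[:60, 0].mean().round(2), coords[60:, 0].mean().round(2))
```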
Niu, Tian-Zeng; Zhang, Yu-Wei; Bao, Yong-Li; Wu, Yin; Yu, Chun-Lei; Sun, Lu-Guo; Yi, Jing-Wen; Huang, Yan-Xin; Li, Yu-Xin
2013-03-25
A reversed phase high performance liquid chromatography method coupled with a diode array detector (HPLC-DAD) was developed for the first time for the simultaneous determination of 9 flavonoids in Senecio cannabifolius, a traditional Chinese medicinal herb. Agilent Zorbax SB-C18 column was used at room temperature and the mobile phase was a mixture of acetonitrile and 0.5% formic acid (v/v) in water in the gradient elution mode at a flow-rate of 1.0mlmin(-1), detected at 360nm. Validation of this method was performed to verify the linearity, precision, limits of detection and quantification, intra- and inter-day variabilities, reproducibility and recovery. The calibration curves showed good linearities (R(2)>0.9995) within the test ranges. The relative standard deviation (RSD) of the method was less than 3.0% for intra- and inter-day assays. The samples were stable for at least 96h, and the average recoveries were between 90.6% and 102.5%. High sensitivity was demonstrated with detection limits of 0.028-0.085μg/ml for flavonoids. The newly established HPLC method represents a powerful technique for the quality assurance of S. cannabifolius. Copyright © 2012 Elsevier B.V. All rights reserved.
Nyeborg, M; Pissavini, M; Lemasson, Y; Doucet, O
2010-02-01
The aim of the study was the validation of a high-performance liquid chromatography (HPLC) method for the simultaneous and quantitative determination of twelve commonly used organic UV-filters (phenylbenzimidazole sulfonic acid, benzophenone-3, isoamyl p-methoxycinnamate, diethylamino hydroxybenzoyl hexyl benzoate, octocrylene, ethylhexyl methoxycinnamate, ethylhexyl salicylate, butyl methoxydibenzoylmethane, diethylhexyl butamido triazone, ethylhexyl triazone, methylene bis-benzotriazolyl tetramethylbutylphenol and bis-ethylhexyloxyphenol methoxyphenyl triazine) contained in suncare products. The separation and quantitative determination was performed in <30 min, using a Symmetry Shield(R) C18 (5 microm) column from Waters and a mobile phase (gradient mode) consisting of ethanol and acidified water. UV measurements were carried out at multi-wavelengths, according to the absorption of the analytes.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Riha, David S.
2013-01-01
Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.
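A generic Monte Carlo sketch of the kind of probability-of-failure estimate such an analysis produces is given below. It is not the NESSUS/NASGRO model chain; the capacity and usage distributions are invented solely to show the sampling logic.

```python
# Generic Monte Carlo probability-of-failure estimate on invented distributions.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
cycles_to_failure = rng.lognormal(mean=11.0, sigma=0.5, size=n)   # fatigue capacity (assumed)
applied_cycles = rng.normal(30_000.0, 5_000.0, size=n)            # mission usage (assumed)

pof = np.mean(applied_cycles >= cycles_to_failure)                # failure when usage exceeds capacity
se = np.sqrt(pof * (1 - pof) / n)                                 # Monte Carlo standard error
print(f"estimated probability of failure: {pof:.2e} (MC std. err. {se:.1e})")
```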
Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method
NASA Astrophysics Data System (ADS)
Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio
Interest in high-resolution satellite imagery (HRSI) is spreading across several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes that georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, defining a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning, bioinformatics and, more generally, any field requiring an evaluation of the performance of a learning algorithm (e.g. geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method overcomes the main drawbacks of the commonly used method (Hold-Out Validation, HOV), which is based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable and it is not applicable when only a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), performing the LOOCV manually since only the HOV is implemented. The software comparison confirmed the overall correctness and good performance of the SISAR model, while the results demonstrated the advantages of the LOOCV method.
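The LOOCV idea itself is simple to demonstrate: each ground point is left out in turn, the model is refit on the remaining points, and the prediction error at the left-out point is accumulated. The sketch below uses a plain affine mapping and synthetic ground points as a stand-in for the rigorous sensor model, so it illustrates only the validation loop, not SISAR.

```python
# Simplified LOOCV loop for an orientation-like model on synthetic ground points.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
image_xy = rng.uniform(0, 10000, size=(20, 2))                   # image coordinates (pixels)
A = np.array([[0.5, 0.01], [-0.02, 0.5]])                        # assumed affine mapping
ground_xy = image_xy @ A.T + np.array([3.2e5, 4.1e6]) + rng.normal(0, 0.8, (20, 2))

errors = []
for train_idx, test_idx in LeaveOneOut().split(image_xy):
    model = LinearRegression().fit(image_xy[train_idx], ground_xy[train_idx])
    pred = model.predict(image_xy[test_idx])
    errors.append(np.linalg.norm(pred - ground_xy[test_idx]))    # error at the left-out point

errors = np.array(errors)
print(f"LOOCV horizontal RMSE: {np.sqrt((errors ** 2).mean()):.2f} m")
```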
Fundamentals of endoscopic surgery: creation and validation of the hands-on test.
Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M
2014-03-01
The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination, and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. These skills were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing the scores. Passing scores were determined by a contrasting groups methodology and use of receiver operating characteristic curves. A total of 160 participants (17% GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability, and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73 with participants' level of endoscopic experience, providing evidence of validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets the rigorous standards of reliability and validity required for high-stakes examinations and, together with the knowledge component, may help contribute to the definition and determination of competence in endoscopy.
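The internal consistency statistic used here, Cronbach's alpha, can be computed directly from a participants-by-tasks score matrix; a minimal sketch with made-up scores (not the FES data) follows.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_participants, n_tasks) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# illustrative data: 8 participants x 5 simulator tasks (not the study data)
rng = np.random.default_rng(2)
ability = rng.normal(size=(8, 1))
demo = ability + rng.normal(scale=0.5, size=(8, 5))
print(f"alpha = {cronbach_alpha(demo):.2f}")
```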
Validation of the DECAF score to predict hospital mortality in acute exacerbations of COPD
Echevarria, C; Steer, J; Heslop-Marshall, K; Stenton, SC; Hickey, PM; Hughes, R; Wijesinghe, M; Harrison, RN; Steen, N; Simpson, AJ; Gibson, GJ; Bourke, SC
2016-01-01
Background Hospitalisation due to acute exacerbations of COPD (AECOPD) is common, and subsequent mortality high. The DECAF score was derived for accurate prediction of mortality and risk stratification to inform patient care. We aimed to validate the DECAF score, internally and externally, and to compare its performance to other predictive tools. Methods The study took place in the two hospitals within the derivation study (internal validation) and in four additional hospitals (external validation) between January 2012 and May 2014. Consecutive admissions were identified by screening admissions and searching coding records. Admission clinical data, including DECAF indices, and mortality were recorded. The prognostic value of DECAF and other scores were assessed by the area under the receiver operator characteristic (AUROC) curve. Results In the internal and external validation cohorts, 880 and 845 patients were recruited. Mean age was 73.1 (SD 10.3) years, 54.3% were female, and mean (SD) FEV1 45.5 (18.3) per cent predicted. Overall mortality was 7.7%. The DECAF AUROC curve for inhospital mortality was 0.83 (95% CI 0.78 to 0.87) in the internal cohort and 0.82 (95% CI 0.77 to 0.87) in the external cohort, and was superior to other prognostic scores for inhospital or 30-day mortality. Conclusions DECAF is a robust predictor of mortality, using indices routinely available on admission. Its generalisability is supported by consistent strong performance; it can identify low-risk patients (DECAF 0–1) potentially suitable for Hospital at Home or early supported discharge services, and high-risk patients (DECAF 3–6) for escalation planning or appropriate early palliation. Trial registration number UKCRN ID 14214. PMID:26769015
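The AUROC reported above is equivalent to the probability that a randomly chosen non-survivor has a higher score than a randomly chosen survivor; a minimal sketch of that rank-based computation, using made-up DECAF scores rather than the study data, is shown below.

```python
import numpy as np

def auroc(scores, died):
    """AUROC via the rank-sum identity: the probability that a randomly chosen
    non-survivor scores higher than a randomly chosen survivor (ties count 0.5)."""
    scores, died = np.asarray(scores, float), np.asarray(died, bool)
    pos, neg = scores[died], scores[~died]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# illustrative DECAF scores (0-6) and in-hospital deaths -- not the study data
decaf = np.array([0, 1, 1, 2, 2, 3, 3, 4, 5, 6])
death = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
print(f"AUROC = {auroc(decaf, death):.2f}")
```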
Hall, Emily A; Docherty, Carrie L
2017-07-01
To determine the concurrent validity of standard clinical outcome measures compared to a laboratory outcome measure while performing the weight-bearing lunge test (WBLT). Cross-sectional study. Fifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with a digital inclinometer at 15cm distal to the tibial tuberosity (°), dorsiflexion angle with the inclinometer at the tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were calculated, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates stronger concurrent validity. There was a high correlation between each measurement technique and the reference standard. Specifically, the correlation between the inclinometer placement at 15cm below the tibial tuberosity (44.9°±5.5°) and the motion capture angle (27.0°±6.0°) was r=0.76 (p=0.001), between the inclinometer placement at the tibial tuberosity (39.0°±4.6°) and the motion capture angle was r=0.71 (p=0.001), and between the distance-from-the-wall clinical measure (10.3±3.0cm) and the motion capture angle was r=0.74 (p=0.001). This study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, maximum lunge distance and inclinometer angles are both valid assessments during the weight-bearing lunge test. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
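A concurrent-validity check of this kind reduces to a Pearson correlation between each clinical measure and the reference standard; a minimal sketch with synthetic paired measurements (not the study data) follows.

```python
import numpy as np
from scipy.stats import pearsonr

# illustrative paired measurements (degrees), not the study data
motion_capture = np.array([22.1, 25.4, 27.8, 30.2, 24.6, 33.0, 28.5, 26.1])
inclinometer_15cm = motion_capture + 17 + np.random.default_rng(3).normal(0, 2, 8)

# concurrent validity of the clinical measure against the reference standard
r, p = pearsonr(inclinometer_15cm, motion_capture)
print(f"r = {r:.2f}, p = {p:.3f}")
```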
Zammit, Andrea R; Hall, Charles B; Lipton, Richard B; Katz, Mindy J; Muniz-Terrera, Graciela
2018-05-01
The aim of this study was to identify natural subgroups of older adults based on cognitive performance, and to establish each subgroup's characteristics based on demographic factors, physical function, psychosocial well-being, and comorbidity. We applied latent class (LC) modeling to identify subgroups in baseline assessments of 1345 Einstein Aging Study (EAS) participants free of dementia. The EAS is a cohort study of community-dwelling adults aged 70+ years living in the Bronx, NY. We used 10 neurocognitive tests and 3 covariates (age, sex, education) to identify latent subgroups. We used goodness-of-fit statistics to identify the optimal class solution and assess model adequacy. We also validated our model using two-fold split-half cross-validation. The sample had a mean age of 78.0 (SD=5.4) and a mean of 13.6 years of education (SD=3.5). A 9-class solution based on cognitive performance at baseline was the best-fitting model. We characterized the 9 identified classes as (i) disadvantaged, (ii) poor language, (iii) poor episodic memory and fluency, (iv) poor processing speed and executive function, (v) low average, (vi) high average, (vii) average, (viii) poor executive and poor working memory, (ix) elite. The cross-validation indicated stable class assignment with the exception of the average and high average classes. LC modeling in a community sample of older adults revealed 9 cognitive subgroups. Assignment of subgroups was reliable and associated with external validators. Future work will test the predictive validity of these groups for outcomes such as Alzheimer's disease, vascular dementia and death, as well as markers of biological pathways that contribute to cognitive decline. (JINS, 2018, 24, 511-523).
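Latent class solutions of this kind are typically chosen by comparing goodness-of-fit indices across candidate class counts. The sketch below mimics that step with a Gaussian mixture model and BIC on synthetic test scores; it is a rough analogue for illustration, not the covariate-adjusted LC model used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# illustrative standardized scores on 10 neurocognitive tests (not the EAS data)
rng = np.random.default_rng(4)
scores = np.vstack([rng.normal(loc=m, scale=1.0, size=(150, 10))
                    for m in (-1.0, 0.0, 1.0)])

# choose the number of latent classes by BIC, echoing the goodness-of-fit step
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
        for k in range(2, 10)}
best_k = min(bics, key=bics.get)
print("BIC by class count:", {k: round(v) for k, v in bics.items()})
print("best-fitting solution:", best_k, "classes")
```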
Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L
2005-06-01
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course, a written grant proposal, and an oral summary of the proposal, provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
Propulsion Risk Reduction Activities for Non-Toxic Cryogenic Propulsion
NASA Technical Reports Server (NTRS)
Smith, Timothy D.; Klem, Mark D.; Fisher, Kenneth
2010-01-01
The Propulsion and Cryogenics Advanced Development (PCAD) Project's primary objective is to develop propulsion system technologies for non-toxic or "green" propellants. The PCAD project focuses on the development of non-toxic propulsion technologies needed to provide necessary data and relevant experience to support informed decisions on implementation of non-toxic propellants for space missions. Implementation of non-toxic propellants in high performance propulsion systems offers NASA an opportunity to consider other options than current hypergolic propellants. The PCAD Project is emphasizing technology efforts in reaction control system (RCS) thruster designs, ascent main engines (AME), and descent main engines (DME). PCAD has a series of tasks and contracts to conduct risk reduction and/or retirement activities to demonstrate that non-toxic cryogenic propellants can be a feasible option for space missions. Work has focused on 1) reducing the risk of liquid oxygen/liquid methane ignition, demonstrating the key enabling technologies, and validating performance levels for reaction control engines for use on descent and ascent stages; 2) demonstrating the key enabling technologies and validating performance levels for liquid oxygen/liquid methane ascent engines; and 3) demonstrating the key enabling technologies and validating performance levels for deep throttling liquid oxygen/liquid hydrogen descent engines. The progress of these risk reduction and/or retirement activities will be presented.
Simplified Summative Temporal Bone Dissection Scale Demonstrates Equivalence to Existing Measures.
Pisa, Justyn; Gousseau, Michael; Mowat, Stephanie; Westerberg, Brian; Unger, Bert; Hochman, Jordan B
2018-01-01
Emphasis on patient safety has created the need for quality assessment of fundamental surgical skills. Existing temporal bone rating scales are laborious, subject to evaluator fatigue, and contain inconsistencies when conferring points. To address these deficiencies, a novel binary assessment tool was designed and validated against a well-established rating scale. Residents completed a mastoidectomy with posterior tympanotomy on identical 3D-printed temporal bone models. Four neurotologists evaluated each specimen using a validated scale (Welling) and a newly developed "CanadaWest" scale, with scoring repeated after a 4-week interval. Nineteen participants were clustered into junior, intermediate, and senior cohorts. An ANOVA found significant differences in performance between the junior and intermediate cohorts and between the junior and senior cohorts for both the Welling and CanadaWest scales (P < .05). Neither scale found a significant difference between intermediate and senior resident performance (P > .05). Cohen's kappa showed strong intrarater reliability (0.711) and a high degree of interrater reliability (0.858) for the CanadaWest scale, similar to the Welling scale values of 0.713 and 0.917, respectively. The CanadaWest scale was facile and delineated performance by experience level with strong intrarater reliability. Comparable to the validated Welling scale, it distinguished junior from senior trainees but was challenged in differentiating intermediate and senior trainee performance.
NASA Technical Reports Server (NTRS)
Niiya, Karen E.; Walker, Richard E.; Pieper, Jerry L.; Nguyen, Thong V.
1993-01-01
This final report includes a discussion of the work accomplished during the period from Dec. 1988 through Nov. 1991. The objective of the program was to assemble existing performance and combustion stability models into a usable design methodology capable of designing and analyzing high-performance and stable LOX/hydrocarbon booster engines. The methodology was then used to design a validation engine. The capabilities and validity of the methodology were demonstrated using this engine in an extensive hot fire test program. The engine used LOX/RP-1 propellants and was tested over a range of mixture ratios, chamber pressures, and acoustic damping device configurations. This volume contains time domain and frequency domain stability plots which indicate the pressure perturbation amplitudes and frequencies from approximately 30 tests of a 50K thrust rocket engine using LOX/RP-1 propellants over a range of chamber pressures from 240 to 1750 psia with mixture ratios of from 1.2 to 7.5. The data is from test configurations which used both bitune and monotune acoustic cavities and from tests with no acoustic cavities. The engine had a length of 14 inches and a contraction ratio of 2.0 using a 7.68 inch diameter injector. The data was taken from both stable and unstable tests. All combustion instabilities were spontaneous in the first tangential mode. Although stability bombs were used and generated overpressures of approximately 20 percent, no tests were driven unstable by the bombs. The stability instrumentation included six high-frequency Kistler transducers in the combustion chamber, a high-frequency Kistler transducer in each propellant manifold, and tri-axial accelerometers. Performance data is presented, both characteristic velocity efficiencies and energy release efficiencies, for those tests of sufficient duration to record steady state values.
Validity of data in the Danish Colorectal Cancer Screening Database.
Thomsen, Mette Kielsholm; Njor, Sisse Helle; Rasmussen, Morten; Linnemann, Dorte; Andersen, Berit; Baatrup, Gunnar; Friis-Hansen, Lennart Jan; Jørgensen, Jens Christian Riis; Mikkelsen, Ellen Margrethe
2017-01-01
In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes "clean colon", "colonoscopy performed", "overall completeness of colonoscopy", "incomplete colonoscopy", "polypectomy", "tumor tissue left behind", "number of polyps", "lost polyps", "risk group of polyps" and "colorectal cancer and polyps/benign tumor". Hospital records were available for 136 individuals. Agreement was highest for "colorectal cancer" (97.1%) and lowest for "lost polyps" (88.2%). Sensitivity varied between moderate and high, with 60.0% for "incomplete colonoscopy" and 98.5% for "colonoscopy performed". Specificity was 92.7% or above, except for the categories "colonoscopy performed" and "overall completeness of colonoscopy", where the specificity was low; however, the estimates were imprecise. A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening.
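The validity measures reported for each code category follow directly from a two-by-two table against the hospital-record reference; a minimal sketch with illustrative counts (not the DCCSD figures) is given below.

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Agreement and validity measures for one code category vs. hospital records."""
    return {
        "agreement":   (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# illustrative counts for a single category (not the DCCSD figures)
print(diagnostic_validity(tp=60, fp=4, fn=5, tn=67))
```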
Lehotay, Steven J; Lightfield, Alan R; Geis-Asteggiante, Lucía; Schneider, Marilyn J; Dutko, Terry; Ng, Chilton; Bluhm, Louis; Mastovska, Katerina
2012-08-01
In the USA, the US Department of Agriculture's Food Safety and Inspection Service (FSIS) conducts the National Residue Program designed to monitor veterinary drug and other chemical residues in beef and other slaughtered food animals. Currently, FSIS uses a 7-plate bioassay in the laboratory to screen for antimicrobial drugs in bovine kidneys from those animals tested positive by inspectors in the slaughter establishments. The microbial inhibition bioassay has several limitations in terms of monitoring scope, sensitivity, selectivity, and analysis time. Ultra-high performance liquid chromatography - tandem mass spectrometry (UHPLC-MS/MS) has many advantages over the bioassay for this application, and this study was designed to develop, evaluate, and validate a fast UHPLC-MS/MS method for antibiotics and other high-priority veterinary drugs in bovine kidney. Five existing multi-class, multi-residue methods from the literature were tested and compared, and each performed similarly. Experiments with incurred samples demonstrated that a 5-min shake of 2 g homogenized kidney with 10 ml of 4/1 (v/v) acetonitrile/water followed by simultaneous clean-up of the initial extract with 0.5 g C18 and 10 ml hexane gave a fast, simple, and effective sample preparation method for the <10 min UHPLC-MS/MS analysis. An extensive 5-day validation process demonstrated that the final method could be used to acceptably screen for 54 of the 62 drugs tested, and 50 of those met qualitative MS identification criteria. Quantification was not needed in the application, but the method gave ≥ 70% recoveries and ≤ 25% reproducibilities for 30 of the drugs. Published 2012. This article is a U.S. Government work and is in the public domain of the USA.
Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components
NASA Astrophysics Data System (ADS)
Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.
Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function if the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of the resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of the original OEM specification. Testing every possible combination of component, level of potential damage, and repair processing option would be an expensive and time-consuming feat, thus prohibiting a broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation effort, in which simulation results were compared with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and optimal time to remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) remanufactured component performance.
McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy
2013-01-01
Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools are usually evaluated against traditional validity categories with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory approaches. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work based performance of speech pathology students, that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
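For context, Rasch analyses of the kind described rest on the dichotomous Rasch model, in which the probability of a positive rating depends only on the difference between person ability and item difficulty (rating-scale and partial-credit extensions, often used for workplace performance assessments, add threshold parameters); the general form is:

```latex
\[
P(X_{ni} = 1 \mid \theta_n, \delta_i) \;=\; \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
\]
```

where θ_n is the ability of person n and δ_i is the difficulty of item i.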
Revalidation of the NASA Ames 11-by 11-Foot Transonic Wind Tunnel with a Commercial Airplane Model
NASA Technical Reports Server (NTRS)
Kmak, Frank J.; Hudgins, M.; Hergert, D.; George, Michael W. (Technical Monitor)
2001-01-01
The 11-By 11-Foot Transonic leg of the Unitary Plan Wind Tunnel (UPWT) was modernized to improve tunnel performance, capability, productivity, and reliability. Wind tunnel tests to demonstrate the readiness of the tunnel for a return to production operations included an Integrated Systems Test (IST), calibration tests, and airplane validation tests. One of the two validation tests was a 0.037-scale Boeing 777 model that was previously tested in the 11-By 11-Foot tunnel in 1991. The objective of the validation tests was to compare pre-modernization and post-modernization results from the same airplane model in order to substantiate the operational readiness of the facility. Evaluations of within-test, test-to-test, and tunnel-to-tunnel data repeatability were made to study the effects of the tunnel modifications. Tunnel productivity was also evaluated to determine the readiness of the facility for production operations. The operation of the facility, including model installation, tunnel operations, and the performance of tunnel systems, was observed and facility deficiency findings generated. The data repeatability studies and tunnel-to-tunnel comparisons demonstrated outstanding data repeatability and a high overall level of data quality. Despite some operational and facility problems, the validation test was successful in demonstrating the readiness of the facility to perform production airplane wind tunnel tests.
S, Vijay Kumar; Dhiman, Vinay; Giri, Kalpesh Kumar; Sharma, Kuldeep; Zainuddin, Mohd; Mullangi, Ramesh
2015-09-01
A novel, simple, specific, sensitive and reproducible high-performance liquid chromatography (HPLC) assay method has been developed and validated for the estimation of tofacitinib in rat plasma. The bioanalytical procedure involves extraction of tofacitinib and itraconazole (internal standard, IS) from rat plasma with a simple liquid-liquid extraction process. The chromatographic analysis was performed on a Waters Alliance system using gradient mobile phase conditions at a flow rate of 1.0 mL/min and a C18 column maintained at 40 ± 1 °C. The eluate was monitored using a UV detector set at 287 nm. Tofacitinib and IS eluted at 6.5 and 8.3 min, respectively, and the total run time was 10 min. Method validation was performed as per US Food and Drug Administration guidelines and the results met the acceptance criteria. The calibration curve was linear over a concentration range of 182-5035 ng/mL (r² = 0.995). The intra- and inter-day precisions were in the range of 1.41-11.2% and 3.66-8.81%, respectively, in rat plasma. The validated HPLC method was successfully applied to a pharmacokinetic study in rats. Copyright © 2015 John Wiley & Sons, Ltd.
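Calibration linearity of this kind is assessed by fitting a straight line to the standards and checking the coefficient of determination; a minimal sketch with synthetic peak-response data (not the reported calibration points) follows.

```python
import numpy as np

# illustrative calibration standards (ng/mL) and peak responses; not the study data
conc = np.array([182, 500, 1000, 2000, 3000, 4000, 5035], dtype=float)
ratio = 0.002 * conc + np.random.default_rng(5).normal(0, 0.3, conc.size)

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - ((ratio - pred) ** 2).sum() / ((ratio - ratio.mean()) ** 2).sum()
print(f"y = {slope:.4f}x + {intercept:.3f},  r^2 = {r2:.3f}")
```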
Reliability and criterion-related validity of a new repeated agility test
Makni, E; Jemni, M; Elloumi, M; Chamari, K; Nabli, MA; Padulo, J; Moalla, W
2016-01-01
The study aimed to assess the reliability and the criterion-related validity of a new repeated sprint T-test (RSTT) that includes intense multidirectional intermittent efforts. The RSTT consisted of 7 maximal repeated executions of the agility T-test with 25 s of passive recovery rest in between. Forty-five team sports players performed two RSTTs separated by 3 days to assess the reliability of best time (BT) and total time (TT) of the RSTT. The intra-class correlation coefficient analysis revealed a high relative reliability between test and retest for BT and TT (>0.90). The standard error of measurement (<0.50) showed that the RSTT has a good absolute reliability. The minimal detectable change values for BT and TT related to the RSTT were 0.09 s and 0.58 s, respectively. To check the criterion-related validity of the RSTT, players performed a repeated linear sprint (RLS) and a repeated sprint with changes of direction (RSCD). Significant correlations between the BT and TT of the RLS, RSCD and RSTT were observed (p<0.001). The RSTT is, therefore, a reliable and valid measure of the intermittent repeated sprint agility performance. As this ability is required in all team sports, it is suggested that team sports coaches, fitness coaches and sports scientists consider this test in their training follow-up. PMID:27274109
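The absolute-reliability quantities cited above (standard error of measurement and minimal detectable change) are conventionally derived from the ICC and the between-subject standard deviation; the abstract does not spell out its formulas, but the usual definitions are:

```latex
\[
\mathrm{SEM} = SD \sqrt{1 - \mathrm{ICC}}, \qquad
\mathrm{MDC}_{95} = 1.96 \,\sqrt{2}\, \mathrm{SEM}
\]
```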
Benedict, Ralph HB; DeLuca, John; Phillips, Glenn; LaRocca, Nicholas; Hudson, Lynn D; Rudick, Richard
2017-01-01
Cognitive and motor performance measures are commonly employed in multiple sclerosis (MS) research, particularly when the purpose is to determine the efficacy of treatment. The increasing focus of new therapies on slowing progression or reversing neurological disability makes the utilization of sensitive, reproducible, and valid measures essential. Processing speed is a basic elemental cognitive function that likely influences downstream processes such as memory. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) includes representatives from advocacy organizations, Food and Drug Administration (FDA), European Medicines Agency (EMA), National Institute of Neurological Disorders and Stroke (NINDS), academic institutions, and industry partners along with persons living with MS. Among the MSOAC goals is acceptance and qualification by regulators of performance outcomes that are highly reliable and valid, practical, cost-effective, and meaningful to persons with MS. A critical step for these neuroperformance metrics is elucidation of clinically relevant benchmarks, well-defined degrees of disability, and gradients of change that are deemed clinically meaningful. This topical review provides an overview of research on one particular cognitive measure, the Symbol Digit Modalities Test (SDMT), recognized as being particularly sensitive to slowed processing of information that is commonly seen in MS. The research in MS clearly supports the reliability and validity of this test and recently has supported a responder definition of SDMT change approximating 4 points or 10% in magnitude. PMID:28206827
Hofman, Jelle; Samson, Roeland
2014-09-01
Biomagnetic monitoring of tree-leaf-deposited particles has proven to be a good indicator of the ambient particulate concentration. The objective of this study is to apply this method to validate a local-scale air quality model (ENVI-met), using 96 tree crown sampling locations in a typical urban street canyon. To the best of our knowledge, this is the first application of biomagnetic monitoring to the validation of pollutant dispersion modeling. Quantitative ENVI-met validation showed significant correlations between modeled and measured results throughout the entire in-leaf period. ENVI-met performed much better in the first half of the street canyon, close to the ring road (r=0.58-0.79, RMSE=44-49%), than in the second part (r=0.58-0.64, RMSE=74-102%). The spatial model behavior was evaluated by testing the effects of height, azimuthal position, tree position and distance from the main pollution source on the obtained model results and magnetic measurements. Our results demonstrate that biomagnetic monitoring seems to be a valuable method to evaluate the performance of air quality models. Due to the high spatial and temporal resolution of this technique, biomagnetic monitoring can be applied anywhere in the city (where urban green is present) to evaluate model performance at different spatial scales. Copyright © 2014 Elsevier Ltd. All rights reserved.
van Beers, Erik H; van Vliet, Martin H; Kuiper, Rowan; de Best, Leonie; Anderson, Kenneth C; Chari, Ajai; Jagannath, Sundar; Jakubowiak, Andrzej; Kumar, Shaji K; Levy, Joan B; Auclair, Daniel; Lonial, Sagar; Reece, Donna; Richardson, Paul; Siegel, David S; Stewart, A Keith; Trudel, Suzanne; Vij, Ravi; Zimmerman, Todd M; Fonseca, Rafael
2017-09-01
High risk and low risk multiple myeloma patients follow very different clinical courses, as reflected in their PFS and OS. To be clinically useful, methodologies used to identify high and low risk disease must be validated in representative independent clinical data and be available so that patients can be managed appropriately. A recent analysis has indicated that SKY92 combined with the International Staging System (ISS) identifies patients with different risk disease with high sensitivity. Here we computed the performance of eight gene expression-based classifiers (SKY92, UAMS70, UAMS80, IFM15, Proliferation Index, Centrosome Index, Cancer Testis Antigen and HM19), as well as the combination of SKY92/ISS, in an independent cohort of 91 newly diagnosed MM patients. The classifiers identified between 9% and 21% of patients as high risk, with hazard ratios (HRs) between 1.9 and 8.2. Among the eight signatures, SKY92 identified the largest proportion of patients (21%), also with the highest HR (8.2). Our analysis also validated the combination SKY92/ISS for identification of three classes: low risk (42%), intermediate risk (37%) and high risk (21%). Between the low-risk and high-risk classes the HR is >10. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
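A moving average of a per-symbol decoder metric is at the core of the validation idea described here; the sketch below shows one possible form, with a synthetic metric stream and a hypothetical staleness threshold rather than anything taken from the Letter.

```python
import numpy as np

def moving_average(x, window=256):
    """Running mean of a per-symbol decoder metric (e.g. ML decision distance)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# illustrative decoder metric stream with a mid-stream channel change (synthetic)
rng = np.random.default_rng(6)
metric = np.concatenate([rng.normal(1.0, 0.1, 5000),   # CSI still valid
                         rng.normal(1.6, 0.1, 5000)])  # channel has drifted
avg = moving_average(metric)
stale = avg > 1.3          # hypothetical threshold flagging stale CSI
print("first sample flagged:", int(np.argmax(stale)) if stale.any() else None)
```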
NASA Astrophysics Data System (ADS)
Zimmermann, Judith; von Davier, Alina A.; Buhmann, Joachim M.; Heinimann, Hans R.
2018-01-01
Graduate admission has become a critical process in tertiary education, whereby selecting valid admissions instruments is key. This study assessed the validity of Graduate Record Examination (GRE) General Test scores for admission to Master's programmes at a technical university in Europe. We investigated the indicative value of GRE scores for the Master's programme grade point average (GGPA) with and without the addition of the undergraduate GPA (UGPA) and the TOEFL score, and of GRE scores for study completion and Master's thesis performance. GRE scores explained 20% of the variation in the GGPA, while an additional 7% was explained by the TOEFL score and 3% by the UGPA. Contrary to common belief, the GRE quantitative reasoning score showed little explanatory power. GRE scores were also weakly related to study progress but not to thesis performance. Nevertheless, GRE and TOEFL scores were found to be sensible admissions instruments. Rigorous methodology was used to obtain highly reliable results.
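The variance-explained figures correspond to incremental R² from a hierarchical regression; a minimal sketch on synthetic admissions data (the coefficients and variables are invented for illustration) follows.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least-squares R^2 with an intercept column added."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

# synthetic GGPA, GRE, TOEFL and UGPA values -- illustrative only
rng = np.random.default_rng(7)
n = 300
gre = rng.normal(size=(n, 3))           # verbal, quantitative, analytical writing
toefl = rng.normal(size=(n, 1))
ugpa = rng.normal(size=(n, 1))
ggpa = (0.4 * gre[:, [0]] + 0.3 * toefl + 0.2 * ugpa + rng.normal(size=(n, 1))).ravel()

r2_gre = r_squared(gre, ggpa)
r2_gre_toefl = r_squared(np.hstack([gre, toefl]), ggpa)
r2_full = r_squared(np.hstack([gre, toefl, ugpa]), ggpa)
print(f"GRE: {r2_gre:.2f}, +TOEFL: {r2_gre_toefl - r2_gre:.2f}, +UGPA: {r2_full - r2_gre_toefl:.2f}")
```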
Validity of the International Fitness Scale "IFIS" in older adults.
Merellano-Navarro, Eugenio; Collado-Mateo, Daniel; García-Rubio, Javier; Gusi, Narcís; Olivares, Pedro R
2017-09-01
To validate the "International Fitness Scale" (IFIS) in older adults. Firstly, cognitive interviews were performed to ensure that the questionnaire was comprehensive for older Chilean adults. After that, a transversal study of 401 institutionalized and non-institutionalized older adults from Maule region in Chile was conducted. A battery of validated fitness tests for this population was used in order to compare the responses obtained in the IFIS with the objectively measured fitness performance (back scratch, chair sit-and-reach, handgrip, 30-s chair stand, timed up-and-go and 6-min walking). Indicated that IFIS presented a high compliance in the comprehension of the items which defined it, and it was able of categorizing older adults according to their measured physical fitness levels. The analysis of covariance ANCOVA adjusted by sex and age showed a concordance between IFIS and the score in physical fitness tests. Based on the results of this study, IFIS questionnaire is a good alternative to assess physical fitness in older adults. Copyright © 2017 Elsevier Inc. All rights reserved.
Validation of the group nuclear safety climate questionnaire.
Navarro, M Felisa Latorre; Gracia Lerín, Francisco J; Tomás, Inés; Peiró Silla, José María
2013-09-01
Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. The ZGSC scale was used as a basis in this study with the researchers rewording almost half of the items on this scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated to organizational safety climate, formalized procedures, safety behavior, and time pressure. This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread this scale and measure group safety climate more effectively. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
Validation of the Gifted Rating Scales–School Form in China
Li, Huijun; Pfeiffer, Steven I.; Petscher, Yaacov; Kumtepe, Alper T.; Mo, Guofang
2015-01-01
The Gifted Rating Scales–School Form (GRS-S), a teacher-completed rating scale, is designed to identify five types of giftedness and motivation. This study examines the reliability and validity of a Chinese-translated version of the GRS-S with a sample of Chinese elementary and middle school students (N = 499). The Chinese GRS-S was found to have high internal consistency. Results of the confirmatory factor analysis corroborated the six-factor solution of the original GRS-S. Comparison of the GRS-S scores and measures of academic performance provides preliminary support for the criterion validity of the Chinese-translated GRS-S. Significant age and gender differences on the Chinese GRS-S were found. Results provide preliminary support for the Chinese version of the GRS-S as a reliable and valid measure of giftedness for Chinese students. PMID:26346730
Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar
2015-01-01
Objective: The present study aimed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness and accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both the HPLC and HPTLC methods, with concentration ranges of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The percent relative standard deviation values for method precision were 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy of the method was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and several formulations containing it as an ingredient. PMID:26681890
Identifying Wrist Fracture Patients with High Accuracy by Automatic Categorization of X-ray Reports
de Bruijn, Berry; Cranney, Ann; O’Donnell, Siobhan; Martin, Joel D.; Forster, Alan J.
2006-01-01
The authors performed this study to determine the accuracy of several text classification methods in categorizing wrist x-ray reports. We randomly sampled 751 textual wrist x-ray reports. Two expert reviewers rated the presence (n = 301) or absence (n = 450) of an acute fracture of the wrist. We developed two information retrieval (IR) text classification methods and a machine learning method using a support vector machine (TC-1). In cross-validation on the derivation set (n = 493), TC-1 outperformed the two IR-based methods and six benchmark classifiers, including Naive Bayes and a Neural Network. In the validation set (n = 258), TC-1 demonstrated consistent performance with 93.8% accuracy; 95.5% sensitivity; 92.9% specificity; and 87.5% positive predictive value. TC-1 was easy to implement and superior in performance to the other classification methods. PMID:16929046
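A modern equivalent of this kind of pipeline, support vector classification over report text evaluated by cross-validation, can be sketched as follows; the toy reports, labels and scikit-learn pipeline are illustrative assumptions, not the authors' TC-1 implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# toy reports and fracture labels -- placeholders, not the study corpus
reports = [
    "acute distal radius fracture with dorsal angulation",
    "transverse fracture through the distal radius",
    "no acute fracture or dislocation identified",
    "normal alignment, no acute bony abnormality",
] * 10
labels = [1, 1, 0, 0] * 10

# bag-of-words features feeding a linear support vector classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
scores = cross_val_score(clf, reports, labels, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```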
Computational design and experimental validation of new thermal barrier systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Shengmin
2015-03-31
The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on computational simulation method development. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.
Preliminary Report on Oak Ridge National Laboratory Testing of Drake/ACSS/MA2/E3X
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irminger, Philip; King, Daniel J.; Herron, Andrew N.
2016-01-01
A key to industry acceptance of a new technology is extensive validation in field trials. The Powerline Conductor Accelerated Test facility (PCAT) at Oak Ridge National Laboratory (ORNL) is specifically designed to evaluate the performance and reliability of a new conductor technology under real world conditions. The facility is set up to capture large amounts of data during testing. General Cable used the ORNL PCAT facility to validate the performance of TransPowr with E3X Technology, a standard overhead conductor with an inorganic, high-emissivity, low-absorptivity surface coating. Extensive testing has demonstrated a significant improvement in conductor performance across a wide range of operating temperatures, indicating that E3X Technology can provide a reduction in temperature, a reduction in sag, and an increase in ampacity when applied to the surface of any overhead conductor. This report provides initial results of that testing.
Validation of the solar heating and cooling high speed performance (HISPER) computer code
NASA Technical Reports Server (NTRS)
Wallace, D. B.
1980-01-01
Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and the solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
A new technique for measuring listening and reading literacy in developing countries
NASA Astrophysics Data System (ADS)
Greene, Barbara A.; Royer, James M.; Anzalone, Stephen
1990-03-01
One problem in evaluating educational interventions in developing countries is the absence of tests that adequately reflect the culture and curriculum. The Sentence Verification Technique is a new procedure for measuring reading and listening comprehension that allows for the development of tests based on materials indigenous to a given culture. The validity of using the Sentence Verification Technique to measure reading comprehension in Grenada was evaluated in the present study. The study involved 786 students at standards 3, 4 and 5. The tests for each standard consisted of passages that varied in difficulty. Students identified as high ability in all three standards performed better than those identified as low ability. All students performed better with easier passages. Additionally, students in higher standards performed better than students in lower standards on a given passage. These results supported the claim that the Sentence Verification Technique is a valid measure of reading comprehension in Grenada.
Mangueira, Suzana de Oliveira; Lopes, Marcos Venícios de Oliveira
2016-10-01
To evaluate the clinical validity indicators for the nursing diagnosis of dysfunctional family processes related to alcohol abuse. Alcoholism is a chronic disease that negatively affects family relationships. Studies on the nursing diagnosis of dysfunctional family processes are scarce in the literature. This diagnosis is currently composed of 115 defining characteristics, hindering their use in practice and highlighting the need for clinical validation. This was a diagnostic accuracy study. A sample of 110 alcoholics admitted to a reference centre for alcohol treatment was assessed during the second half of 2013 for the presence or absence of the defining characteristics of the diagnosis. Operational definitions were created for each defining characteristic based on concept analysis and experts evaluated the content of these definitions. Diagnostic accuracy measures were calculated from latent class models with random effects. All 89 clinical indicators were found in the sample and a set of 24 clinical indicators was identified as clinically valid for a diagnostic screening for family dysfunction from the report of alcoholics. Main clinical indicators with high specificity included sexual abuse, disturbance in academic performance in children and manipulation. The main indicators that showed high sensitivity values were distress, loss, anxiety, low self-esteem, confusion, embarrassment, insecurity, anger, loneliness, deterioration in family relationships and disturbance in family dynamics. Eighteen clinical indicators showed a high capacity for diagnostic screening for alcoholics (high sensitivity) and six indicators can be used for confirmatory diagnosis (high specificity). © 2016 John Wiley & Sons Ltd.
Antecedents and Consequences of Supplier Performance Evaluation Efficacy
2016-06-30
forming groups of high and low values. These tests are contingent on the reliable and valid measure of high and low rating inflation and high and... year)? Future research could deploy a SPM system as a test case on a limited set of transactions. Using a quasi-experimental design, comparisons... single source, common method bias must be of concern. Harmon's one-factor test showed that when latent-indicator items were forced onto a single
Simulation Assessment Validation Environment (SAVE). Software User’s Manual
2000-09-01
requirements and decisions are made. The integration is leveraging work from other DoD organizations so that high-end results are attainable much faster than... planning through the modeling and simulation data capture and visualization process. The planners can complete the manufacturing process plan with a high... technologies. This tool is also used to perform "high level" factory process simulation prior to full CAD model development and help define feasible
Glisson, Wesley J.; Conway, Courtney J.; Nadeau, Christopher P.; Borgmann, Kathi L.
2017-01-01
Understanding species–habitat relationships for endangered species is critical for their conservation. However, many studies have limited value for conservation because they fail to account for habitat associations at multiple spatial scales, anthropogenic variables, and imperfect detection. We addressed these three limitations by developing models for an endangered wetland bird, Yuma Ridgway's rail (Rallus obsoletus yumanensis), that examined how the spatial scale of environmental variables, inclusion of anthropogenic disturbance variables, and accounting for imperfect detection in validation data influenced model performance. These models identified associations between environmental variables and occupancy. We used bird survey and spatial environmental data at 2473 locations throughout the species' U.S. range to create and validate occupancy models and produce predictive maps of occupancy. We compared habitat-based models at three spatial scales (100, 224, and 500 m radii buffers) with and without anthropogenic disturbance variables using validation data adjusted for imperfect detection and an unadjusted validation dataset that ignored imperfect detection. The inclusion of anthropogenic disturbance variables improved the performance of habitat models at all three spatial scales, and the 224-m-scale model performed best. All models exhibited greater predictive ability when imperfect detection was incorporated into validation data. Yuma Ridgway's rail occupancy was negatively associated with ephemeral and slow-moving riverine features and high-intensity anthropogenic development, and positively associated with emergent vegetation, agriculture, and low-intensity development. Our modeling approach accounts for common limitations in modeling species–habitat relationships and creating predictive maps of occupancy probability and, therefore, provides a useful framework for other species.
Can virtual reality simulation be used for advanced bariatric surgical training?
Lewis, Trystan M; Aggarwal, Rajesh; Kwasnicki, Richard M; Rajaretnam, Niro; Moorthy, Krishna; Ahmed, Ahmed; Darzi, Ara
2012-06-01
Laparoscopic bariatric surgery is a safe and effective way of treating morbid obesity. However, the operations are technically challenging and training opportunities for junior surgeons are limited. This study aims to assess whether virtual reality (VR) simulation is an effective adjunct for training and assessment of laparoscopic bariatric technical skills. Twenty bariatric surgeons of varying experience (Five experienced, five intermediate, and ten novice) were recruited to perform a jejuno-jejunostomy on both cadaveric tissue and on the bariatric module of the Lapmentor VR simulator (Simbionix Corporation, Cleveland, OH). Surgical performance was assessed using validated global rating scales (GRS) and procedure specific video rating scales (PSRS). Subjects were also questioned about the appropriateness of VR as a training tool for surgeons. Construct validity of the VR bariatric module was demonstrated with a significant difference in performance between novice and experienced surgeons on the VR jejuno-jejunostomy module GRS (median 11-15.5; P = .017) and PSRS (median 11-13; P = .003). Content validity was demonstrated with surgeons describing the VR bariatric module as useful and appropriate for training (mean Likert score 4.45/7) and they would highly recommend VR simulation to others for bariatric training (mean Likert score 5/7). Face and concurrent validity were not established. This study shows that the bariatric module on a VR simulator demonstrates construct and content validity. VR simulation appears to be an effective method for training of advanced bariatric technical skills for surgeons at the start of their bariatric training. However, assessment of technical skills should still take place on cadaveric tissue. Copyright © 2012. Published by Mosby, Inc.
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content, for standard ML the correlation reduced to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied with a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program can not be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
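Ridge regression approximating RR-BLUP can be sketched as below; the SNP matrix, phenotypes and heritability-based shrinkage heuristic are synthetic assumptions for illustration, not the sugar beet data or the exact estimator used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# synthetic SNP matrix (lines x markers, coded 0/1/2) and test cross phenotypes
rng = np.random.default_rng(8)
X = rng.integers(0, 3, size=(310, 384)).astype(float)
effects = rng.normal(0, 0.1, 384)
y = X @ effects + rng.normal(0, 1.0, 310)          # sugar content surrogate

# shrinkage tied to a preliminary heritability estimate h2 (a common RR-BLUP heuristic)
h2 = 0.6
alpha = X.shape[1] * (1 - h2) / h2
model = Ridge(alpha=alpha)
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2 within cycle: {cv_r2.mean():.2f}")

# validation on the next cycle would instead fit on cycle-1 lines and
# correlate predictions with observed test cross performance of cycle-2 lines
```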
NASA Astrophysics Data System (ADS)
Bouaziz, Laurène; de Boer-Euser, Tanja; Brauer, Claudia; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; de Niel, Jan; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick
2016-04-01
International collaboration between institutes and universities is a promising way to reach consensus on hydrological model development. Education, experience and expert knowledge of the hydrological community have resulted in the development of a great variety of model concepts, calibration methods and analysis techniques. Although comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and by the comparison methods used, which focus on good overall performance instead of focusing on specific events. We propose an approach that focuses on the evaluation of specific events. Eight international research groups calibrated their model for the Ourthe catchment in Belgium (1607 km2) and carried out a validation in time for the Ourthe (i.e. for two different periods, one of them in a blind mode for the modellers) and a validation in space for nested and neighbouring catchments of the Meuse in a completely blind mode. For each model, the same protocol was followed and an ensemble of best performing parameter sets was selected. Signatures were first used to assess model performances in the different catchments during validation. Comparison of the models was then followed by evaluation of selected events, which include low flows, high flows and the transition from low to high flows. While the models show rather similar performances based on general metrics (i.e. Nash-Sutcliffe Efficiency), clear differences can be observed for specific events. While most models are able to simulate high flows well, large differences are observed during low flows and in the ability to capture the first peaks after drier months. The transferability of model parameters to neighbouring and nested catchments is assessed as an additional measure in the model evaluation. The suggested approach helps to select, among competing model alternatives, the most suitable model for a specific purpose.