In Defense of an Instrument-Based Approach to Validity
ERIC Educational Resources Information Center
Hood, S. Brian
2012-01-01
Paul E. Newton argues in favor of a conception of validity, viz., "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…
Assessing Procedural Competence: Validity Considerations.
Pugh, Debra M; Wood, Timothy J; Boulet, John R
2015-10-01
Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.
41 CFR 60-3.9 - No assumption of validity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...
41 CFR 60-3.9 - No assumption of validity.
Code of Federal Regulations, 2011 CFR
2011-07-01
... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...
Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language
ERIC Educational Resources Information Center
Romig, John Elwood; Therrien, William J.; Lloyd, John W.
2017-01-01
We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…
Koller, Ingrid; Levenson, Michael R.; Glück, Judith
2017-01-01
The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777
Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg
2018-03-30
The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test of procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. The internal consistency was assessed using Cronbach's alpha, and a composite score was calculated based on metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score across all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study gathered validity evidence for a simulation-based test of procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.
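A sketch of the two statistical steps named above, internal consistency via Cronbach's alpha and a pass/fail cutoff via the contrasting groups method, is given below in Python. The data shapes are placeholders, and the abstract does not specify the exact standard-setting computation used in the study, so this is an illustration under those assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; items is an (n_participants, n_metrics) matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def contrasting_groups_cutoff(novice: np.ndarray, expert: np.ndarray) -> float:
    """Cutoff where the two fitted normal score distributions have equal
    z-scores, one common reading of the contrasting groups method."""
    mu_n, sd_n = novice.mean(), novice.std(ddof=1)
    mu_e, sd_e = expert.mean(), expert.std(ddof=1)
    return (mu_n * sd_e + mu_e * sd_n) / (sd_n + sd_e)
```

Applied to composite scores like those reported above (novice mean 63.0, expert mean 76.1), the cutoff lands between the group means, and participants falling on the wrong side of it correspond to the reported false positives and false negatives.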
Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M
2014-12-01
Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competency scores (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based selection procedure. Further development of the instruments and research on predictive validity should be pursued.
Hubert, Ph; Nguyen-Huu, J-J; Boulanger, B; Chapuzet, E; Chiap, P; Cohen, N; Compagnon, P-A; Dewé, W; Feinberg, M; Lallier, M; Laurentie, M; Mercier, N; Muzard, G; Nivet, C; Valat, L
2004-11-15
This paper is the first part of a summary report of a new commission of the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP). The main objective of this commission was the harmonization of approaches for the validation of quantitative analytical procedures. Indeed, the principle of validating these procedures is today widespread in all domains of activity where measurements are made. Nevertheless, the simple question of whether or not an analytical procedure is acceptable for a given application remains incompletely resolved in several cases, despite the various regulations relating to good practices (GLP, GMP, ...) and other documents of normative character (ISO, ICH, FDA, ...). There are many official documents describing the validation criteria to be tested, but they do not propose any experimental protocol and most often limit themselves to general concepts. For those reasons, two previous SFSTP commissions elaborated validation guides to concretely help industrial scientists in charge of drug development apply those regulatory recommendations. While these first two guides contributed widely to the use and progress of analytical validation, they nevertheless present weaknesses regarding the conclusions of the performed statistical tests and the decisions to be made with respect to the acceptance limits defined by the use of an analytical procedure. The present paper proposes to revisit the very bases of analytical validation in order to develop a harmonized approach, notably by distinguishing diagnosis rules from decision rules. The decision rule is based on the use of the accuracy profile, uses the notion of total error, and simplifies the validation of an analytical procedure while controlling the risk associated with its use. Thanks to this novel validation approach, it is possible to unambiguously demonstrate the fitness for purpose of a new method as stated in all regulatory documents.
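The accuracy profile named above can be illustrated with a short computation: at each concentration level, a tolerance interval for the total relative error (bias plus random error) is compared against predefined acceptance limits. The Python sketch below is a simplified single-series approximation, not the SFSTP commission's full variance-component computation, and the 15% acceptance limits are an assumption for illustration.

```python
import numpy as np
from scipy import stats

def accuracy_profile_level(measured, nominal, beta=0.95, limits=(-15.0, 15.0)):
    """Relative bias, beta-expectation tolerance interval (%), and a
    pass/fail flag against the acceptance limits, at one level."""
    rel_err = 100.0 * (np.asarray(measured) - nominal) / nominal  # total error, %
    n = len(rel_err)
    mean, sd = rel_err.mean(), rel_err.std(ddof=1)
    k = stats.t.ppf((1 + beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
    lo, hi = mean - k * sd, mean + k * sd
    return mean, (lo, hi), (limits[0] <= lo and hi <= limits[1])
```

The procedure is declared fit for purpose over the range where the tolerance interval stays inside the acceptance limits, which is what makes this a decision rule rather than a diagnosis rule.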
USDA-ARS's Scientific Manuscript database
Background: In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity and validated anth...
Validation of biomarkers of food intake-critical assessment of candidate biomarkers.
Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G
2018-01-01
Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance to dietary guidelines or to a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed including eight criteria: plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.
48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.
Code of Federal Regulations, 2013 CFR
2013-10-01
... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...
48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...
Face and construct validity of a computer-based virtual reality simulator for ERCP.
Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V
2010-02-01
Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment... Preparing validation study samples..., AND USE PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to...
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment... Preparing validation study samples..., AND USE PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to...
van Steenbergen, Liza N; Spooren, Anneke; van Rooden, Stephanie M; van Oosterhout, Frank J; Morrenhof, Jan W; Nelissen, Rob G H H
2015-01-01
Background and purpose: A complete and correct national arthroplasty register is indispensable for the quality of arthroplasty outcome studies. We evaluated the coverage, completeness, and validity of the Dutch Arthroplasty Register (LROI) for hip and knee arthroplasty. Patients and methods: The LROI is a nationwide population-based registry with information on joint arthroplasties in the Netherlands. Completeness of entered procedures was validated in 2 ways: (1) by comparison with the number of reimbursements for arthroplasty surgeries (Vektis database), and (2) by comparison with data from hospital information systems (HISs). The validity was examined by conducting checks on missing or incorrectly coded values in the LROI. Results: The LROI contains over 300,000 hip and knee arthroplasties performed since 2007. Coverage of all Dutch hospitals (n = 100) was reached in 2012. Completeness of registered procedures was 98% for hip arthroplasty and 96% for knee arthroplasty in 2012, based on Vektis data. Based on comparison with data from the HIS, completeness of registered procedures was 97% for primary total hip arthroplasty and 96% for primary knee arthroplasty in 2013. Completeness of revision arthroplasty was 88% for hips and 90% for knees in 2013. The proportion of missing or incorrectly coded values of variables was generally less than 0.5%, except for encrypted personal identity numbers (17% of which were missing) and ASA scores (10% of which were missing). Interpretation: The LROI now contains over 300,000 hip and knee arthroplasty procedures, with coverage of all hospitals. It has a good level of completeness (i.e. more than 95% for primary hip and knee arthroplasty procedures in 2012 and 2013) and the database has high validity. PMID:25758646
Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. The objective was to establish new benchmark data on IHC laboratory practices. A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.
A Proposal on the Validation Model of Equivalence between PBLT and CBLT
ERIC Educational Resources Information Center
Chen, Huilin
2014-01-01
The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…
Jørgensen, Line Dahl; Willadsen, Elisabeth
2017-01-01
The purpose of this study was to develop and validate a clinically useful speech-language screening procedure for young children with cleft palate ± cleft lip (CP) to identify those in need of speech-language intervention. Twenty-two children with CP were assigned to +/- need-for-intervention conditions based on assessment of consonant inventory using a real-time listening procedure in combination with parent-reported expressive vocabulary. These measures allowed evaluation of early speech-language skills found to correlate significantly with later speech-language performance in longitudinal studies of children with CP. The external validity of this screening procedure was evaluated by comparing the +/- need-for-intervention assignment determined by the screening procedure to experienced speech-language pathologists' (SLPs') clinical judgement of whether or not a child needed early intervention. The results of the real-time listening assessment showed good to excellent inter-rater agreement on different consonant inventory measures. Furthermore, the almost perfect agreement between the children selected for intervention with the screening procedure and the clinical judgement of experienced SLPs indicates that the screening procedure is a valid way of identifying children with CP who need early intervention.
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
Reliable and valid assessment of Lichtenstein hernia repair skills.
Carlsen, C G; Lindorff-Larsen, K; Funch-Jensen, P; Lund, L; Charles, P; Konge, L
2014-08-01
Lichtenstein hernia repair is a common surgical procedure and one of the first procedures performed by a surgical trainee. However, formal assessment tools developed for this procedure are few and sparsely validated. The aim of this study was to determine the reliability and validity of an assessment tool designed to measure surgical skills in Lichtenstein hernia repair. Key issues were identified through a focus group interview. On this basis, an assessment tool with eight items was designed. Ten surgeons and surgical trainees were video recorded while performing Lichtenstein hernia repair (four experts, three intermediates, and three novices). The videos were blindly and individually assessed by three raters (surgical consultants) using the assessment tool. Based on these assessments, validity and reliability were explored. The internal consistency of the items was high (Cronbach's alpha = 0.97). The inter-rater reliability was very good with an intra-class correlation coefficient (ICC) = 0.93. Generalizability analysis showed a coefficient above 0.8 even with one rater. The coefficient improved to 0.92 if three raters were used. One-way analysis of variance found a significant difference between the three groups which indicates construct validity, p < 0.001. Lichtenstein hernia repair skills can be assessed blindly by a single rater in a reliable and valid fashion with the new procedure-specific assessment tool. We recommend this tool for future assessment of trainees performing Lichtenstein hernia repair to ensure that the objectives of competency-based surgical training are met.
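Inter-rater reliability figures like the ICC of 0.93 reported above come from a subjects-by-raters score matrix. A minimal Python sketch follows, assuming a two-way random-effects, absolute-agreement, single-rater model (ICC(2,1)); the study may have used a different ICC variant.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) from an (n_subjects, n_raters) matrix (Shrout & Fleiss)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_e = ((ratings - grand) ** 2).sum() - ms_r * (n - 1) - ms_c * (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

For the design described above, `ratings` would be a 10 x 3 matrix (ten videos, three raters).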
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, A H; Kerr, L A; Cailliet, G M
2007-11-04
Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (¹⁴C) dating. Age estimate accuracy and the validity of age estimation procedures were validated based on the results from each technique. Lead-radium dating proved successful in determining a minimum lifespan estimate of 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ¹⁴C data, which indicated a minimum lifespan estimate of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.
Technical skills assessment toolbox: a review using the unitary framework of validity.
Ghaderi, Iman; Manji, Farouq; Park, Yoon Soo; Juul, Dorthea; Ott, Michael; Harris, Ilene; Farrell, Timothy M
2015-02-01
The purpose of this study was to create a technical skills assessment toolbox for 35 basic and advanced skills/procedures that comprise the American College of Surgeons (ACS)/Association of Program Directors in Surgery (APDS) surgical skills curriculum and to provide a critical appraisal of the included tools, using the contemporary framework of validity. Competency-based training has become the predominant model in surgical education and assessment of performance is an essential component. Assessment methods must produce valid results to accurately determine the level of competency. A search was performed, using PubMed and Google Scholar, to identify tools that have been developed for assessment of the targeted technical skills. A total of 23 assessment tools for the 35 ACS/APDS skills modules were identified. Some tools, such as the Objective Structured Assessment of Technical Skill (OSATS) and the Operative Performance Rating System (OPRS), have been tested for more than 1 procedure. Therefore, 30 modules had at least 1 assessment tool, with some common surgical procedures being addressed by several tools. Five modules had none. Only 3 studies used Messick's framework to design their validity studies. The remaining studies used an outdated framework on the basis of "types of validity." When analyzed using the contemporary framework, few of these studies demonstrated validity for content, internal structure, and relationship to other variables. This study provides an assessment toolbox for common surgical skills/procedures. Our review shows that few authors have used the contemporary unitary concept of validity for development of their assessment tools. As we progress toward competency-based training, future studies should provide evidence for various sources of validity using the contemporary framework.
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon
2017-06-01
The aim of this study was to develop a simulation-based time-out learning programme targeted to nurses participating in high-risk invasive procedures and to evaluate the effects of the new programme on nurses' acceptance. This study used a simulation-based learning pre- and postdesign to evaluate the effects of implementing the programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were evaluated using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated through a content validation index, and inter-evaluator reliability was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the time-out for high-risk invasive procedures (P < .01). Further, improvement was consistent regardless of clinical experience, workplace, or prior experience with simulation-based learning. The face validity of the programme scored over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room.
Validation of biological activity testing procedure of recombinant human interleukin-7.
Lutsenko, T N; Kovalenko, M V; Galkin, O Yu
2017-01-01
A validation procedure for the method of monitoring the biological activity of recombinant human interleukin-7 has been developed and conducted according to the requirements of national and international recommendations. The method is based on the ability of recombinant human interleukin-7 to induce proliferation of T lymphocytes. It has been shown that either peripheral blood mononuclear cells (PBMCs) derived from blood or cell lines can be used to control the biological activity of recombinant human interleukin-7. The validation characteristics that should be determined depend on the method, the type of product or object of the test/measurement, and the biological test systems used in the research. The validation procedure for the method of controlling the biological activity of recombinant human interleukin-7 in peripheral blood mononuclear cells showed satisfactory results on all parameters tested: specificity, accuracy, precision, and linearity.
Validation of a Video-based Game-Understanding Test Procedure in Badminton.
ERIC Educational Resources Information Center
Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko
2000-01-01
Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…
41 CFR 60-3.9 - No assumption of validity.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 41 Public Contracts and Property Management... No assumption of validity. 60-3.9 Section 60-3.9 Public Contracts and Property Management Other Provisions Relating to Public... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates to better inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the recent two versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in the Midwest. The summer total bias is largely coming from the hit bias in the Central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
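The two-step procedure has a compact computational form: classify each estimate/reference pair as hit, false, or miss, then fit the hit pairs with a multiplicative model. The Python sketch below assumes a rain/no-rain threshold and a log-space least-squares fit; the published analysis may differ in both choices.

```python
import numpy as np

def decompose_total_error(est, ref, thresh=0.1):
    """Split total retrieval error into hit error, false precipitation,
    and missed precipitation (three independent components)."""
    hit = (est >= thresh) & (ref >= thresh)
    false_p = (est >= thresh) & (ref < thresh)
    missed = (est < thresh) & (ref >= thresh)
    return (est - ref)[hit].sum(), est[false_p].sum(), -ref[missed].sum()

def fit_multiplicative(est, ref):
    """Hit-error model est = alpha * ref**beta * eps; taking logs gives a
    linear fit, separating systematic (alpha, beta) from random (sigma) error."""
    x, y = np.log(ref), np.log(est)
    beta, log_alpha = np.polyfit(x, y, 1)
    sigma = (y - (log_alpha + beta * x)).std(ddof=2)
    return np.exp(log_alpha), beta, sigma
```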
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
Frickmann, H; Bachert, S; Warnke, P; Podbielski, A
2018-03-01
Preanalytic aspects can make results of hygiene studies difficult to compare. Efficacy of surface disinfection was assessed with an evaluated swabbing procedure. A validated microbial screening of surfaces was performed in the patients' environment and from hands of healthcare workers on two intensive care units (ICUs) prior to and after a standardized disinfection procedure. From a pure culture, the recovery rate of the swabs for Staphylococcus aureus was 35%-64%, and it dropped to 0%-22% from a mixed culture with 10 times more Staphylococcus epidermidis than S. aureus. Microbial surface loads 30 min before and after the cleaning procedures were indistinguishable. The quality-ensured screening procedure proved that adequate hygiene procedures are associated with a low overall colonization of surfaces and skin of healthcare workers. Unchanged microbial loads before and after surface disinfection demonstrated the low additional impact of this procedure in the endemic situation, when the pathogen load prior to surface disinfection is already low. Based on a validated screening system ensuring the interpretability and reliability of the results, the study confirms the efficiency of combined hand and surface hygiene procedures to guarantee low rates of bacterial colonization.
A design procedure and handling quality criteria for lateral directional flight control systems
NASA Technical Reports Server (NTRS)
Stein, G.; Henke, A. H.
1972-01-01
A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
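Quadratic optimal control designs of this kind reduce to solving an algebraic Riccati equation for a state-feedback gain, with the handling-quality objectives encoded in the weighting matrices. A generic sketch (Python/SciPy) is shown below; the F4C-specific states, inputs, and cost functionals are not given in the abstract.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Gain K minimizing the integral of x'Qx + u'Ru subject to
    xdot = A x + B u; the control law is u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)  # K = R^-1 B' P
```

Handling-quality-oriented design then amounts to choosing Q and R so the closed-loop lateral-directional modes meet the flying-qualities criteria, which is what the formal pilot evaluation experiments validate.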
NASA Astrophysics Data System (ADS)
Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2015-03-01
Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface-based metrics, and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and the corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
Constant temperature hot wire anemometry data reduction procedure
NASA Technical Reports Server (NTRS)
Klopfer, G. H.
1974-01-01
The theory and data reduction procedure for constant temperature hot wire anemometry are presented. The procedure is valid for all Mach and Prandtl numbers, but limited to Reynolds numbers based on wire diameter between 0.1 and 300. The fluids are limited to gases which approximate ideal gas behavior. Losses due to radiation, free convection and conduction are included.
Design for validation: An approach to systems validation
NASA Technical Reports Server (NTRS)
Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)
1989-01-01
Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.
Podsakoff, Nathan P; Podsakoff, Philip M; Mackenzie, Scott B; Klinger, Ryan L
2013-01-01
Several researchers have persuasively argued that the most important evidence to consider when assessing construct validity is whether variations in the construct of interest cause corresponding variations in the measures of the focal construct. Unfortunately, the literature provides little practical guidance on how researchers can go about testing this. Therefore, the purpose of this article is to describe how researchers can use video techniques to test whether their scales measure what they purport to measure. First, we discuss how researchers can develop valid manipulations of the focal construct that they hope to measure. Next, we explain how to design a study to use this manipulation to test the validity of the scale. Finally, comparing and contrasting traditional and contemporary perspectives on validation, we discuss the advantages and limitations of video-based validation procedures.
Melchiors, Jacob; Henriksen, Mikael Johannes Vuokko; Dikkers, Frederik G; Gavilán, Javier; Noordzij, J Pieter; Fried, Marvin P; Novakovic, Daniel; Fagan, Johannes; Charabi, Birgitte W; Konge, Lars; von Buchwald, Christian
2018-05-01
Proper training and assessment of skill in flexible pharyngo-laryngoscopy are central in the education of otorhinolaryngologists. To facilitate an evidence-based approach to curriculum development in this field, a structured analysis of what constitutes flexible pharyngo-laryngoscopy is necessary. Our aim was to develop an assessment tool based on this analysis. We conducted an international Delphi study involving experts from twelve countries on five continents. Utilizing reiterative assessment, the panel defined the procedure and reached consensus (defined as 80% agreement) on the phrasing of an assessment tool. Fifty panelists completed the Delphi process. The median age of the panelists was 44 years (range 33-64 years). Median experience in otorhinolaryngology was 15 years (range 6-35 years). Twenty-five were specialized in laryngology, 16 were head and neck surgeons, and nine were general otorhinolaryngologists. An assessment tool was created consisting of twelve distinct items. Conclusion: The gathering of validity evidence for assessment of core procedural skills within otorhinolaryngology is central to the development of a competence-based education. The use of an international Delphi panel allows for the creation of an assessment tool that is widely applicable and valid. This work allows for an informed approach to technical skills training for flexible pharyngo-laryngoscopy and, as further validity evidence is gathered, will allow for a valid assessment of clinical performance within this important skillset.
A multivariate model and statistical method for validating tree grade lumber yield equations
Donald W. Seegrist
1975-01-01
Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.
A hydrochemical data base for the Hanford Site, Washington
DOE Office of Scientific and Technical Information (OSTI.GOV)
Early, T.O.; Mitchell, M.D.; Spice, G.D.
1986-05-01
This data package contains a revision of the Site Hydrochemical Data Base for water samples associated with the Basalt Waste Isolation Project (BWIP). In addition to the detailed chemical analyses, a summary description of the data base format, detailed descriptions of verification procedures used to check data entries, and detailed descriptions of validation procedures used to evaluate data quality are included. 32 refs., 21 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Hidayati, A.; Rahmi, A.; Yohandri; Ratnawulan
2018-04-01
The importance of teaching materials that match the characteristics of students was the main reason for developing the basic electronics I module with integrated character values based on the conceptual change teaching model. The module development in this research follows Plomp's development procedure, which includes preliminary research, a prototyping phase, and an assessment phase. In the first year of this research, the module was validated. Content validity is seen from the conformity of the module with development theory and with the demands of the learning model's characteristics. Construct validity is seen from the linkage and consistency of each module component with the characteristics of the learning model integrating character values, as obtained through validator assessment. The average validation score assigned by the validators falls into the very valid category. Based on the validator assessments, the basic electronics I module with integrated character values based on the conceptual change teaching model was then revised.
Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N
2016-06-01
There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate, and an almost competent trainee were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon test and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps with an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a model identification procedure for identifying an electro-thermal model of lithium-ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states of charge, so the resulting model can be used for automotive applications such as on-board estimation of state-of-charge and state-of-health. The model coefficients are identified using a multiple-step genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium iron phosphate battery.
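A sketch of the model structure described is given below in Python: a first-order RC equivalent circuit whose parameters are looked up as functions of state-of-charge, standing in for the paper's spline tables over SOC, temperature, and current direction. The single RC branch, sign convention (positive current = discharge), and units are simplifying assumptions.

```python
import numpy as np

def simulate_ecm(current, dt, soc0, capacity_ah, ocv, r0, r1, c1):
    """Terminal voltage of a first-order equivalent circuit model:
    v = OCV(soc) - i*R0(soc) - v1, with one RC branch (R1, C1).
    ocv, r0, r1, c1 are callables of SOC (spline stand-ins)."""
    soc, v1, v_out = soc0, 0.0, []
    for i in current:
        soc -= i * dt / (3600.0 * capacity_ah)   # coulomb counting
        tau = r1(soc) * c1(soc)
        a = np.exp(-dt / tau)                    # exact zero-order-hold update
        v1 = a * v1 + r1(soc) * (1.0 - a) * i
        v_out.append(ocv(soc) - r0(soc) * i - v1)
    return np.array(v_out)
```

Identification then reduces to choosing the spline knot values for R0, R1, C1, and OCV that minimize the error between simulated and measured terminal voltage, which is the large-scale problem the genetic algorithm addresses.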
Rönspies, Jelena; Schmidt, Alexander F; Melnikova, Anna; Krumova, Rosina; Zolfagari, Asadeh; Banse, Rainer
2015-07-01
The present study was conducted to validate an adaptation of the Implicit Relational Assessment Procedure (IRAP) as an indirect latency-based measure of sexual orientation. Furthermore, reliability and criterion validity of the IRAP were compared to two established indirect measures of sexual orientation: a Choice Reaction Time task (CRT) and a Viewing Time (VT) task. A sample of 87 heterosexual and 35 gay men completed all three indirect measures in an online study. The IRAP and the VT predicted sexual orientation nearly perfectly. Both measures also showed a considerable amount of convergent validity. Reliabilities (internal consistencies) reached satisfactory levels. In contrast, the CRT did not tap into sexual orientation in the present study. In sum, the VT measure performed best, with the IRAP showing only slightly lower reliability and criterion validity, whereas the CRT did not yield any evidence of reliability or criterion validity in the present research. The results were discussed in the light of specific task properties of the indirect latency-based measures (task-relevance vs. task-irrelevance).
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August-September 2015, respectively. Empirically generated brief procedure code lists based on the International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings.
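The cumulative-coverage construction of the brief code lists can be expressed directly: rank codes by frequency and cut the ranked list at each coverage target. A minimal Python sketch follows; the input is the flat list of assigned ICD-9-CM codes, one entry per code assignment.

```python
from collections import Counter

def build_code_lists(assigned_codes, targets=(0.90, 0.95, 0.98)):
    """Shortest frequency-ranked prefix of codes reaching each
    cumulative coverage target."""
    ranked = Counter(assigned_codes).most_common()
    total = sum(n for _, n in ranked)
    lists, prefix, cum = {}, [], 0
    pending = list(targets)
    for code, n in ranked:
        cum += n
        prefix.append(code)
        while pending and cum / total >= pending[0]:
            lists[pending.pop(0)] = list(prefix)
    return lists
```

Per the abstract, this construction yielded lists of 111, 180, and 278 codes for the 90%, 95%, and 98% targets, respectively.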
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the likelihood of success, and testing was conducted in a wind tunnel facility. The data obtained allowed assessment of CFD turbulence models in a complex flow environment, within a building-block approach to validation in which cold, non-reacting test data were used first, followed by more complex reacting base flow validation.
ERIC Educational Resources Information Center
Wijnen-Meijer, M.; Van der Schaaf, M.; Booij, E.; Harendza, S.; Boscardin, C.; Wijngaarden, J. Van; Ten Cate, Th. J.
2013-01-01
There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks…
Development and Validation of the Meaning of Work Inventory among French Workers
ERIC Educational Resources Information Center
Arnoux-Nicolas, Caroline; Sovet, Laurent; Lhotellier, Lin; Bernaud, Jean-Luc
2017-01-01
The purpose of this study was to validate a psychometric instrument among French workers for assessing the meaning of work. Following an empirical framework, a two-step procedure consisted of exploring and then validating the scale among distinctive samples. The consequent Meaning of Work Inventory is a 15-item scale based on a four-factor model,…
A verification library for multibody simulation software
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.
1989-01-01
A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on the RRC Robot arm and CASE backhoe validations and on a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument, and there is no knowledge of the true bias detection properties of applied MA procedures. We describe the use of bias detection curves for MA optimization and of MA validation charts for the validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. MA validation charts show the minimum, median, and maximum numbers of assay results required for MA bias detection for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA procedures and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for the optimization and validation of MA procedures.
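A minimal simulation of the bias detection curve idea might look like the following, assuming a Gaussian analyte (values loosely sodium-like), a simple-moving-average algorithm, and arbitrary control limits; the real optimization sweeps many MA procedures rather than one.

```python
# Sketch of a bias detection curve: inject a fixed bias into simulated results
# and record how many results pass before the MA crosses its control limits.
# Window, limits, and distribution parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def results_to_detection(bias, mean=140.0, sd=2.0, window=50,
                         lo=139.0, hi=141.0, n=10_000):
    x = rng.normal(mean, sd, n) + bias           # results after bias introduction
    ma = np.convolve(x, np.ones(window) / window, mode="valid")
    hits = np.nonzero((ma < lo) | (ma > hi))[0]
    return hits[0] + window if hits.size else np.inf

biases = np.arange(0.5, 4.1, 0.5)
curve = [np.median([results_to_detection(b) for _ in range(50)]) for b in biases]
print(dict(zip(biases.round(1), curve)))         # median results-to-detection per bias
```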
Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J
2017-02-01
The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validity was assessed for all 10 metrics across the three groups and between pairs of groups. Concurrent validity was assessed against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novice, 15 intermediate, and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector demonstrated construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]), and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed a strong correlation between both execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). Suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing. However, the method does not take into account the quality of the suture; future work will therefore focus on developing new methods combining motion analysis and qualitative outcome evaluation to provide trainees with a complete performance assessment.
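For concreteness, the two metrics that carried the validity evidence, execution time and instrument path length, reduce to simple computations over the tracked tip positions; the coordinates and timestamps below are invented.

```python
# Execution time and path length from a stream of tracked 3-D tip positions
# (positions and timestamps are made-up stand-ins for optical tracker output).
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.5, 0.2],
                      [1.5, 1.0, 0.4],
                      [2.0, 1.2, 0.5]])       # instrument tip positions in cm
timestamps = np.array([0.0, 0.4, 0.9, 1.3])   # seconds

path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
execution_time = timestamps[-1] - timestamps[0]
print(f"path length = {path_length:.2f} cm over {execution_time:.1f} s")
```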
NASA Technical Reports Server (NTRS)
Koppen, Sandra V.; Nguyen, Truong X.; Mielnik, John J.
2010-01-01
The NASA Langley Research Center's High Intensity Radiated Fields Laboratory has developed a capability based on the RTCA/DO-160F Section 20 guidelines for radiated electromagnetic susceptibility testing in reverberation chambers. Phase 1 of the test procedure utilizes mode-tuned stirrer techniques and E-field probe measurements to validate chamber uniformity, determine chamber loading effects, and define a radiated susceptibility test process. The test procedure is segmented into numbered operations that are largely software controlled. This document is intended as a laboratory test reference and includes diagrams of test setups and equipment lists, as well as test results and analysis. Phase 2 of development is discussed.
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
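The sketch below illustrates the general shape of this pipeline, using a random forest as the preliminary and final classifier, a probability threshold to define "marginal" majority samples, and a neighbor-based rule as the post-filter; all models, thresholds, and the filtering rule are stand-ins rather than the authors' choices.

```python
# Conceptual two-stage sketch: (1) drop majority-class samples near a preliminary
# decision boundary, (2) retrain on the cleaned set, (3) suppress isolated
# positive predictions. Thresholds and models are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=4000, weights=[0.9], random_state=1)

pre = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
p = pre.predict_proba(X)[:, 1]
keep = (y == 1) | (p < 0.3)        # remove "marginal" majority (negative) samples

clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X[keep], y[keep])
pred = clf.predict(X)

# Post-filter: suppress positives with no positive neighbor in the sequence
# (real PPI post-filtering would operate along the protein sequence).
filtered = pred.copy()
for i in range(len(pred)):
    if pred[i] == 1 and pred[max(i - 1, 0):i + 2].sum() <= 1:
        filtered[i] = 0
```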
Reliability and Validity of Curriculum-Based Informal Reading Inventories.
ERIC Educational Resources Information Center
Fuchs, Lynn; And Others
A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…
Dagostino, Concetta; De Gregori, Manuela; Gieger, Christian; Manz, Judith; Gudelj, Ivan; Lauc, Gordan; Divizia, Laura; Wang, Wei; Sim, Moira; Pemberton, Iain K; MacDougall, Jane; Williams, Frances; Van Zundert, Jan; Primorac, Dragan; Aulchenko, Yurii; Kapural, Leonardo; Allegri, Massimo
2017-01-01
Chronic low back pain (CLBP) is one of the most common medical conditions, ranking as the greatest contributor to global disability and accounting for enormous societal costs according to the Global Burden of Disease 2010 study. Large genetic and -omics studies provide a promising avenue for the screening, development and validation of biomarkers useful for personalized diagnosis and treatment (precision medicine). Multicentre studies are needed for such an effort, and a standardized and homogeneous approach is vital for recruitment of large numbers of participants among different centres (clinical and laboratories) to obtain robust and reproducible results. To date, no validated standard operating procedures (SOPs) for genetic/-omics studies in chronic pain have been developed. In this study, we validated an SOP model that will be used in the multicentre (5 centres) retrospective "PainOmics" study, funded by the European Community in the 7th Framework Programme, which aims to develop new biomarkers for CLBP through three different -omics approaches: genomics, glycomics and activomics. The SOPs describe the specific procedures for (1) blood collection, (2) sample processing and storage, (3) shipping details and (4) cross-check testing and validation before assays that all the centres involved in the study have to follow. Multivariate analysis revealed the absolute specificity and homogeneity of the samples collected by the five centres for all genetics, glycomics and activomics analyses. The SOPs used in our multicentre study have been validated. Hence, they could represent an innovative tool for the correct management and collection of reliable samples in other large-omics-based multicentre studies.
Describes procedures written based on the assumption that they will be performed by analysts who are formally trained in at least the basic principles of chemical analysis and in the use of the subject technology.
Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T
2013-05-01
The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
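A stripped-down version of the development/validation split and the discrimination check might look like this, with synthetic data and placeholder predictors standing in for the Central Cardiac Audit Database fields; the C-index for a binary outcome is computed here as the ROC AUC.

```python
# Hedged sketch: logistic regression for 30-day mortality on a 70/30 split,
# with discrimination summarized by the C-index (ROC AUC for a binary outcome).
# Features and coefficients are placeholders, not CCAD fields.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 6))              # stand-ins: procedure, diagnosis, age...
logit = -3.5 + X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1, 0.0])
y = rng.uniform(size=5000) < 1 / (1 + np.exp(-logit))   # ~low event rate

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
c_index = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation C-index: {c_index:.2f}")
```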
Pretty, Iain A; Maupomé, Gerardo
2004-04-01
Dentists are involved in diagnosing disease in every aspect of their clinical practice. A range of tests, systems, guides, and equipment, which can generally be referred to as diagnostic procedures, is available to aid in diagnostic decision making. In this era of evidence-based dentistry, and given the increasing demand for diagnostic accuracy and properly targeted health care, it is important to assess the value of such diagnostic procedures. Doing so allows dentists to give appropriate weight to the information these procedures supply, to purchase new equipment if it proves more reliable than existing equipment, or even to discard a commonly used procedure if it is shown to be unreliable. This article, the first in a 6-part series, defines several concepts used to express the usefulness of diagnostic procedures, including reliability and validity, and describes some of their operating characteristics (statistical measures of performance), in particular specificity and sensitivity. Subsequent articles in the series will discuss the value of diagnostic procedures used in daily dental practice and will compare today's most innovative procedures with established methods.
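As a worked example of the operating characteristics defined in this series, the function below computes sensitivity and specificity from the four cells of a 2x2 table; the counts are hypothetical.

```python
# Sensitivity and specificity from a 2x2 table of test result vs gold standard.
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 90 diseased (80 test-positive), 110 healthy (99 test-negative).
sens, spec = sensitivity_specificity(tp=80, fn=10, fp=11, tn=99)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.89, 0.90
```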
Verification and Validation of KBS with Neural Network Components
NASA Technical Reports Server (NTRS)
Wen, Wu; Callahan, John
1996-01-01
Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN-based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS, which depends on a rule database and a production engine, an ANN-based system mimics the decisions of an expert without explicitly formulating if-then rules. In fact, ANNs demonstrate their superiority when such if-then rules are hard for a human expert to generate. Verification of a traditional knowledge-based system is based on proof of the consistency and completeness of the rule knowledge base and the correctness of the production engine. These techniques, however, cannot be directly applied to ANN-based components. In this position paper, we propose a verification and validation procedure for KBS with ANN-based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specification using an ANN rule extraction algorithm.
Resampling procedures to identify important SNPs using a consensus approach.
Pardy, Christopher; Motyer, Allan; Wilson, Susan
2011-11-29
Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
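One way to realize the consensus idea is sketched below: random forests screen SNPs on each subsample, a cross-validated LASSO selects among the survivors, and SNPs chosen in most subsamples form the consensus set. The data, subsample counts, and vote threshold are illustrative assumptions.

```python
# Consensus SNP selection via repeated subsampling: random-forest screening
# followed by cross-validated LASSO; sizes and thresholds are illustrative.
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.binomial(2, 0.3, size=(400, 500)).astype(float)    # SNP dosage matrix
y = X[:, 0] * 0.8 + X[:, 1] * 0.5 + rng.normal(size=400)   # two causal SNPs

votes = Counter()
for _ in range(10):                                    # resampling loop
    idx = rng.choice(400, 300, replace=False)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[idx], y[idx])
    top = np.argsort(rf.feature_importances_)[-100:]   # keep ~100 important SNPs
    lasso = LassoCV(cv=5).fit(X[idx][:, top], y[idx])
    votes.update(top[np.abs(lasso.coef_) > 0])         # SNPs retained by LASSO

consensus = [snp for snp, v in votes.items() if v >= 7]
print(consensus)   # should recover SNPs 0 and 1 in this synthetic setup
```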
Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Richard J.
2003-01-01
Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
Farhan, Bilal; Soltani, Tandis; Do, Rebecca; Perez, Claudia; Choi, Hanul; Ghoniem, Gamal
2018-05-02
Endoscopic injection of urethral bulking agents is an office procedure used to treat stress urinary incontinence secondary to internal sphincteric deficiency. Validation studies are an important part of simulator evaluation and are considered an important step in establishing the effectiveness of simulation-based training. The endoscopic needle injection (ENI) simulator has not been formally validated, although it has been used widely at the University of California, Irvine. We aimed to assess the face, content, and construct validity of the UC Irvine ENI simulator. Dissected female porcine bladders were mounted in a modified Hysteroscopy Diagnostic Trainer. Using routine endoscopic equipment for this procedure with video monitoring, 6 urologists (expert group) and 6 urology trainees (novice group) completed urethral bulking agent injections on a total of 12 bladders using the ENI simulator. Face and content validity were assessed using a structured quantitative survey rating realism. Construct validity was assessed by comparing performance, procedure time, and occlusive (anatomical and functional) evaluations between the experts and novices. Trainees also completed a postprocedure feedback survey. Effective injections were evaluated by measuring the retrograde urethral opening pressure, visual cystoscopic coaptation, and postprocedure gross anatomic examination. All 12 participants felt the simulator was a good training tool and should be used as an essential part of urology training (face validity). The ENI simulator showed good face and content validity, with average scores of 3.9/5 for the experts and 3.8/5 for the novices. Content validity evaluation showed that most aspects of the simulator were adequately realistic (mean Likert scores 3.8-3.9/5). However, the bladder does not bleed and is sometimes thin. Experts significantly outperformed novices (p < .001) across all measures of performance, thereby establishing construct validity. The ENI simulator shows face, content, and construct validity, although a few aspects of the simulator were not very realistic (e.g., bleeding). This study provides a base for future formal validation of this simulator and for its continuing use in endourology training. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Specific test and evaluation plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hays, W.H.
1998-03-20
The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made to the 241-AX-B Valve Pit by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance against the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.
Computer-Based and Paper-Based Measurement of Recognition Performance.
ERIC Educational Resources Information Center
Federico, Pat-Anthony
To determine the relative reliabilities and validities of paper-based and computer-based measurement procedures, 83 male student pilots and radar intercept officers were administered computer and paper-based tests of aircraft recognition. The subject matter consisted of line drawings of front, side, and top silhouettes of aircraft. Reliabilities…
SATS HVO Concept Validation Experiment
NASA Technical Reports Server (NTRS)
Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine
2005-01-01
A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine whether a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload, and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approach and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as when flying today's procedures.
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
Is Earth-based scaling a valid procedure for calculating heat flows for Mars?
NASA Astrophysics Data System (ADS)
Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle
2013-09-01
Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows by scaling the present-day terrestrial heat flow to Mars, but the values so obtained are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and cast doubt on the validity of the results obtained by these works. Here, however, we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.
Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J
2017-10-01
Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a postmodule survey. This was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons
ERIC Educational Resources Information Center
Macmann, Gregg M.; Barnett, David W.
1994-01-01
Describes exploratory and confirmatory analyses of verbal-performance procedures to illustrate concepts and procedures for analysis of correlated factors. Argues that, based on convergent and discriminant validity criteria, factors should have higher correlations with variables that they purport to measure than with other variables. Discusses…
Reliability and validity of the Japanese version of the Organizational Justice Questionnaire.
Inoue, Akiomi; Kawakami, Norito; Tsutsumi, Akizumi; Shimazu, Akihito; Tsuchiya, Masao; Ishizaki, Masao; Tabata, Masaji; Akiyama, Miki; Kitazume, Akiko; Kuroda, Mitsuyo; Kivimäki, Mika
2009-01-01
Previous European studies reporting that low procedural justice and low interactional justice were associated with increased health problems used a modified version of Moorman's Organizational Justice Questionnaire (OJQ; Elovainio et al., 2002) to assess organizational justice. We translated the modified OJQ into Japanese and examined the internal consistency reliability and the factor-based and construct validity of this measure. A back-translation procedure confirmed that the translation was appropriate, pending a minor revision. A total of 185 men and 58 women at a manufacturing factory in Japan were surveyed using a mailed questionnaire including the OJQ and measures of other job stressors. Cronbach alpha coefficients of the two OJQ subscales were high (0.85-0.94) for both sexes. The hypothesized two factors (i.e., procedural justice and interactional justice) were extracted by the factor analysis for men; for women, procedural justice was further split into two separate dimensions, supporting a three- rather than two-factor structure. Convergent validity was supported by expected correlations of the OJQ with job control, supervisor support, effort-reward imbalance, and job future ambiguity, in particular among the men. The present study shows that the Japanese version of the OJQ has acceptable levels of reliability and validity, at least for male employees.
Robust estimation of the proportion of treatment effect explained by surrogate marker information.
Parast, Layla; McDermott, Mary M; Tian, Lu
2016-05-10
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
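A crude numerical illustration of the proportion-of-treatment-effect quantity (not the authors' estimator, which is nonparametric with perturbation-resampling variance estimates) is given below: the residual treatment effect is what remains after standardizing the treated outcome curve to the control surrogate distribution, here with simple binning in place of smoothing.

```python
# Toy proportion-of-treatment-effect (PTE) calculation on simulated data where
# the surrogate S carries most of the effect; binning stands in for smoothing.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
s0 = rng.normal(0.0, 1.0, n); y0 = s0 + rng.normal(0.0, 1.0, n)        # control
s1 = rng.normal(1.0, 1.0, n); y1 = s1 + 0.2 + rng.normal(0.0, 1.0, n)  # treated

total_effect = y1.mean() - y0.mean()                    # ~1.2 in this setup

# Residual effect: E[Y1 | S] averaged over the CONTROL surrogate distribution.
bins = np.quantile(np.r_[s0, s1], np.linspace(0, 1, 11))
residual = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (s1 >= lo) & (s1 < hi)
    w = ((s0 >= lo) & (s0 < hi)).mean()                 # control weight of the bin
    if m.any():
        residual += w * y1[m].mean()
residual -= y0.mean()

pte = 1 - residual / total_effect
print(f"estimated PTE ~ {pte:.2f}")   # true value here: 1 - 0.2/1.2 ~ 0.83
```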
2013-01-01
Background: The Parent-Infant Relationship Global Assessment Scale (PIR-GAS) signifies a conceptually relevant development in the multi-axial, developmentally sensitive classification system DC:0-3R for preschool children. However, information about the reliability and validity of the PIR-GAS is rare. A review of the available empirical studies suggests that in research, PIR-GAS ratings can be based on a ten-minute videotaped interaction sequence. The qualification of raters may be very heterogeneous across studies. Methods: To test whether the use of the PIR-GAS still allows for a reliable assessment of the parent-infant relationship, our study compared PIR-GAS ratings based on a full-information procedure across multiple settings with ratings based on a ten-minute video by two doctoral candidates of medicine. For each mother-child dyad at a family day hospital (N = 48), we obtained two video ratings and one full-information rating at admission to therapy and at discharge. This pre-post design allowed for a replication of our findings across the two measurement points. We focused on the inter-rater reliability between the video coders, as well as between the video and full-information procedures, including mean differences and correlations between the raters. Additionally, we examined aspects of the validity of video and full-information ratings based on their correlation with measures of child and maternal psychopathology. Results: Our results showed that ten-minute video and full-information PIR-GAS ratings were not interchangeable. Most results at admission could be replicated by the data obtained at discharge. We conclude that a higher degree of standardization of the assessment procedure should increase the reliability of the PIR-GAS, and a more thorough theoretical foundation of the manual should increase its validity. PMID:23705962
Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2016-01-01
Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel placed on the floor of a wind tunnel and subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correct for spatial non-uniformities and lamp brightness variation, and finally convert fluctuating light intensity to fluctuating pressure. The last step involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, in which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step, only a selected number of pixels were chosen as "virtual sensors" and a correlation-length-based buffet calculation procedure was applied to determine the "modeled" force fluctuations. By progressively decreasing the number of virtual sensors, it was observed that the present calculation procedure was able to closely estimate the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure that has been used in the development of many space vehicles.
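The spectra/coherence comparison used to validate the PSP pixels against the reference sensors can be mimicked with standard tools, as in the sketch below; the synthetic 200 Hz "shedding tone" and noise levels are invented stand-ins for the wind tunnel data.

```python
# Spectrum and coherence comparison between a reference pressure sensor and a
# nearby PSP pixel, on synthetic signals sharing a vortex-shedding tone.
import numpy as np
from scipy.signal import welch, coherence

fs = 10_000.0
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(6)
shedding = np.sin(2 * np.pi * 200 * t)                  # vortex-shedding tone
sensor = shedding + 0.5 * rng.normal(size=t.size)       # dynamic pressure sensor
psp = 0.9 * shedding + 0.7 * rng.normal(size=t.size)    # nearby PSP pixel

f, pxx = welch(sensor, fs=fs, nperseg=2048)
print(f[np.argmax(pxx)])        # spectrum peaks near the 200 Hz tone
f, cxy = coherence(sensor, psp, fs=fs, nperseg=2048)
print(f[np.argmax(cxy)])        # PSP pixel is most coherent with the sensor there
```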
Ten Cate, Olle
2015-08-01
Competency-based medical education stresses the attainment of competencies rather than the completion of fixed time in rotations. This sometimes leads to the interpretation that quantitative features of a program are of less importance, such as procedures practiced and weeks or months spent in clinical practice. An educational philosophy like "We don't require numbers of procedures completed but focus on competencies" suggests a dichotomy of either competency-based or time and procedures based education. The author argues that this dichotomy is not useful, and may even compromise education, as long as valid assessment of all relevant competencies is not possible or feasible. Requiring quantities of experiences of learners is not in contrast with competency-based education.
Project W-314 specific test and evaluation plan for AZ tank farm upgrades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hays, W.H.
1998-08-12
The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made by the addition of the SN-631 transfer line from the AZ-01A pit to the AZ-02A pit by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance against the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities, etc.), Factory Tests and Inspections (FTIs), installation tests and inspections, Construction Tests and Inspections (CTIs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). The STEP will be utilized in conjunction with the TEP for verification and validation.
Evaluation of dynamical models: dissipative synchronization and other techniques.
Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B
2006-12-01
Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: the Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams (which in turn is much greater than, say, that of correlation dimension) but comes at a much lower computational cost.
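A toy version of the dissipative-synchronization test, using the Hénon map named among the benchmarks, is sketched below: the candidate model is nudged toward the measured series by a coupling term, and a small steady-state synchronization error counts as evidence for the model. The coupling gain and parameter values are illustrative.

```python
# Dissipative synchronization as a validation test, illustrated on the Hénon map:
# a matched model synchronizes to the data far better than a mismatched one.
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    return 1 - a * x**2 + y, b * x

# "Measured" data from the true system (a = 1.4)
N, data = 2000, np.empty(2000)
x, y = 0.1, 0.0
for i in range(N):
    x, y = henon(x, y)
    data[i] = x

def sync_error(a_model, k=0.9):
    """Drive the model with the data via dissipative coupling; RMS error after transients."""
    xm, ym, err = 0.0, 0.0, []
    for d in data:
        xm, ym = henon(xm, ym, a=a_model)   # free model step
        err.append((d - xm) ** 2)           # mismatch before the corrective nudge
        xm += k * (d - xm)                  # dissipative coupling toward the data
    return np.sqrt(np.mean(err[500:]))

print(sync_error(1.4), sync_error(1.2))     # matched model synchronizes far better
```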
ERIC Educational Resources Information Center
Lievens, Filip; Sackett, Paul R.
2012-01-01
This study provides conceptual and empirical arguments why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural…
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
ERIC Educational Resources Information Center
Wang, Xihui; Zhang, Zhidong; Zhang, Xiuying; Hou, Dadong
2013-01-01
The Epistemic Beliefs Inventory (EBI), as a theory-based and empirically validated instrument, was originally developed and widely used in the North American context. Through a strict translation procedure the authors translated the EBI into Chinese, and then administered it to 451 students in 7 universities in mainland China. The construct…
ERIC Educational Resources Information Center
George-Ezzelle, Carol E.; Skaggs, Gary
2004-01-01
Current testing standards call for test developers to provide evidence that testing procedures and test scores, and the inferences made based on the test scores, show evidence of validity and are comparable across subpopulations (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on…
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2014 CFR
2014-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2012 CFR
2012-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2013 CFR
2013-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
URANS simulations of the tip-leakage cavitating flow with verification and validation procedures
NASA Astrophysics Data System (ADS)
Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin
2018-04-01
In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, verification and validation (V&V) procedures are applied. Errors of the numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that, though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when cavitation occurs, is close to the side wall.
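One of the simpler Richardson-extrapolation-based estimators can be written in a few lines: from solutions on three systematically refined grids one obtains an observed order of accuracy, an extrapolated value, and a grid-convergence-index style error band. The sample values and safety factor below are illustrative, not the paper's.

```python
# Richardson extrapolation from three grids: observed order of accuracy,
# extrapolated solution, and a GCI-style error band for the fine grid.
import math

def richardson(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """r is the grid refinement ratio; fs is a safety factor (assumed values)."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1)
    gci = fs * abs((f_fine - f_med) / f_fine) / (r**p - 1)
    return p, f_exact, gci

p, f_exact, gci = richardson(f_fine=0.971, f_med=0.962, f_coarse=0.934)
print(f"observed order p={p:.2f}, extrapolated={f_exact:.4f}, GCI={gci:.2%}")
```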
Duchez, Pascale; Rodriguez, Laura; Chevaleyre, Jean; De La Grange, Philippe Brunet; Ivanovic, Zoran
2016-12-01
Survival of ex vivo expanded hematopoietic stem cells (HSCs) and progenitor cells is low with the standard cryopreservation procedure. We recently showed that the efficiency of cryopreservation of these cells may be greatly enhanced by adding a serum-free, xeno-free culture medium (HP01, Macopharma), which improves the antioxidant and biochemical properties of the cryopreservation solution. Here we present the clinical-scale validation of this cryopreservation procedure. The hematopoietic cells expanded in clinical-scale cultures were cryopreserved using the new HP01-based procedure. The viability, apoptosis rate, and numbers of functional committed progenitors (methyl-cellulose colony-forming cell test), short-term repopulating HSCs (primary recipient NSG mice), and long-term HSCs (secondary recipient NSG mice) were tested before and after thawing. The efficiency of the clinical-scale procedure reproduced the efficiency of cryopreservation obtained earlier in miniature-sample experiments. Furthermore, full preservation of short- and long-term HSCs was obtained under clinical-scale conditions. Because the results obtained at clinical-scale volume are comparable to our earlier results in miniature-scale cultures, the clinical-scale procedure should be considered validated. It allows cryopreservation of the whole ex vivo expanded culture content, conserving full short- and long-term HSC activity. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Hanrahan, Kirsten; McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W Nick; Zimmerman, M Bridget; Ersig, Anne L
2012-10-01
This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children's responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, titled Children, Parents and Distraction, is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure.
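The modeling pipeline described above, automatic feature selection feeding a support vector machine regression scored by cross-validation, might be sketched as follows; the synthetic data and the choice of a univariate selector are assumptions, since the study's 255 items and selection algorithm are not reproduced here.

```python
# Feature selection (down to 44 items) feeding support vector regression,
# scored by cross-validation; data and selector choice are stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.normal(size=(255, 255))                  # 255 candidate items (synthetic)
y = X[:, :5] @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + rng.normal(size=255)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_regression, k=44),   # keep 44 predictive items
                      SVR(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.round(2))
```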
The development of a virtual reality training curriculum for colonoscopy.
Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara
2012-07-01
The development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed, device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants, comprising 30 novice (<10 colonoscopies), 10 intermediate (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists, were recruited. Surrogates of proficiency, such as the number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (between-group and within-group comparisons, respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics, including "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.
Khoury, Joseph D; Wang, Wei-Lien; Prieto, Victor G; Medeiros, L Jeffrey; Kalhor, Neda; Hameed, Meera; Broaddus, Russell; Hamilton, Stanley R
2018-02-01
Biomarkers that guide therapy selection are gaining unprecedented importance as targeted therapy options increase in scope and complexity. In conjunction with high-throughput molecular techniques, therapy-guiding biomarker assays based upon immunohistochemistry (IHC) have a critical role in cancer care in that they inform about the expression status of a protein target. Here, we describe the validation procedures for four clinical IHC biomarker assays (PTEN, RB, MLH1, and MSH2) for use as integral biomarkers in the nationwide NCI-Molecular Analysis for Therapy Choice (NCI-MATCH) EAY131 clinical trial. Validation procedures were developed through an iterative process based on collective experience and adaptation of broad guidelines from the FDA. The steps included primary antibody selection; assay optimization; development of assay interpretation criteria incorporating biological considerations; and expected staining patterns, including indeterminate results, orthogonal validation, and tissue validation. Following assay lockdown, patient samples and cell lines were used for analytic and clinical validation. The assays were then approved as laboratory-developed tests and used for clinical trial decisions for treatment selection. Calculations of sensitivity and specificity were undertaken using various definitions of gold-standard references, and external validation was required for the PTEN IHC assay. In conclusion, validation of IHC biomarker assays critical for guiding therapy in clinical trials is feasible using comprehensive preanalytic, analytic, and postanalytic steps. Implementation of standardized guidelines provides a useful framework for validating IHC biomarker assays that allow for reproducibility across institutions for routine clinical use. Clin Cancer Res; 24(3); 521-31. ©2017 American Association for Cancer Research.
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Lehotsky, Á; Szilágyi, L; Bánsághi, S; Szerémy, P; Wéber, G; Haidegger, T
2017-09-01
Ultraviolet spectrum markers are widely used for hand hygiene quality assessment, although their microbiological validation has not been established. A microbiology-based assessment of the procedure was therefore conducted. Twenty-five artificial hand models underwent initial full contamination followed by disinfection with UV-dyed hand-rub solution; digital imaging under UV light, microbiological sampling and cultivation, and digital imaging of the cultivated flora were then performed. Paired images of each hand model were registered by a software tool, and the UV-marked regions were compared with the pathogen-free sites pixel by pixel. Statistical evaluation revealed that the method indicates correctly disinfected areas with 95.05% sensitivity and 98.01% specificity. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
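As an illustration of the pixel-wise comparison described above, the following minimal sketch computes sensitivity and specificity from two registered boolean masks. It is not the authors' software: the function name, mask layout, and toy data are assumptions.

```python
import numpy as np

def pixelwise_agreement(uv_covered: np.ndarray, pathogen_free: np.ndarray):
    """Compare two registered boolean masks of the same hand model.

    uv_covered:    True where the UV dye marks the surface as disinfected.
    pathogen_free: True where cultivation found no growth (ground truth).
    Returns (sensitivity, specificity) of the UV indication.
    """
    tp = np.sum(uv_covered & pathogen_free)    # dye present, truly clean
    fn = np.sum(~uv_covered & pathogen_free)   # clean area the dye missed
    tn = np.sum(~uv_covered & ~pathogen_free)  # no dye, still contaminated
    fp = np.sum(uv_covered & ~pathogen_free)   # dye present, still contaminated
    return tp / (tp + fn), tn / (tn + fp)

# toy example with two 4x4 masks
uv = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
clean = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
print(pixelwise_agreement(uv, clean))  # perfect agreement -> (1.0, 1.0)
```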
Wavelet-based identification of rotor blades in passage-through-resonance tests
NASA Astrophysics Data System (ADS)
Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano
2018-01-01
Turbine blades are critical components in turbo engines, and their design process usually includes experimental tests to validate and/or update numerical models. These tests are generally carried out on full-scale rotors having some blades instrumented with strain gauges and usually involve a run-up or a run-down phase. The quantification of damping in these conditions is rather challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping, due both to the finite sweep velocity and to the variation of the blade natural frequencies with the rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
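To make the wavelet-based idea concrete, the sketch below tracks the ridge of a continuous wavelet transform of a toy decaying swept sine and estimates its decay rate from the ridge envelope. This is only the underlying mechanism, not the authors' procedure; the signal, the complex Morlet wavelet, and the fit window are assumptions.

```python
import numpy as np
import pywt

fs = 1024.0
t = np.arange(0, 8, 1 / fs)
# toy decaying swept sine standing in for a strain-gauge record during run-down
f_inst = 120.0 - 5.0 * t                       # slowly decreasing frequency (Hz)
x = np.exp(-0.8 * t) * np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

wavelet = "cmor1.5-1.0"                        # complex Morlet: smooth envelope
freqs = np.linspace(60.0, 160.0, 200)          # analysis band (Hz)
scales = pywt.central_frequency(wavelet) * fs / freqs
coef, _ = pywt.cwt(x, scales, wavelet, sampling_period=1 / fs)

ridge = np.abs(coef).argmax(axis=0)            # dominant scale at each instant
env = np.abs(coef)[ridge, np.arange(len(t))]   # amplitude along the ridge

# log-linear fit of the ridge envelope (edges excluded) recovers the decay rate
sl = slice(1024, 6144)
decay = -np.polyfit(t[sl], np.log(env[sl]), 1)[0]
print(f"estimated decay rate: {decay:.2f} 1/s (true value 0.80)")
```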
NASA Astrophysics Data System (ADS)
Ritter, Jennifer M.; Boone, William J.; Rubba, Peter A.
2001-06-01
This paper presents an overview of the procedures used to develop and validate an instrument to measure the self-efficacy beliefs of prospective elementary teachers about equitable science teaching and learning. The instrument, titled the SEBEST, was based on the work of Ashton and Webb (1986a, 1986b) and Bandura (1977, 1986). It was modeled after the Science Teaching Efficacy Belief Instrument (STEBI) (Riggs, 1988) and the Science Teaching Efficacy Belief Instrument for Prospective Teachers (STEBI-B) (Enochs & Riggs, 1990). Based on the standardized development procedures used and the associated evidence, the SEBEST appears to be a content- and construct-valid instrument with high internal reliability. "Most probable response" plots are introduced and used to bring meaning to SEBEST raw scores.
Iowa Department of Education Guidelines for PK-12 Competency-Based Pathways
ERIC Educational Resources Information Center
Iowa Department of Education, 2013
2013-01-01
This document provides guidelines for developing competency-based pathways in Iowa districts and schools and outlines waiver requirements and procedures. Competency-based pathways provide ways to validate learning of standards that occurs outside the structure of the traditional school and offer flexibility for schools to engage students in…
Analytical procedure validation and the quality by design paradigm.
Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno
2015-01-01
Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, from which a control strategy can be set.
Training, Simulation, the Learning Curve, and How to Reduce Complications in Urology.
Brunckhorst, Oliver; Volpe, Alessandro; van der Poel, Henk; Mottrie, Alexander; Ahmed, Kamran
2016-04-01
Urology is at the forefront of minimally invasive surgery. These procedures pose additional learning challenges and have a steep initial learning curve. Training and assessment methods in surgical specialties such as urology are known to lack clear structure and often rely on the differing operative flow experienced by individuals and institutions. This article aims to assess current urology training modalities, to identify the role of simulation within urology, to define and identify the learning curves for various urologic procedures, and to discuss ways to decrease complications in the context of training. A narrative review of the literature was conducted through December 2015 using the PubMed/Medline, Embase, and Cochrane Library databases. Evidence of the validity of training methods in urology includes observation of a procedure, mentorship and fellowship, e-learning, and simulation-based training. Learning curves for various urologic procedures have been recommended based on the available literature. The importance of structured training pathways is highlighted, with integration of modular training to ensure patient safety. Valid training pathways are available in urology. The aim in urology training should be to combine all of the available evidence to produce procedure-specific curricula that utilise the vast array of training methods available to ensure that we continue to improve patient outcomes and reduce complications. The current evidence for different training methods available in urology, including simulation-based training, was reviewed, and the learning curves for various urologic procedures were critically analysed. Based on the evidence, future pathways for urology curricula have been suggested to ensure that patient safety is improved. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
de Vries, Anna H; Muijtjens, Arno M M; van Genugten, Hilde G J; Hendrikx, Ad J M; Koldewijn, Evert L; Schout, Barbara M A; van der Vleuten, Cees P M; Wagner, Cordula; Tjiam, Irene M; van Merriënboer, Jeroen J G
2018-06-05
The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). For validation of the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed by generalizability theory. The majority of assessors and urologists indicated the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. This study provides the first evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.
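For readers unfamiliar with generalizability theory, the Phi coefficient quoted above weighs the trainee (person) variance against all absolute error sources, each divided by the number of conditions sampled. The sketch below computes Phi for a fully crossed trainee x rater x case design using the standard formula; the variance components are made up for illustration and are not taken from the study.

```python
def phi_coefficient(var, n_raters=2, n_cases=2):
    """Absolute-decision (Phi) coefficient for a fully crossed
    person x rater x case G-study. `var` maps components to estimates."""
    error = (var["rater"] / n_raters
             + var["case"] / n_cases
             + var["p_x_rater"] / n_raters
             + var["p_x_case"] / n_cases
             + var["rater_x_case"] / (n_raters * n_cases)
             + var["residual"] / (n_raters * n_cases))
    return var["person"] / (var["person"] + error)

# illustrative (made-up) variance components
components = {"person": 1.0, "rater": 0.05, "case": 0.10, "p_x_rater": 0.08,
              "p_x_case": 0.12, "rater_x_case": 0.02, "residual": 0.20}
print(round(phi_coefficient(components), 2))  # ~0.81 with two raters, two cases
```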
Procedural training and assessment of competency utilizing simulation.
Sawyer, Taylor; Gray, Megan M
2016-11-01
This review examines the current environment of neonatal procedural learning, describes an updated model of skills training, defines the role of simulation in assessing competency, and discusses potential future directions for simulation-based competency assessment. In order to maximize impact, simulation-based procedural training programs should follow a standardized and evidence-based approach to designing and evaluating educational activities. Simulation can be used to facilitate the evaluation of competency, but it must incorporate validated assessment tools to ensure quality and consistency. True competency evaluation cannot be accomplished with simulation alone: competency assessment must also include evaluations of procedural skill during actual clinical care. Future work in this area is needed to measure and track clinically meaningful patient outcomes resulting from simulation-based training, to examine the use of simulation to assist physicians undergoing re-entry to practice, and to examine the use of procedural skills simulation as part of maintenance of competency and life-long learning. Copyright © 2016 Elsevier Inc. All rights reserved.
A Descriptive Analysis of the Use of Workplace-Based Assessments in UK Surgical Training.
Shalhoub, Joseph; Santos, Cristel; Bussey, Maria; Eardley, Ian; Allum, William
2015-01-01
Workplace-based assessments (WBAs) were introduced formally in the UK in 2007. The aim of the study was to describe the use of WBAs by UK surgical trainees and examine variations by training region, specialty, and level of training. The database of the Intercollegiate Surgical Curriculum Programme was examined for WBAs between August 2007 and July 2013, with in-depth analysis of 2 periods: August 2011 to July 2012 and August 2012 to July 2013. The number of validated WBAs per trainee per year increased more than 7-fold, from a median of 6 per trainee in 2007 to 2008, to 39 in 2011 to 2012, and 44 in 2012 to 2013. In the period 2011 to 2012, 58.4% of core trainees completed the recommended 40 WBAs, with only 38.1% of specialty trainees achieving 40 validated WBAs. In the period 2012 to 2013, these proportions increased to 67.7% and 57.0% for core and specialty trainees, respectively. Core trainees completed more WBAs per year than specialty trainees in the same training region. London core trainees completed the highest numbers of WBAs in both the periods 2011 to 2012 (median 67) and 2012 to 2013 (median 74). There was a peak in WBAs completed by London specialty trainees in the period 2012 to 2013 (median 63). The largest numbers of validated WBAs were completed at ST1/CT1 (specialty surgical training year 1/core surgical training year 1), with a gradual decrease in median WBAs to ST4, followed by a plateau; in the period 2012 to 2013, there was an increase in WBAs at ST8. Core surgical trainees complete ~50% "operative" (procedure-based assessment/direct observation of procedural skills) and ~50% "nonoperative" assessments (case-based discussion/clinical evaluation exercise). During specialty training, procedure-based assessments represented ~46% of WBAs, direct observation of procedural skills 11.2%, case-based discussion ~23%, and clinical evaluation exercise ~15%. UK surgical trainees are, on average, undertaking 1 WBA per week. Variation exists in the use of WBAs between training regions. Core trainees tend to use the spectrum of WBAs more frequently than their senior colleagues do. Further work is required to examine the role of WBAs in assessment, as well as the engagement and training of trainers in WBA processes and validation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Real-Time MRI-Guided Cardiac Cryo-Ablation: A Feasibility Study.
Kholmovski, Eugene G; Coulombe, Nicolas; Silvernagel, Joshua; Angel, Nathan; Parker, Dennis; Macleod, Rob; Marrouche, Nassir; Ranjan, Ravi
2016-05-01
MRI-based ablation provides an attractive capability of seeing ablation-related tissue changes in real time. Here we describe a real-time MRI-based cardiac cryo-ablation system. Studies were performed in a canine model (n = 4) using MR-compatible cryo-ablation devices built for animal use: a focal cryo-catheter with an 8 mm tip and a 28 mm diameter cryo-balloon. The main steps of the MRI-guided cardiac cryo-ablation procedure (real-time navigation, confirmation of tip-tissue contact, confirmation of vessel occlusion, real-time monitoring of freeze zone formation, and intra-procedural assessment of lesions) were validated in a 3 Tesla clinical MRI scanner. The MRI-compatible cryo-devices were advanced to the right atrium (RA) and right ventricle (RV), and their position was confirmed by real-time MRI. Specifically, contact between the catheter tip and myocardium and occlusion of the superior vena cava (SVC) by the balloon were visually validated. Focal cryo-lesions were created in the RV septum. Circumferential ablation of the SVC-RA junction with no gaps was achieved using the cryo-balloon. Real-time visualization of freeze zone formation was achieved in all studies when lesions were successfully created. The ablations and the presence of collateral damage were confirmed by T1-weighted and late gadolinium enhancement MRI and gross pathological examination. This study confirms the feasibility of an MRI-based cryo-ablation system in performing cardiac ablation procedures. The system allows real-time catheter navigation, confirmation of catheter tip-tissue contact, validation of vessel occlusion by the cryo-balloon, real-time monitoring of freeze zone formation, and intra-procedural assessment of ablations, including collateral damage. © 2016 Wiley Periodicals, Inc.
A posteriori model validation for the temporal order of directed functional connectivity maps.
Beltz, Adriene M; Molenaar, Peter C M
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
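The residual white noise check at the heart of this validation can be reproduced with standard time series tools. The sketch below fits a deliberately under-specified first-order VAR to toy three-region data containing a second-order connection and flags the unmodeled dependence; a Ljung-Box test stands in here for the study's white noise tests, and the data and lag choices are assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
T = 400
data = rng.standard_normal((T, 3))   # toy 3-region series
data[2:, 0] += 0.5 * data[:-2, 0]    # second-order (lag-2) self-connection

fit = VAR(data).fit(1)               # deliberately first-order model
resid = fit.resid

# white noise check on each region's residuals: a small p-value flags
# unmodeled higher-order sequential dependence, prompting map revision
for i in range(resid.shape[1]):
    p = acorr_ljungbox(resid[:, i], lags=[5])["lb_pvalue"].iloc[0]
    print(f"region {i}: Ljung-Box p = {p:.4f}")
```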
Taylor, Diana; Upadhyay, Ushma D; Fjerstad, Mary; Battistelli, Molly F; Weitz, Tracy A; Paul, Maureen E
2017-07-01
To develop and validate standardized criteria for assessing abortion-related incidents (adverse events, morbidities, near misses) for first-trimester aspiration abortion procedures and to demonstrate the utility of a standardized framework [the Procedural Abortion Incident Reporting & Surveillance (PAIRS) Framework] for estimating serious abortion-related adverse events. As part of a California-based study of early aspiration abortion provision conducted between 2007 and 2013, we developed and validated a standardized framework for defining and monitoring first-trimester (≤14weeks) aspiration abortion morbidity and adverse events using multiple methods: a literature review, framework criteria testing with empirical data, repeated expert reviews and data-based revisions to the framework. The final framework distinguishes incidents resulting from procedural abortion care (adverse events) from morbidity related to pregnancy, the abortion process and other nonabortion related conditions. It further classifies incidents by diagnosis (confirmatory data, etiology, risk factors), management (treatment type and location), timing (immediate or delayed), seriousness (minor or major) and outcome. Empirical validation of the framework using data from 19,673 women receiving aspiration abortions revealed almost an equal proportion of total adverse events (n=205, 1.04%) and total abortion- or pregnancy-related morbidity (n=194, 0.99%). The majority of adverse events were due to retained products of conception (0.37%), failed attempted abortion (0.15%) and postabortion infection (0.17%). Serious or major adverse events were rare (n=11, 0.06%). Distinguishing morbidity diagnoses from adverse events using a standardized, empirically tested framework confirms the very low frequency of serious adverse events related to clinic-based abortion care. The PAIRS Framework provides a useful set of tools to systematically classify and monitor abortion-related incidents for first-trimester aspiration abortion procedures. Standardization will assist healthcare providers, researchers and policymakers to anticipate morbidity and prevent abortion adverse events, improve care metrics and enhance abortion quality. Copyright © 2017 Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-24
...; Comment Request; Web-Based Assessment of the Clinical Studies Support Center (CSSC) Summary: Under the... current valid OMB control number. Proposed Collection: Title: Web-Based Assessment of the Clinical Studies... Operations and Procedures (MOP); coordinating meeting space and logistics for in-person meetings, Web...
Validating Trial-Based Functional Analyses in Mainstream Primary School Classrooms
ERIC Educational Resources Information Center
Austin, Jennifer L.; Groves, Emily A.; Reynish, Lisa C.; Francis, Laura L.
2015-01-01
There is growing evidence to support the use of trial-based functional analyses, particularly in classroom settings. However, there currently are no evaluations of this procedure with typically developing children. Furthermore, it is possible that refinements may be needed to adapt trial-based analyses to mainstream classrooms. This study was…
Measuring cardiac waste: the premier cardiac waste measures.
Lowe, Timothy J; Partovian, Chohreh; Kroch, Eugene; Martin, John; Bankowitz, Richard
2014-01-01
The authors developed 8 measures of waste associated with cardiac procedures to assist hospitals in comparing their performance with peer facilities. Measure selection was based on review of the research literature, clinical guidelines, and consultation with key stakeholders. Development and validation used the data from 261 hospitals in a split-sample design. Measures were risk adjusted using Premier's CareScience methodologies or mean peer value based on Medicare Severity Diagnosis-Related Group assignment. High variability was found in resource utilization across facilities. Validation of the measures using item-to-total correlations (range = 0.27-0.78), Cronbach α (.88), and Spearman rank correlation (0.92) showed high reliability and discriminatory power. Because of the level of variability observed among hospitals, this study suggests that there is opportunity for facilities to design successful waste reduction programs targeting cardiac-device procedures.
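Internal-consistency validation of the kind reported here (item-to-total correlations plus Cronbach's α) is straightforward to compute. The sketch below implements Cronbach's α for a hospitals x measures score matrix; the toy data and the 261 x 8 shape merely echo the abstract and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_hospitals, n_measures) matrix of waste-measure scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
common = rng.standard_normal((261, 1))                  # shared "waste" factor
scores = common + 0.6 * rng.standard_normal((261, 8))   # 8 correlated measures
print(round(cronbach_alpha(scores), 2))
```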
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
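One simple way to realize this idea, sketched under assumptions below, is to take as the penalty the smallest value that zeroes out all coefficients when the responses are permuted (and thus null), summarized over many permutations. The median summary and the Gaussian-loss lambda-max formula are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def permutation_lambda(X, y, n_perm=100, seed=0):
    """Median over permutations of the smallest penalty that zeroes out
    all coefficients for the permuted (null) response."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xc = (X - X.mean(0)) / X.std(0)
    lams = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        yc = y_perm - y_perm.mean()
        lams.append(np.max(np.abs(Xc.T @ yc)) / n)  # lambda_max, sklearn scaling
    return np.median(lams)

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 50))
y = 2.0 * X[:, 0] + rng.standard_normal(200)   # one true predictor
lam = permutation_lambda(X, y)
Xc = (X - X.mean(0)) / X.std(0)
fit = Lasso(alpha=lam).fit(Xc, y - y.mean())
print(lam, np.flatnonzero(fit.coef_))          # selected variables
```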
Automated Assessment of the Quality of Depression Websites
Tang, Thanh Tin; Hawking, David; Christensen, Helen
2005-01-01
Background Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health’s guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. Results The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than that for the AQA. When sites with zero PageRanks were included the association was weak and non-significant (r=0.23, P=.22). When sites with zero PageRanks were excluded, the correlation was moderate (r=.61, P=.002). Conclusions Depression websites of different evidence-based quality can be differentiated using an automated system. If replicable, generalizable to other health conditions and deployed in a consumer-friendly form, the automated procedure described here could represent an important advance for consumers of Internet medical information. PMID:16403723
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied, showing through a practical example how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
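The method of manufactured solutions can be shown in miniature on a far simpler problem than a turbulence code. In the sketch below, a solution is manufactured for a 1D Poisson problem, the matching source term is derived analytically, and the observed order of accuracy of a second-order finite-difference solver is checked against its formal order; the equation, solver, and grids are illustrative assumptions, not the GBS setup.

```python
import numpy as np

def solve_poisson(n, source):
    """Second-order FD solve of -u'' = s on (0,1), u(0)=u(1)=0."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)[1:-1]
    A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    return x, np.linalg.solve(A, source(x))

u_m = lambda x: np.sin(np.pi * x)             # manufactured solution
s_m = lambda x: np.pi**2 * np.sin(np.pi * x)  # source implied by -u_m''

errors = {}
for n in (32, 64, 128):
    x, u = solve_poisson(n, s_m)
    errors[n] = np.max(np.abs(u - u_m(x)))

# observed order of accuracy should approach the scheme's formal order (2)
for n1, n2 in ((32, 64), (64, 128)):
    p = np.log(errors[n1] / errors[n2]) / np.log(2.0)
    print(f"n={n1}->{n2}: observed order = {p:.2f}")
```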
Gijsbers, H J H; Lauret, G J; van Hofwegen, A; van Dockum, T A; Teijink, J A W; Hendriks, H J M
2016-06-01
The aim of the study was to develop quality indicators (QIs) for the physiotherapy management of patients with intermittent claudication (IC) in the Netherlands. As part of an international six-step method to develop QIs, an online Delphi survey procedure was completed. After two Delphi rounds, a validation round was performed. Twenty-six experts were recruited to participate in this study. Twenty-four experts completed the two Delphi rounds. A third round was conducted inviting 1200 qualified and registered physiotherapists of the Dutch integrated care network 'Claudicationet' to validate a draft set of quality indicators. Out of 83 potential QIs in the Dutch physiotherapy guideline on 'Intermittent claudication', consensus among the experts selected nine indicators. All nine quality indicators were validated by 300 physiotherapists. The final set of nine indicators was thus derived from (1) a Dutch evidence-based physiotherapy guideline, (2) an expert Delphi procedure and (3) a validation by 300 physiotherapists. This set of indicators should now be validated in clinical practice. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Validation of Antibody-Based Strategies for Diagnosis of Pediatric Celiac Disease Without Biopsy.
Wolf, Johannes; Petroff, David; Richter, Thomas; Auth, Marcus K H; Uhlig, Holm H; Laass, Martin W; Lauenstein, Peter; Krahl, Andreas; Händel, Norman; de Laffolie, Jan; Hauer, Almuthe C; Kehler, Thomas; Flemming, Gunter; Schmidt, Frank; Rodrigues, Astor; Hasenclever, Dirk; Mothes, Thomas
2017-08-01
A diagnosis of celiac disease is made based on clinical, genetic, serologic, and duodenal morphology features. Recent pediatric guidelines, based largely on retrospective data, propose omitting biopsy analysis for patients with concentrations of IgA against tissue transglutaminase (IgA-TTG) >10-fold the upper limit of normal (ULN) and if further criteria are met. A retrospective study concluded that measurements of IgA-TTG and total IgA, or IgA-TTG and IgG against deamidated gliadin (IgG-DGL) could identify patients with and without celiac disease. Patients were assigned to categories of no celiac disease, celiac disease, or biopsy required, based entirely on antibody assays. We aimed to validate the positive and negative predictive values (PPV and NPV) of these diagnostic procedures. We performed a prospective study of 898 children undergoing duodenal biopsy analysis to confirm or rule out celiac disease at 13 centers in Europe. We compared findings from serologic analysis with findings from biopsy analyses, follow-up data, and diagnoses made by the pediatric gastroenterologists (celiac disease, no celiac disease, or no final diagnosis). Assays to measure IgA-TTG, IgG-DGL, and endomysium antibodies were performed by blinded researchers, and tissue sections were analyzed by local and blinded reference pathologists. We validated 2 procedures for diagnosis: total-IgA and IgA-TTG (the TTG-IgA procedure), as well as IgG-DGL with IgA-TTG (TTG-DGL procedure). Patients were assigned to categories of no celiac disease if all assays found antibody concentrations <1-fold the ULN, or celiac disease if at least 1 assay measured antibody concentrations >10-fold the ULN. All other cases were considered to require biopsy analysis. ULN values were calculated using the cutoff levels suggested by the test kit manufacturers. HLA typing was performed for 449 participants. We used models that considered how specificity values change with prevalence to extrapolate the PPV and NPV to populations with lower prevalence of celiac disease. Of the participants, 592 were found to have celiac disease, 345 were found not to have celiac disease, and 24 had no final diagnosis. The TTG-IgA procedure identified patients with celiac disease with a PPV of 0.988 and an NPV of 0.934; the TTG-DGL procedure identified patients with celiac disease with a PPV of 0.988 and an NPV of 0.958. Based on our extrapolation model, we estimated that the PPV and NPV would remain >0.95 even at a disease prevalence as low as 4%. Tests for endomysium antibodies and HLA type did not increase the PPV of samples with levels of IgA-TTG ≥10-fold the ULN. Notably, 4.2% of pathologists disagreed in their analyses of duodenal morphology-a rate comparable to the error rate for serologic assays. In a prospective study, we validated the TTG-IgA procedure and the TTG-DGL procedure in identification of pediatric patients with or without celiac disease, without biopsy. German Clinical Trials Registry no.: DRKS00003854. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
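The prevalence extrapolation mentioned above rests on the standard Bayes relationship between sensitivity, specificity, prevalence, and predictive values. The sketch below applies that relationship at roughly the study cohort's prevalence and at the 4% prevalence quoted in the abstract; the sensitivity and specificity values are placeholders, not the study's estimates, and the authors' model additionally lets specificity vary with prevalence.

```python
def ppv_npv(sens, spec, prev):
    """Bayes' rule: predictive values at a given disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# illustrative accuracy values at the cohort-like and low-prevalence settings
for prev in (0.63, 0.04):
    ppv, npv = ppv_npv(sens=0.95, spec=0.99, prev=prev)
    print(f"prevalence {prev:.2f}: PPV = {ppv:.3f}, NPV = {npv:.3f}")
```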
Banna, Jinan C; Vera Becerra, Luz E; Kaiser, Lucia L; Townsend, Marilyn S
2010-01-01
Development of outcome measures relevant to health nutrition behaviors requires a rigorous process of testing and revision. Whereas researchers often report performance of quantitative data collection to assess questionnaire validity and reliability, qualitative testing procedures are often overlooked. This report outlines a procedure for assessing face validity of a Spanish-language dietary assessment tool. Reviewing the literature produced no rigorously validated Spanish-language food behavior assessment tools for the US Department of Agriculture's food assistance and education programs. In response to this need, this study evaluated the face validity of a Spanish-language food behavior checklist adapted from a 16-item English version of a food behavior checklist shown to be valid and reliable for limited-resource English speakers. The English version was translated using rigorous methods involving initial translation by one party and creation of five possible versions. Photos were modified based on client input and new photos were taken as necessary. A sample of low-income, Spanish-speaking women completed cognitive interviews (n=20). Spanish translation experts (n=7) fluent in both languages and familiar with both cultures made minor modifications but essentially approved client preferences. The resulting checklist generated a readability score of 93, indicating low reading difficulty. The Spanish-language checklist has adequate face validity in the target population and is ready for further validation using convergent measures. At the conclusion of testing, this instrument may be used to evaluate nutrition education interventions in California. These qualitative procedures provide a framework for designing evaluation tools for low-literate audiences participating in the US Department of Agriculture food assistance and education programs. Copyright 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Makransky, Guido; Havmose, Philip; Vang, Maria Louison; Andersen, Tonny Elmose; Nielsen, Tine
2017-01-01
The aim of this study was to evaluate the predictive validity of a two-step admissions procedure that included a cognitive ability test followed by multiple mini-interviews (MMIs) used to assess non-cognitive skills, compared to grade-based admissions relative to subsequent drop-out rates and academic achievement after one and two years of study.…
James S. Han; Theodore Mianowski; Yi-yu Lin
1999-01-01
The efficacy of fiber length measurement techniques such as digitizing, the Kajaani procedure, and NIH Image are compared in order to determine the optimal tool. Kenaf bast fibers, aspen, and red pine fibers were collected from different anatomical parts, and the fiber lengths were compared using various analytical tools. A statistical analysis on the validity of the...
Evaluating a Dental Diagnostic Terminology in an Electronic Health Record
White, Joel M.; Kalenderian, Elsbeth; Stark, Paul C.; Ramoni, Rachel L.; Vaderhobli, Ram; Walji, Muhammad F.
2011-01-01
Standardized treatment procedure codes and terms are routinely used in dentistry. Utilization of a diagnostic terminology is common in medicine, but no satisfactory or commonly standardized dental diagnostic terminology is available at this time. Recent advances in dental informatics have provided an opportunity for the inclusion of diagnostic codes and terms as part of treatment planning and documentation in the patient treatment history. This article reports the results of the use of a diagnostic coding system in a large dental school's predoctoral clinical practice. A list of diagnostic codes and terms, called Z codes, was developed by dental faculty members. The diagnostic codes and terms were implemented into an electronic health record (EHR) for use in a predoctoral dental clinic. The utilization of diagnostic terms was quantified. The validity of Z code entry was evaluated by comparing the diagnostic term entered to the procedure performed, where valid diagnosis-procedure associations were determined by consensus among three calibrated academically based dentists. A total of 115,004 dental procedures were entered into the EHR during the year sampled. Of those, 43,053 were excluded from this analysis because they represented diagnostic or other procedures unrelated to treatments. Among the 71,951 treatment procedures, 27,973 had diagnoses assigned to them, an overall utilization of 38.9 percent. Of the 147 available Z codes, ninety-three were used (63.3 percent). There were 335 unique procedures provided and 2,127 procedure/diagnosis pairs captured in the EHR. Overall, 76.7 percent of the diagnoses entered were valid. We conclude that a dental diagnostic terminology can be incorporated within an electronic health record and utilized in an academic clinical environment. Challenges remain in term development, implementation, and ease of use that, if resolved, would improve utilization. PMID:21546594
Abdellaziz, Lobna M; Hosny, Mervat M
2011-01-01
Three simple spectrophotometric and atomic absorption spectrometric methods were developed and validated for the determination of moxifloxacin HCl in pure form and in pharmaceutical formulations. Method (A) is a kinetic method based on the oxidation of moxifloxacin HCl by Fe(III) ion in the presence of 1,10 o-phenanthroline (o-phen). Method (B) describes spectrophotometric procedures for the determination of moxifloxacin HCl based on its ability to reduce Fe(III) to Fe(II), the latter being rapidly converted to the corresponding stable coloured complex after reacting with 2,2' bipyridyl (bipy). The tris-complexes formed in methods (A) and (B) were carefully studied, and their absorbances were measured at 510 and 520 nm, respectively. Method (C) is based on the formation of an ion-pair associate between the drug and bismuth(III) tetraiodide in acidic medium, forming orange-red ion-pair associates. This associate can be quantitatively determined by three different procedures. The formed precipitate is either filtered off, dissolved in acetone and quantified spectrophotometrically at 462 nm (Procedure 1), or decomposed by hydrochloric acid, with the bismuth content determined by direct atomic absorption spectrometry (Procedure 2). Also, the residual unreacted metal complex in the filtrate is determined through its metal content using an indirect atomic absorption spectrometric technique (Procedure 3). All the proposed methods were validated according to the International Conference on Harmonization (ICH) guidelines; the three proposed methods permit the determination of moxifloxacin HCl in the ranges of 0.8-6 and 0.8-4 for methods A and B, and 16-96, 16-96 and 16-72 for Procedures 1-3 in method C. The limits of detection and quantitation were calculated; the precision of the methods was satisfactory, and the values of relative standard deviations did not exceed 2%. The proposed methods were successfully applied to determine the drug in its pharmaceutical formulations without interference from the common excipients. The results obtained by the proposed methods were comparable with those obtained by the reference method.
Vogl, Jochen; Paz, Boaz; Koenig, Maren; Pritzkow, Wolfgang
2013-03-01
A modified Pb-matrix separation procedure using NH4HCO3 solution as eluent has been developed and validated for determination of Pb isotope amount ratios by thermal ionization mass spectrometry. The procedure is based on chromatographic separation using the Pb·Spec resin and an in-house-prepared NH4HCO3 solution serving as eluent. The advantages of this eluent are low Pb blanks (<40 pg/mL) and the property that NH4HCO3 can be easily removed by use of a heating step (>60 °C). Pb recovery is >95 % for water samples. For archaeological silver samples, however, the Pb recovery is reduced to approximately 50 %, but causes no bias in the determination of Pb isotope amount ratios. The validated procedure was used to determine lead isotope amount ratios in Trojan silver artefacts with expanded uncertainties (k = 2) <0.09 %.
Collection, quality control and delivery of ground-based magnetic data during ESA's Swarm mission
NASA Astrophysics Data System (ADS)
Macmillan, Susan; Humphries, Thomas; Flower, Simon; Swan, Anthony
2016-04-01
Ground-based magnetic data are used in a variety of ways when analysing satellite data. Selecting satellite data often involves the use of magnetic disturbance indices derived from ground-based stations and inverting satellite magnetic data for models of fields from various sources often requires ground-based data. Ground-based data can also be valuable independent data for validation purposes. We summarise data collection and quality control procedures in place at the British Geological Survey for global ground-based observatory and repeat station data. Whilst ongoing participation in the ICSU World Data System and INTERMAGNET facilitates this work, additional procedures have been specially developed for the Swarm mission. We describe these in detail.
Video-augmented feedback for procedural performance.
Wittler, Mary; Hartman, Nicholas; Manthey, David; Hiestand, Brian; Askew, Kim
2016-06-01
Residency programs must assess residents' achievement of core competencies for clinical and procedural skills. Video-augmented feedback may facilitate procedural skill acquisition and promote more accurate self-assessment. We conducted a randomized controlled study to investigate whether video-augmented verbal feedback leads to increased procedural skill and improved accuracy of self-assessment compared to verbal-only feedback. Participants were evaluated during procedural training for ultrasound-guided internal jugular central venous catheter (US IJ CVC) placement. All participants received feedback based on a validated 30-point checklist for US IJ CVC placement and a validated 6-point procedural global rating scale. Scores in both groups improved by a mean of 9.6 points (95% CI: 7.8-11.4) on the 30-point checklist, with no difference between groups in mean score improvement on the global rating scale. With regard to self-assessment, participant self-rating diverged from faculty scoring, increasingly so after receiving feedback. Residents rated highly by faculty underestimated their skill, while those rated more poorly demonstrated increasing overestimation. Accuracy of self-assessment was not improved by the addition of video. While feedback advanced the skill of the resident, video-augmented feedback did not enhance skill acquisition or improve the accuracy of resident self-assessment compared to standard feedback.
Soares, Cristina M Dias; Alves, Rita C; Casal, Susana; Oliveira, M Beatriz P P; Fernandes, José Oliveira
2010-04-01
The present study describes the development and validation of a new method based on a matrix solid-phase dispersion (MSPD) sample preparation procedure followed by GC-MS for the determination of acrylamide levels in coffee (ground coffee and brewed coffee) and coffee substitute samples. Samples were dispersed in C18 sorbent and the mixture was further packed into a preconditioned custom-made ISOLUTE bilayered SPE column (C18/Multimode; 1 g + 1 g). Acrylamide was subsequently eluted with water, then derivatized with bromine and quantified by GC-MS in SIM mode. The MSPD/GC-MS method presented a LOD of 5 µg/kg and a LOQ of 10 µg/kg. Intra- and interday precisions ranged from 2% to 4% and 4% to 10%, respectively. To evaluate the performance of the method, 11 samples of ground and brewed coffee and coffee substitutes were simultaneously analyzed by the developed method and by a previously validated method based on a liquid-extraction (LE) procedure, and the results were compared, showing a high correlation between them.
Translating the simulation of procedural drilling techniques for interactive neurosurgical training.
Stredney, Don; Rezai, Ali R; Prevedello, Daniel M; Elder, J Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J
2013-10-01
Through previous efforts we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. These volumetric data help drive an interactive multisensory, ie, visual (stereo), aural (stereo), and tactile, simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the Congress of Neurological Surgeons simulation initiative. To deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. We discuss issues of biofidelity and our methods to provide objective, quantitative and automated assessment for the residents. We conclude with a discussion of our experiences by reporting preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principles and define the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum.
McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W. Nick; Zimmerman, M. Bridget; Ersig, Anne L.
2012-01-01
This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children’s responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, the Children, Parents and Distraction (CPaD), is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure. PMID:22805121
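The modeling pipeline described above (automatic feature selection from a large item pool feeding a support vector regression, checked by cross-validation) can be sketched with scikit-learn. The item counts echo the abstract, but the data, the univariate selector, and all hyperparameters are assumptions; the study's actual feature selection and validation scheme may differ.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 255))   # stand-ins for the 255 candidate items
y = X[:, :5] @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + rng.standard_normal(300)

model = Pipeline([
    ("select", SelectKBest(f_regression, k=44)),  # keep 44 predictive items
    ("svr", SVR(kernel="rbf", C=1.0)),            # support vector regression
])
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```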
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate inspection system, personnel, and protocol demonstrating 0.90 POD with 95% confidence at critical flaw sizes, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
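The 0.90 POD / 95% confidence target has a classic binomial reading: with zero misses, one needs enough consecutive detections that a detector with true POD below 0.90 would pass with probability at most 5%, which gives the familiar "29 of 29" demonstration. The sketch below simply solves that inequality; DOEPOD's sequential analysis generalizes this idea rather than reducing to it.

```python
def min_trials_for_pod(pod=0.90, confidence=0.95):
    """Smallest number of consecutive detections (zero misses) such that a
    detector with true POD below `pod` would pass with probability
    at most 1 - confidence."""
    n = 1
    while pod ** n > 1 - confidence:
        n += 1
    return n

print(min_trials_for_pod())  # -> 29, the classic "29 of 29" demonstration
```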
Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll
2012-08-01
A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling with Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while the recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.
Evaluation of a virtual-reality-based simulator using passive haptic feedback for knee arthroscopy.
Fucentese, Sandro F; Rahm, Stefan; Wieser, Karl; Spillmann, Jonas; Harders, Matthias; Koch, Peter P
2015-04-01
The aim of this work is to determine the face validity and construct validity of a new virtual-reality-based simulator for diagnostic and therapeutic knee arthroscopy. The study tests a novel arthroscopic simulator based on passive haptics. Sixty-eight participants were grouped into novices, intermediates, and experts. All participants completed two exercises. In order to establish face validity, all participants filled out a questionnaire concerning different aspects of simulator realism and training capacity, rating statements on a seven-point Likert scale (range 1-7). Construct validity was tested by comparing various simulator metric values between novices and experts. Face validity could be established: overall realism was rated with a mean value of 5.5 points. Global training capacity scored a mean value of 5.9. Participants considered the simulator useful for procedural training of diagnostic and therapeutic arthroscopy. In the foreign body removal exercise, experts were overall significantly faster in the whole procedure (6 min 24 s vs. 8 min 24 s, p < 0.001), took less time to complete the diagnostic tour (2 min 49 s vs. 3 min 32 s, p = 0.027), and had a shorter camera path length (186 vs. 246 cm, p = 0.006). The simulator achieved high scores in terms of realism. It was regarded as a useful training tool that is also capable of differentiating between varying levels of arthroscopic experience. Nevertheless, further improvements of the simulator, especially in the field of therapeutic arthroscopy, are desirable. In general, the findings support that virtual-reality-based simulation using passive haptics has the potential to complement conventional training of knee arthroscopy skills. Level of evidence: II.
NASA Astrophysics Data System (ADS)
Breden, Maxime; Castelli, Roberto
2018-05-01
In this paper, we present and apply a computer-assisted method to study steady states of a triangular cross-diffusion system. Our approach consists of an a posteriori validation procedure, based on a fixed-point argument around a numerically computed solution, in the spirit of the Newton-Kantorovich theorem. It allows us to prove the existence of various non-homogeneous steady states for different parameter values. In some situations, we obtain as many as 13 coexisting steady states. We also apply the a posteriori validation procedure to study the linear stability of the obtained steady states, proving that many of them are in fact unstable.
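The fixed-point argument can be illustrated on a scalar caricature: given a numerical zero of f, one checks that the quasi-Newton operator T(x) = x - f(x)/f'(x_bar) contracts a small ball around it, which pins down a true zero nearby. The sketch below does this for f(x) = x^2 - 2 with ordinary floating point; a genuine computer-assisted proof, like the one in the paper, replaces these estimates with rigorous interval arithmetic in function space.

```python
import math

f = lambda x: x**2 - 2.0
df = lambda x: 2.0 * x

x_bar = 1.41421                       # approximate zero from a numerical method
Y = abs(f(x_bar) / df(x_bar))         # defect bound: |T(x_bar) - x_bar|

# For this f, |1 - f'(x)/f'(x_bar)| <= r / x_bar when |x - x_bar| <= r,
# so any radius r with  Y + (r / x_bar) * r <= r  gives a contraction,
# hence a unique true zero within distance r of x_bar.
disc = 1.0 - 4.0 * Y / x_bar
assert disc > 0, "validation failed: defect too large"
r = (1.0 - math.sqrt(disc)) * x_bar / 2.0   # smallest admissible radius
print(f"unique zero of f within {r:.2e} of {x_bar}")
```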
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. ?? Springer Science+Business Media, LLC 2007.
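A minimal sketch of the kind of jackknife-plus-bootstrap machinery the abstract describes, assuming numpy arrays and a generic `predict(X_train, y_train, X_target)` callable; the function and variable names are illustrative, and the paper's actual procedure (including the cross-validation model-selection step) is more elaborate.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def jackknife_bootstrap_total(predict, X_train, y_train, X_target,
                              n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile bounds for a regional total volume.

    Leave-one-out (jackknife) replicates of the predicted total feed a
    bootstrap resampling step that yields confidence bounds, loosely in
    the spirit of the combined procedure described in the abstract.
    """
    rng = np.random.default_rng(seed)
    n = len(y_train)
    totals = np.empty(n)
    for i in range(n):                        # jackknife: drop one site at a time
        keep = np.arange(n) != i
        totals[i] = predict(X_train[keep], y_train[keep], X_target).sum()
    boot = rng.choice(totals, size=(n_boot, n), replace=True).mean(axis=1)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return totals.mean(), (lo, hi)

def knn_predict(Xtr, ytr, Xte):
    # a simple local nonparametric model standing in for the paper's choice
    return KNeighborsRegressor(n_neighbors=5).fit(Xtr, ytr).predict(Xte)
```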
Mechanistic design concepts for conventional flexible pavements
NASA Astrophysics Data System (ADS)
Elliott, R. P.; Thompson, M. R.
1985-02-01
Mechanistic design concepts for conventional flexible pavements (asphalt concrete (AC) surface plus granular base/subbase) for highways are proposed and validated. The procedure is based on ILLI-PAVE, a stress-dependent finite element computer program, coupled with appropriate transfer functions. Two design criteria are considered: AC flexural fatigue cracking and subgrade rutting. Algorithms were developed relating pavement response parameters (stresses, strains, deflections) to AC thickness, AC moduli, granular layer thickness, and subgrade moduli. Extensive analyses of the AASHO Road Test flexible pavement data are presented supporting the validity of the proposed concepts.
Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft]
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.
1989-01-01
Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.
A posteriori model validation for the temporal order of directed functional connectivity maps
Beltz, Adriene M.; Molenaar, Peter C. M.
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
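To make the residual-whiteness step concrete, here is a minimal sketch that flags regions whose one-step-ahead prediction errors fail a portmanteau whiteness test. The Ljung-Box test stands in for the white noise tests named in the abstract, the Lagrange-multiplier-guided map revision is not shown, and the array shape and names are assumptions.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def flag_nonwhite_regions(residuals, lags=10, alpha=0.05):
    """Indices of regions whose one-step-ahead prediction errors fail a
    Ljung-Box whiteness test, i.e. candidates for higher-order lags.

    residuals: array of shape (timepoints, regions) of model residuals.
    """
    flagged = []
    for j in range(residuals.shape[1]):
        lb = acorr_ljungbox(residuals[:, j], lags=[lags], return_df=True)
        if lb["lb_pvalue"].iloc[0] < alpha:
            flagged.append(j)   # unmodeled sequential dependence detected
    return flagged
```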
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) General Principles § 60-3.6 Use of selection procedures which have not been validated. A. Use of alternate... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Use of selection procedures which have not been validated. 60-3.6 Section 60-3.6 Public Contracts and Property Management...
de Person, Marine; Hazotte, Aurélie; Elfakir, Claire; Lafosse, Michel
2005-07-22
A new procedure based on hydrophilic interaction chromatography coupled to tandem mass spectrometry (ionisation by pneumatically assisted electrospray in negative ion mode) is developed and validated for the simultaneous determination of underivatised taurine and methionine in beverages rich in carbohydrates, such as energy drinks. No initial clean-up procedure and no sample derivatisation are required. Satisfactory analysis was obtained on an Astec apHera NH2 (150 mm x 4.6 mm; 5 microm) column with methanol-water (60/40) as mobile phase. The method was validated in terms of specificity, detection limits, linearity, accuracy, precision and stability, using threonine as internal standard. The potential effects of matrix and endogenous amino acid content were also examined. The limits of detection in the beverage varied from 20 microg L(-1) for taurine to 50 microg L(-1) for methionine.
Fault Injection Validation of a Safety-Critical TMR System
NASA Astrophysics Data System (ADS)
Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata
2016-08-01
Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly the space domain, to complement traditional procedures for validating a SIL (Safety Integrity Level) 4 system for railway signalling, implementing a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour were also observed, helping to understand worst-case scenarios.
Evaluation of Three Pain Assessment Scales Used for Ventilated Neonates.
Huang, Xiao-Zhi; Li, Li; Zhou, Jun; He, Fang; Zhong, Chun-Xia; Wang, Bin
2018-06-26
To compare and evaluate the reliability, validity, feasibility, clinical utility, and nurses' preference of the Premature Infant Pain Profile-Revised (PIPP-R), the Neonatal Pain, Agitation, and Sedation Scale (N-PASS), and the Neonatal Infant Acute Pain Assessment Scale (NIAPAS) for procedural pain in ventilated neonates. Procedural pain is a common phenomenon but is underassessed and undermanaged in hospitalized neonates, and information to help clinicians select pain measurements that improve neonatal care and outcomes is still limited. A prospective observational study adhering to the relevant EQUATOR guidelines was conducted. A total of 1080 pain assessments were made on 90 neonates by two nurses independently, using the three scales while viewing three phases of videotaped painful (arterial blood sampling) and non-painful (diaper change) procedures. Internal consistency, inter-rater reliability, discriminant validity, concurrent validity and convergent validity of the scales were analyzed. Feasibility, clinical utility, and nurses' preference of the scales were also investigated. All three scales showed excellent inter-rater coefficients (from 0.991 to 0.992) and good internal consistency (0.733 for the PIPP-R, 0.837 for the N-PASS and 0.836 for the NIAPAS). Scores for painful and nonpainful procedures on the three scales changed significantly across the phases. There was a strong correlation between the three scales, with adequate limits of agreement. The mean scores of the N-PASS for feasibility and utility were significantly higher than those of the NIAPAS, but not significantly higher than those of the PIPP-R. The N-PASS was preferred by 55.9% of the nurses, followed by the NIAPAS (23.5%) and the PIPP-R (20.6%). All three scales are reliable and valid, but the N-PASS and the NIAPAS perform better in reliability. The N-PASS appears to be a better choice for frontline nurses assessing procedural pain in ventilated neonates, based on its good feasibility, utility, and nurses' preference.
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2011 CFR
2011-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2014 CFR
2014-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2012 CFR
2012-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2013 CFR
2013-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
Evaluation of physically based quasi-linear and conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method, formulated to account for the nonlinearity of the channel routing process, are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, because its parameters are related to flow and channel characteristics through established relationships based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used in the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
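For readers unfamiliar with the conceptual model being calibrated, the sketch below routes an inflow hydrograph with the three-parameter nonlinear storage equation S = K[xI + (1-x)O]^m using simple Euler time stepping. It is a minimal illustration assuming calibrated K, x, m from elsewhere (e.g. GA/DE/PSO/HS) and a suitably small time step; it is not the VPMM method or the paper's solver.

```python
import numpy as np

def route_nlm(inflow, K, x, m, dt=1.0):
    """Route a hydrograph with the three-parameter nonlinear Muskingum
    storage equation S = K * (x*I + (1 - x)*O)**m, via Euler stepping.

    inflow: inflow hydrograph (m^3/s); dt: time step (h); K, x, m are
    assumed to come from a prior calibration such as GA/DE/PSO/HS.
    """
    inflow = np.asarray(inflow, dtype=float)
    S = K * inflow[0] ** m                  # steady start: O = I initially
    outflow = np.empty_like(inflow)
    for t, I in enumerate(inflow):
        # invert the storage equation for the outflow O
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        outflow[t] = max(O, 0.0)
        S += dt * (I - outflow[t])          # continuity: dS/dt = I - O
    return outflow
```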
A long-term validation of the modernised DC-ARC-OES solid-sample method.
Flórián, K; Hassler, J; Förster, O
2001-12-01
The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time-variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.
Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de
2017-06-08
To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines, with a focus on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and cross-cultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation of tests in Speech and Language Pathology was created.
ERIC Educational Resources Information Center
Hatcher, Tim; Colton, Sharon
2007-01-01
Purpose: The purpose of this article is to highlight the results of the online Delphi research project; in particular the procedures used to establish an online and innovative process of content validation and obtaining "rich" and descriptive information using the internet and current e-learning technologies. The online Delphi was proven to be an…
Effectiveness of internet-based affect induction procedures: A systematic review and meta-analysis.
Ferrer, Rebecca A; Grenen, Emily G; Taber, Jennifer M
2015-12-01
Procedures used to induce affect in a laboratory are effective and well-validated. Given recent methodological and technological advances in Internet research, it is important to determine whether affect can be effectively induced using Internet methodology. We conducted a meta-analysis and systematic review of prior research that has used Internet-based affect induction procedures, and examined potential moderators of the effectiveness of affect induction procedures. Twenty-six studies were included in final analyses, with 89 independent effect sizes. Affect induction procedures effectively induced general positive affect, general negative affect, fear, disgust, anger, sadness, and guilt, but did not significantly induce happiness. Contamination of other nontarget affect did not appear to be a major concern. Video inductions resulted in greater effect sizes. Overall, results indicate that affect can be effectively induced in Internet studies, suggesting an important venue for the acceleration of affective science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdarek, J.; Pecinka, L.
Leak-before-break (LBB) analysis of WWER type reactors in the Czech and Slovak Republics is summarized in this paper. Legislative bases, required procedures, and validation and verification of procedures are discussed. A list of significant issues identified during the application of LBB analysis is presented. The results of statistical evaluation of crack length characteristics are presented and compared for the WWER 440 Type 230 and 213 reactors and for the WWER 1000 Type 302, 320 and 338 reactors.
Development of a virtual reality training curriculum for phacoemulsification surgery.
Spiteri, A V; Aggarwal, R; Kersey, T L; Sira, M; Benjamin, L; Darzi, A W; Bloom, P A
2014-01-01
Training within a proficiency-based virtual reality (VR) curriculum may reduce errors during real surgical procedures. This study used a scientific methodology to develop a VR training curriculum for phacoemulsification surgery (PS). Ten novice (n; <10 cataract operations performed), 10 intermediate (i; 50-200), and 10 experienced (e; >500) surgeons were recruited. Construct validity was defined as the ability to differentiate between the three levels of experience, based on the simulator-derived metrics for two abstract modules (four tasks) and three procedural modules (five tasks) on a high-fidelity VR simulator. Proficiency measures were based on the performance of experienced surgeons. Abstract modules demonstrated a 'ceiling effect', with construct validity established between groups (n) and (i) but not between groups (i) and (e): Forceps 1 (46, 87, and 95; P<0.001). Increasing task difficulty showed significantly reduced performance in (n) but minimal difference for (i) and (e): Anti-tremor 4 (0, 51, and 59; P<0.001), Forceps 4 (11, 73, and 94; P<0.001). Procedural modules were found to be construct valid between groups (n) and (i) and between groups (i) and (e): Lens-cracking (0, 22, and 51; P<0.05) and Phaco-quadrants (16, 53, and 87; P<0.05). This was also the case with Capsulorhexis (0, 19, and 63; P<0.05), with performance decreasing in the (n) and (i) groups but improving in the (e) group (0, 55, and 73; P<0.05) and (0, 48, and 76; P<0.05) as task difficulty increased. Experienced/intermediate benchmark skill levels are defined, allowing the development of a proficiency-based VR training curriculum for PS for novices using a structured scientific methodology.
A Fourier-based textural feature extraction procedure
NASA Technical Reports Server (NTRS)
Stromberg, W. D.; Farr, T. G.
1986-01-01
A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
Dohrenbusch, R
2009-06-01
Chronic pain accompanied by disability and handicap is a frequent symptom necessitating medical assessment. Current guidelines for the assessment of malingering suggest discriminating between explanatory demonstration, aggravation and simulation. However, this distinction has not been clearly operationalized and validated. The necessity of assessment strategies based on general principles of psychological assessment and testing is emphasized. Standardized and normed psychological assessment methods and symptom validation techniques should be used in the assessment of subjects with chronic pain problems. An adaptive procedure for assessing the validity of complaints is suggested to minimize effort and costs.
ERIC Educational Resources Information Center
Lloyd, Blair P.; Wehby, Joseph H.; Weaver, Emily S.; Goldman, Samantha E.; Harvey, Michelle N.; Sherlock, Daniel R.
2015-01-01
Although functional analysis (FA) remains the standard for identifying the function of problem behavior for students with developmental disabilities, traditional FA procedures are typically costly in terms of time, resources, and perceived risks. Preliminary research suggests that trial-based FA may be a less costly alternative. The purpose of…
Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.
Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G
2012-04-01
The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box crossed diagonally by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases there was no statistically significant variation between the two procedures, so the LV procedure can be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT or MRI scanner.
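The core computation, locating the full width at half maximum of an intensity profile taken across the imaged septum, can be sketched as below. This is a generic sub-pixel FWHM routine under the assumption that the profile peaks away from the array edges; it is not the authors' LabView code, and any angle correction for the diagonal septum is omitted.

```python
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    """Full width at half maximum of a 1D intensity profile.

    Linear interpolation locates the two half-maximum crossings for
    sub-pixel accuracy; assumes the peak lies away from the edges.
    """
    profile = np.asarray(profile, dtype=float)
    base = profile.min()
    half = base + (profile.max() - base) / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = max(above[0], 1), min(above[-1], profile.size - 2)
    # interpolate the left and right half-maximum crossings
    left = i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1])
    right = i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1])
    return (right - left) * pixel_size_mm
```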
Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F
2015-07-01
Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs, and an objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons and video review of 10 expert and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise, followed by a consensus conference. We tested these performance metrics in 12 surgical residents (Years 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). The Wilcoxon matched-pairs signed-ranks test compared pretraining and posttraining performance. Mean time to complete the procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with an 18% improvement in anatomic knowledge. The TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficients of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced the time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.
ERIC Educational Resources Information Center
Smith, Karen; And Others
Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…
Quantitative metabolomics of the thermophilic methylotroph Bacillus methanolicus.
Carnicer, Marc; Vieira, Gilles; Brautaset, Trygve; Portais, Jean-Charles; Heux, Stephanie
2016-06-01
The gram-positive bacterium Bacillus methanolicus MGA3 is a promising candidate for methanol-based biotechnologies. Accurate determination of intracellular metabolites is crucial for engineering this bacterium into an efficient microbial cell factory. Because of the diversity of chemical and cell properties, an experimental protocol validated on B. methanolicus is needed. Here, a systematic evaluation of different techniques for establishing a reliable basis for metabolome investigations is presented. Metabolome analysis focused on metabolites closely linked to the central methanol metabolism of B. methanolicus. As an alternative to cold-solvent-based procedures, a solvent-free quenching strategy using stainless steel beads cooled to -20 °C was assessed. The precision, the consistency of the measurements, and the extent of metabolite leakage from quenched cells were evaluated in procedures with and without cell separation. The most accurate and reliable performance was provided by the method without cell separation, as significant metabolite leakage occurred in the procedures based on fast filtration. As a biological test case, the best protocol was used to assess the metabolome of B. methanolicus grown in chemostat on methanol at two different growth rates, and its validity was demonstrated. The presented protocol is a first and helpful step towards developing reliable metabolomics data for the thermophilic methylotroph B. methanolicus and will help in designing an efficient methylotrophic cell factory.
Safety validation test equipment operation
NASA Astrophysics Data System (ADS)
Kurosaki, Tadaaki; Watanabe, Takashi
1992-08-01
An overview of the activities conducted on safety validation test equipment operation for materials used in NASA manned missions is presented. Safety validation tests, such as flammability, odor, and offgassing tests, were conducted in accordance with NASA-NHB-8060.1C using test subjects in common with those used by NASA, and the equipment used was qualified for its functions and performance in accordance with NASDA-CR-99124, 'Safety Validation Test Qualification Procedures.' Test procedure systems were established by preparing 'Common Procedures for Safety Validation Test' as well as test procedures for flammability, offgassing, and odor tests. A test operation organization chaired by the General Manager of the Parts and Material Laboratory of NASDA (National Space Development Agency of Japan) was established, and the test leaders and operators in the organization were qualified in accordance with the specified procedures. One hundred and one tests have been conducted so far by the Parts and Material Laboratory at the request of manufacturers, submitted through the Space Station Group and the Safety and Product Assurance for Manned Systems Office.
ERIC Educational Resources Information Center
Marquardt, Lloyd D.; McCormick, Ernest J.
The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…
Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. The objective was to assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline. A survey on current immunohistochemistry assay validation practices and on the awareness and adoption of the recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing, and the results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent, and 79.9% (550 of 688) of those had already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty of finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline on validation practices had a positive impact on laboratory performance; some or all of the recommendations have been adopted by nearly 80% of respondents.
Cian, Francesco; Villiers, Elisabeth; Archer, Joy; Pitorri, Francesca; Freeman, Kathleen
2014-06-01
Quality control (QC) validation is an essential tool in the total quality management of a veterinary clinical pathology laboratory. Cost analysis can be a valuable technique for identifying an appropriate QC procedure for the laboratory, although this has never been reported in veterinary medicine. The aim of this study was to determine the applicability of the Six Sigma Quality Cost Worksheets to the evaluation of candidate QC rules identified by QC validation. Three months of internal QC records were analyzed. EZ Rules 3 software was used to evaluate candidate QC procedures, and the costs associated with the application of different QC rules were calculated using the Six Sigma Quality Cost Worksheets. The costs associated with the current and candidate QC rules were compared, and the cost savings were calculated. There was a significant saving when the candidate 1-2.5s, n = 3 rule was applied instead of the currently utilized 1-2s, n = 3 rule. The savings were 75% per year (£8232.5) based on re-evaluating all of the patient samples in addition to the controls, and 72% per year (£822.4) based on re-analyzing only the control materials. The savings were also shown to change according to the number of samples analyzed and the number of daily QC procedures performed. These calculations demonstrate the importance of selecting an appropriate QC procedure, and the usefulness of the Six Sigma Quality Cost Worksheets in determining the most cost-effective rule(s) when several candidate rules are identified by QC validation.
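To see why moving from a 1-2s to a 1-2.5s control rule saves money, the sketch below simulates each rule's false-rejection probability for an in-control process and multiplies it by an illustrative cost per rejected run. The control limits follow standard Westgard notation; the cost figure and run count are placeholders, not the study's worksheet values.

```python
import numpy as np

rng = np.random.default_rng(7)

def false_rejection_rate(limit_sd, n_controls=3, n_runs=100_000):
    """P(an in-control run is rejected) under a 1-<limit_sd>s rule with
    n_controls control measurements per run (Westgard notation)."""
    z = rng.standard_normal((n_runs, n_controls))
    return (np.abs(z) > limit_sd).any(axis=1).mean()

COST_PER_REJECTION = 25.0   # illustrative repeat-testing cost, not the study's
RUNS_PER_YEAR = 365
for rule, limit in [("1-2s", 2.0), ("1-2.5s", 2.5)]:
    p = false_rejection_rate(limit)
    cost = p * RUNS_PER_YEAR * COST_PER_REJECTION
    print(f"{rule}: P(false reject) = {p:.3f}, yearly cost ~ {cost:.0f}")
```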
Valavala, Sriram; Seelam, Nareshvarma; Tondepu, Subbaiah; Jagarlapudi, V Shanmukha Kumar; Sundarmurthy, Vivekanandan
2018-01-01
A simple, sensitive, accurate, robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol were used in the manufacturing process of the tartaric acid-based pellets of dipyridamole modified release capsules by considering the solubility of the dipyridamole and excipients in the different manufacturing stages. The method was developed and optimized by using fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with the flame ionization detector. The method validation was carried out with regard to the guidelines for validation of analytical procedures Q2 demanded by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All the validation characteristics were meeting the acceptance criteria. Hence, the developed and validated method can be applied for the intended routine analysis.
NASA Astrophysics Data System (ADS)
Ivankovic, D.; Dadic, V.
2009-04-01
Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation, and manipulation from the research vessel or scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces were developed using PL/SQL stored database procedures (mod_plsql). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graph are georeferenced: a click on some part of the graph automatically initiates a zoom or marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only its data definition and description, without programming an interface for this kind of data.
NASA Technical Reports Server (NTRS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Smirnov, Alexander
2013-01-01
In this study, aerosol optical depths over oceans are analyzed from satellite and surface perspectives. Multiangle Imaging SpectroRadiometer (MISR) aerosol retrievals are investigated and validated primarily against Maritime Aerosol Network (MAN) observations. Furthermore, AErosol RObotic NETwork (AERONET) data from 19 island and coastal sites are incorporated in this study. A total of 270 MISR-MAN comparison points scattered across all oceans were identified. MISR on average overestimates aerosol optical depths (AODs) by 0.04 compared with MAN; the correlation coefficient and root-mean-square error are 0.95 and 0.06, respectively. A new screening procedure based on retrieval region characterization is proposed, which is capable of substantially reducing MISR retrieval biases. Over 1000 additional MISR-AERONET comparison points are added to the analysis to confirm the validity of the method. The bias reduction is effective within all AOD ranges. Setting a clear flag fraction threshold of 0.6 reduces the bias to below 0.02, which is close to a typical ground-based measurement uncertainty. Twelve years of MISR data are analyzed with the new screening procedure. The average over-ocean AOD is reduced by 0.03, from 0.15 to 0.12. The largest AOD decrease is observed at high latitudes of both hemispheres, regions with climatologically high cloud cover. It is postulated that the screening procedure eliminates spurious retrieval errors associated with cloud contamination and cloud adjacency effects. The proposed filtering method can be used for validating aerosol and chemical transport models.
A continuous dual-process model of remember/know judgments.
Wixted, John T; Mickes, Laura
2010-10-01
The dual-process theory of recognition memory holds that recognition decisions can be based on recollection or familiarity, and the remember/know procedure is widely used to investigate those two processes. Dual-process theory in general, and the remember/know procedure in particular, have been challenged by an alternative strength-based interpretation grounded in signal-detection theory, which holds that remember judgments simply reflect stronger memories than know judgments do. Although supported by a considerable body of research, the signal-detection account is difficult to reconcile with G. Mandler's (1980) classic "butcher-on-the-bus" phenomenon (i.e., strong, familiarity-based recognition). In this article, a new signal-detection model is proposed that denies neither the validity of dual-process theory nor the possibility that remember/know judgments can, when used in the right way, help to distinguish memories that are largely recollection based from those that are largely familiarity based. It does, however, agree with all prior signal-detection-based critiques of the remember/know procedure, which hold that, as ordinarily used, the procedure mainly distinguishes strong memories from weak memories (not recollection from familiarity).
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure
NASA Technical Reports Server (NTRS)
Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.
1992-01-01
Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with the classical Hertz solution for two contacting semicylinders. Agreement is excellent.
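The Hertz benchmark used for that validation has a simple closed form. The sketch below computes the half contact width and peak pressure for a line contact between two parallel cylinders; the load and geometry in the example are illustrative stand-ins for gear-tooth curvatures at the pitch point, not values from the report.

```python
import math

def hertz_cylinders(w, R1, R2, E1, E2, nu1=0.3, nu2=0.3):
    """Hertzian line contact between two parallel cylinders.

    w: load per unit length (N/m); R1, R2: radii (m); E1, E2: Young's
    moduli (Pa). Returns half contact width b (m) and peak contact
    pressure p_max (Pa), the classical benchmark for FE contact codes.
    """
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    R = 1.0 / (1.0 / R1 + 1.0 / R2)                          # effective radius
    b = math.sqrt(4.0 * w * R / (math.pi * E_star))
    p_max = math.sqrt(w * E_star / (math.pi * R))
    return b, p_max

# steel rollers standing in for tooth curvature at the pitch point (assumed values)
b, p = hertz_cylinders(w=1e5, R1=0.02, R2=0.03, E1=210e9, E2=210e9)
print(f"b = {b*1e6:.0f} um, p_max = {p/1e6:.0f} MPa")
```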
Cannon, W Dilworth; Nicandri, Gregg T; Reinig, Karl; Mevis, Howard; Wittstein, Jocelyn
2014-04-02
Several virtual reality simulators have been developed to assist orthopaedic surgeons in acquiring the skills necessary to perform arthroscopic surgery. The purpose of this study was to assess the construct validity of the ArthroSim virtual reality arthroscopy simulator by evaluating whether skills acquired through increased experience in the operating room lead to improved performance on the simulator. Using the simulator, six postgraduate year-1 orthopaedic residents were compared with six postgraduate year-5 residents and with six community-based orthopaedic surgeons when performing diagnostic arthroscopy. The time to perform the procedure was recorded. To ensure that subjects did not sacrifice the quality of the procedure to complete the task in a shorter time, the simulator was programmed to provide a completeness score that indicated whether the surgeon accurately performed all of the steps of diagnostic arthroscopy in the correct sequence. The mean time to perform the procedure by each group was 610 seconds for community-based orthopaedic surgeons, 745 seconds for postgraduate year-5 residents, and 1028 seconds for postgraduate year-1 residents. Both the postgraduate year-5 residents and the community-based orthopaedic surgeons performed the procedure in significantly less time (p = 0.006) than the postgraduate year-1 residents. There was a trend toward significance (p = 0.055) in time to complete the procedure when the postgraduate year-5 residents were compared with the community-based orthopaedic surgeons. The mean level of completeness as assigned by the simulator for each group was 85% for the community-based orthopaedic surgeons, 79% for the postgraduate year-5 residents, and 71% for the postgraduate year-1 residents. As expected, these differences were not significant, indicating that the three groups had achieved an acceptable level of consistency in their performance of the procedure. Higher levels of surgeon experience resulted in improved efficiency when performing diagnostic knee arthroscopy on the simulator. Further validation studies utilizing the simulator are currently under way and the additional simulated tasks of arthroscopic meniscectomy, meniscal repair, microfracture, and loose body removal are being developed.
Cumulative impact of developments on the surrounding roadways' traffic.
DOT National Transportation Integrated Search
2011-10-01
"In order to recommend a procedure for cumulative impact study, four different travel : demand models were developed, calibrated, and validated. The base year for the models was 2005. : Two study areas were used, and the models were run for three per...
Feasibility and validity of International Classification of Diseases based case mix indices.
Yang, Che-Ming; Reinke, William
2006-10-06
Severity of illness is an omnipresent confounder in health services research, and resource consumption can be applied as a proxy for severity. The most commonly cited hospital resource consumption measure is the case mix index (CMI), and the best-known illustration of the CMI is the Diagnosis Related Group (DRG) CMI used by Medicare in the U.S. For countries that do not have DRG-type CMIs, adjusting for severity has been troublesome for both reimbursement and research purposes. The research objective of this study is to ascertain the construct validity of CMIs derived from the International Classification of Diseases (ICD) in comparison with the DRG CMI. The study population included 551 acute care hospitals in Taiwan and 2,462,006 inpatient reimbursement claims. The 18th version of GROUPER, the Medicare DRG classification software, was applied to Taiwan's 1998 National Health Insurance (NHI) inpatient claim data to derive the Medicare DRG CMI. The same weighting principles were then applied to derive costliness and length of stay (LOS) CMIs based on ICD principal diagnoses and procedures. Further analyses were conducted based on stratifications according to teaching status, accreditation level, and ownership category. The best ICD-based substitute for the DRG costliness CMI (DRGCMI) is the ICD principal diagnosis costliness CMI (ICDCMI-DC), in general and in most categories, with Spearman's correlation coefficients ranging from 0.462 to 0.938. The highest correlation appeared in the non-profit sector. The ICD procedure costliness CMI (ICDCMI-PC) outperformed the ICDCMI-DC only at the medical center level, which consists of tertiary care hospitals and is more procedure intensive. The results of our study indicate that an ICD-based CMI, especially the ICDCMI-DC, can quite fairly approximate the DRGCMI. Therefore, substituting ICDs for DRGs in computing the CMI ought to be feasible and valid in countries that have not implemented DRGs.
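A minimal sketch of the weighting principle as described: each ICD principal-diagnosis group receives a weight equal to its mean cost divided by the overall mean cost, and a hospital's costliness CMI is the mean weight of its claims. The pandas column names and toy figures are illustrative assumptions, not the study's data.

```python
import pandas as pd

def costliness_cmi(claims, group_col="icd_principal_dx"):
    """Hospital costliness CMI: weight each ICD group by its mean cost
    relative to the overall mean cost, then average the weights of each
    hospital's claims (the DRG weighting principle applied to ICD groups)."""
    weights = claims.groupby(group_col)["cost"].mean() / claims["cost"].mean()
    return (claims.assign(w=claims[group_col].map(weights))
                  .groupby("hospital")["w"].mean())

claims = pd.DataFrame({
    "hospital": ["A", "A", "B", "B"],
    "icd_principal_dx": ["I21", "J18", "I21", "Z38"],
    "cost": [9000, 4000, 11000, 1500],
})
print(costliness_cmi(claims))   # hospital A ~ 1.10, hospital B ~ 0.90
```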
Coronary heart disease index based on longitudinal electrocardiography
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Cronin, J. P.
1977-01-01
A coronary heart disease (CHD) index was developed from longitudinal ECG (LCG) tracings to serve as a cardiac health measure in studies of working, essentially asymptomatic populations, such as pilots and executives. For a given subject, the index consisted of a composite score based on the presence of LCG aberrations and weighted values previously assigned to them. The index was validated by correlating it with the known presence or absence of CHD as determined by a complete physical examination, including treadmill, resting ECG, and risk factor information. The validating sample consisted of 111 subjects drawn by a stratified-random procedure from 5000 available case histories. The CHD index was found to be significantly more valid as a sole indicator of CHD than the LCG without the use of the index. The index consistently produced higher validity coefficients in identifying CHD than did treadmill testing, resting ECG, or risk factor analysis.
Cheng, Keding; Sloan, Angela; McCorrister, Stuart; Peterson, Lorea; Chui, Huixia; Drebot, Mike; Nadon, Celine; Knox, J David; Wang, Gehua
2014-12-01
The need for rapid and accurate H typing is evident during Escherichia coli outbreak situations. This study explores the transition of MS-H, a method originally developed for rapid H antigen typing of E. coli using LC-MS/MS of flagella digests of reference strains and some clinical strains, to E. coli isolates in a clinical scenario through quantitative analysis and method validation. Motile and nonmotile strains were examined in batches to simulate a clinical sample scenario. Various LC-MS/MS batch run procedures and MS-H typing rules were compared and summarized through quantitative analysis of the MS-H data output to develop a standard method. Label-free quantitative analysis of MS-H typing data proved very useful for examining the quality of MS-H results and the effects of sample carryover from motile E. coli isolates. Based on this, a refined procedure and a protein identification rule specific for clinical MS-H typing were established and validated. With an LC-MS/MS batch run procedure and database search parameters unique to E. coli MS-H typing, the standard procedure maintained high accuracy and specificity in clinical situations, and its potential for use in a clinical setting was clearly established.
Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel
2013-12-15
In this work, a protocol is developed to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, covering both drug entrapped in the polymeric matrix (assay: content test) and drug released from the systems (assay: dissolution test). This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. The preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated in the range from 10 to 50 μg mL(-1) for the assay: content test procedure and from 0.25 to 10 μg mL(-1) for the assay: dissolution test procedure. The robustness of the analytical method for extracting drug from the microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, although the lack of proportionality of the assay: dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled-release polymeric microparticles.
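Two of the validation characteristics named here, linearity and accuracy, reduce to simple calculations on calibration and spiked-sample data. The sketch below fits a least-squares calibration line, reports R², and computes percent recovery for spiked samples back-calculated from that line; it is a generic illustration with placeholder data, not the authors' protocol.

```python
import numpy as np

def linearity_and_recovery(conc, resp, spiked_conc, spiked_resp):
    """Least-squares calibration (slope, intercept, R^2) plus % recovery
    of spiked samples back-calculated from the calibration line."""
    conc, resp = np.asarray(conc, float), np.asarray(resp, float)
    slope, intercept = np.polyfit(conc, resp, 1)
    pred = slope * conc + intercept
    r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)
    found = (np.asarray(spiked_resp, float) - intercept) / slope
    recovery = 100.0 * found / np.asarray(spiked_conc, float)
    return slope, intercept, r2, recovery

# placeholder calibration (ug/mL vs. detector response) and spiked samples
slope, intercept, r2, rec = linearity_and_recovery(
    conc=[10, 20, 30, 40, 50], resp=[102, 198, 305, 401, 498],
    spiked_conc=[15, 35], spiked_resp=[152, 348])
print(f"R^2 = {r2:.4f}, recoveries = {np.round(rec, 1)} %")
```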
Pugh, Carla M; Arafat, Fahd O; Kwan, Calvin; Cohen, Elaine R; Kurashima, Yo; Vassiliou, Melina C; Fried, Gerald M
2015-10-01
The aim of our study was to modify our previously developed laparoscopic ventral hernia (LVH) simulator to increase difficulty and then reassess validity and feasibility for using the simulator in a newly developed simulation-based continuing medical education course. Participants (N = 30) were practicing surgeons who signed up for a hands-on postgraduate laparoscopic hernia course. An LVH simulator, with prior validity evidence, was modified for the course to increase difficulty. Participants completed 1 of the 3 variations in hernia anatomy: incarcerated omentum, incarcerated bowel, and diffuse adhesions. During the procedure, course faculty and peer observers rated surgeon performance using Global Operative Assessment of Laparoscopic Skills-Incisional Hernia and Global Operative Assessment of Laparoscopic Skills rating scales with prior validity evidence. Rating scale reliability was reassessed for internal consistency. Peer and faculty raters' scores were compared. In addition, quality and completeness of the hernia repairs were rated. Internal consistency on the general skills performance (peer α = .96, faculty α = .94) and procedure-specific performance (peer α = .91, faculty α = .88) scores were high. Peers were more lenient than faculty raters on all LVH items in both the procedure-specific skills and general skills ratings. Overall, participants scored poorly on the quality and completeness of their hernia repairs (mean = 3.90/16, standard deviation = 2.72), suggesting a mismatch between course attendees and hernia difficulty and identifying a learning need. Simulation-based continuing medical education courses provide hands-on experiences that can positively affect clinical practice. Although our data appear to show a significant mismatch between clinical skill and simulator difficulty, these findings also underscore significant learning needs in the surgical community.
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. This report describes the development of validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given.
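Since the POI is simply a binomial proportion, a first-pass analysis of a validation run needs only the observed proportion and an interval estimate. The sketch below uses a Wilson score interval via statsmodels as one reasonable choice; the report's exact statistical procedures may differ in detail.

```python
from statsmodels.stats.proportion import proportion_confint

def poi(identified, n_replicates, alpha=0.05):
    """Probability of identification (POI) with a Wilson score interval.

    identified: number of replicates returning 1 = Identified.
    """
    p = identified / n_replicates
    lo, hi = proportion_confint(identified, n_replicates,
                                alpha=alpha, method="wilson")
    return p, (lo, hi)

print(poi(18, 20))   # e.g. 18 of 20 target-material replicates identified
```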
Progress toward the determination of correct classification rates in fire debris analysis.
Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E
2013-07-01
Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present to the classes defined under the American Society for Testing and Materials (ASTM) E1618-10 standard method. The multistep classification procedure was tested by cross-validation based on model data sets comprising the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and of pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples.
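A minimal scikit-learn sketch of the classification core: standardize total ion spectra, reduce with PCA, and compare cross-validated correct classification rates of LDA and QDA. It illustrates one step of a multistep procedure under assumed array shapes (rows = samples, columns = m/z bins); the paper's full pipeline, including the ignitable-liquid-versus-substrate decision, is not reproduced.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

def classification_rates(X, y, n_components=10, cv=5):
    """Cross-validated correct classification rates for PCA+LDA and
    PCA+QDA on total ion spectra (rows = samples, columns = m/z bins)."""
    rates = {}
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("QDA", QuadraticDiscriminantAnalysis())]:
        pipe = make_pipeline(StandardScaler(), PCA(n_components=n_components), clf)
        rates[name] = cross_val_score(pipe, X, y, cv=cv).mean()
    return rates
```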
Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen
2010-08-01
An important problem in intensive care is how to predict, on a given day of stay, the eventual hospital mortality of a specific patient. A recent approach to this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating its prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns, such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior to a traditional method that does not use temporal sequences when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression on pre-specified covariates is known in the statistical literature; using inductive methods of prognostic models based on temporal sequence discovery within the bootstrap procedure is, however, novel, at least for predictive models in intensive care. Our results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition, we illustrate the insights gained by the analyst into the discovered FTSs from the bootstrap samples. Copyright 2010 Elsevier Inc. All rights reserved.
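For readers unfamiliar with the .632 bootstrap, a minimal sketch follows. It assumes a generic scikit-learn-style estimator and numpy arrays, and uses the textbook weighting of apparent (resubstitution) accuracy and out-of-bag accuracy, rather than reproducing the paper's exact implementation.

```python
# Sketch: the .632 bootstrap estimate of a classifier's accuracy.
# 0.368 * resubstitution accuracy + 0.632 * out-of-bag accuracy.
import numpy as np

def bootstrap_632(estimator, X, y, n_boot=100, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(y)
    acc_resub = np.mean(estimator.fit(X, y).predict(X) == y)  # apparent accuracy
    oob_accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap sample, with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # cases left out of this sample
        if oob.size == 0:
            continue
        est = estimator.fit(X[idx], y[idx])
        oob_accs.append(np.mean(est.predict(X[oob]) == y[oob]))
    return 0.368 * acc_resub + 0.632 * np.mean(oob_accs)
```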
Sattler, J M
1979-05-01
Hardy, Welcher, Mellitis, and Kagan altered standard WISC administrative and scoring procedures and, from the resulting higher subtest scores, concluded that IQs based on standardized tests are inappropriate measures for inner-city children. Careful examination of their study reveals many methodological inadequacies and problematic interpretations. Three of these are as follows: (a) failure to use any external criterion to evaluate the validity of their testing-of-limits procedures; (b) the possibility of examiner and investigator bias; and (c) lack of any comparison group that might demonstrate that poor children would be helped more than others by the probes recommended. Their report creates misleading doubts about existing intelligence tests and does a disservice to inner-city children who need the benefits of the judicious use of diagnostic procedures, which include standardized intelligence tests. Consequently, their assertion concerning the inappropriateness of standardized test results for inner-city children is not only premature and misleading, but it is unwarranted as well.
Performance Evaluation of a Data Validation System
NASA Technical Reports Server (NTRS)
Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.
2005-01-01
Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.
Assessment and certification of neonatal incubator sensors through an inferential neural network.
de Araújo, José Medeiros; de Menezes, José Maria Pires; Moura de Albuquerque, Alberto Alexandre; da Mota Almeida, Otacílio; Ugulino de Araújo, Fábio Meneghetti
2013-11-15
Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensors systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary; and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.
Assessment and Certification of Neonatal Incubator Sensors through an Inferential Neural Network
de Araújo Júnior, José Medeiros; de Menezes Júnior, José Maria Pires; de Albuquerque, Alberto Alexandre Moura; Almeida, Otacílio da Mota; de Araújo, Fábio Meneghetti Ugulino
2013-01-01
Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensors systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary; and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal. PMID:24248278
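To make the inferential idea concrete, here is a minimal sketch (not the authors' implementation): a small neural network learns to infer one sensor reading from the sensors already installed in the incubator, and a divergence between inferred and measured values flags a calibration need. The data, tolerance, and network architecture are illustrative assumptions.

```python
# Sketch: inferential neural network for sensor cross-checking.
# Train on healthy data; large residuals later suggest recalibration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
installed = rng.normal(36.5, 0.3, size=(500, 4))     # readings of installed sensors
target = installed @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 0.05, 500)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(installed[:400], target[:400])               # train on reference data

inferred = net.predict(installed[400:])
residual = np.abs(inferred - target[400:])
needs_calibration = residual.mean() > 0.2            # tolerance is illustrative
print(f"mean residual={residual.mean():.3f}, recalibrate={needs_calibration}")
```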
Cell-free measurements of brightness of fluorescently labeled antibodies
Zhou, Haiying; Tourkakis, George; Shi, Dennis; Kim, David M.; Zhang, Hairong; Du, Tommy; Eades, William C.; Berezin, Mikhail Y.
2017-01-01
Validation of imaging contrast agents, such as fluorescently labeled imaging antibodies, has been recognized as a critical challenge in clinical and preclinical studies. As the number of applications for imaging antibodies grows, these materials are increasingly being subjected to careful scrutiny. Antibody fluorescent brightness is one of the key parameters that is of critical importance. Direct measurements of the brightness with common spectroscopy methods are challenging, because the fluorescent properties of the imaging antibodies are highly sensitive to the methods of conjugation, degree of labeling, and contamination with free dyes. Traditional methods rely on cell-based assays that lack reproducibility and accuracy. In this manuscript, we present a novel and general approach for measuring the brightness using antibody-avid polystyrene beads and flow cytometry. As compared to a cell-based method, the described technique is rapid, quantitative, and highly reproducible. The proposed method requires less than ten micrograms of sample and is applicable for optimizing synthetic conjugation procedures, testing commercial imaging antibodies, and performing high-throughput validation of conjugation procedures. PMID:28150730
Schellenberg, François; Wielders, Jos; Anton, Raymond; Bianchi, Vincenza; Deenmamode, Jean; Weykamp, Cas; Whitfield, John; Jeppsson, Jan-Olof; Helander, Anders
2017-02-01
Carbohydrate-deficient transferrin (CDT) is used as a biomarker of sustained high alcohol consumption. The currently available measurement procedures for CDT are based on various analytical techniques (HPLC, capillary electrophoresis, nephelometry), some differing in the definition of the analyte and using different reference intervals and cut-off values. The Working Group on Standardization of CDT (WG-CDT), initiated by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), has validated an HPLC candidate reference measurement procedure (cRMP) for CDT (% disialotransferrin to total transferrin based on peak areas), demonstrating that it is suitable as a reference measurement procedure (RMP) for CDT. Presented is a detailed description of the cRMP and its calibration. Practical aspects on how to treat genetic variant and so-called di-tri bridge samples are described. Results of method performance characteristics, as demanded by ISO 15189 and ISO 15193, are given, as well as the reference interval and measurement uncertainty and how to deal with these in routine use. The correlation of the cRMP with commercial CDT procedures and the performance of the cRMP in a network of laboratories are also presented. The performance of the CDT cRMP in combination with previously developed commutable calibrators allows for standardization of the currently available commercial measurement procedures for CDT. The cRMP has recently been approved by the IFCC and will from now on be known as the IFCC-RMP for CDT, while CDT results standardized according to this RMP should be indicated as CDT(IFCC). Copyright © 2016 Elsevier B.V. All rights reserved.
The coordinate coherent states approach revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn
2013-02-15
We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we argue against this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. Highlights: Suggest a canonical quantization in the coordinate coherent states approach. Prove the validity of the concept of point particles. Apply the canonical quantization to the Unruh effect and Hawking radiation. Find no deformations in the Unruh temperature and Hawking temperature. Provide the modified spectra of the Unruh effect and Hawking radiation.
Technical Approach: A technology review on the status of MBTs was performed at the beginning of the project to determine MBT use in other industries. The review focused on project goals and activities, which included: 1) Comparing qPCR to non-PCR-based enumeration methods to valid...
De Kesel, Pieter M M; Lambert, Willy E; Stove, Christophe P
2015-11-01
Caffeine is the probe drug of choice to assess the phenotype of the drug metabolizing enzyme CYP1A2. Typically, molar concentration ratios of paraxanthine, caffeine's major metabolite, to its precursor are determined in plasma following administration of a caffeine test dose. The aim of this study was to develop and validate an LC-MS/MS method for the determination of caffeine and paraxanthine in hair. The different steps of a hair extraction procedure were thoroughly optimized. Following a three-step decontamination procedure, caffeine and paraxanthine were extracted from 20 mg of ground hair using a solution of protease type VIII in Tris buffer (pH 7.5). Resulting hair extracts were cleaned up on Strata-X™ SPE cartridges. All samples were analyzed on a Waters Acquity UPLC® system coupled to an AB SCIEX API 4000™ triple quadrupole mass spectrometer. The final method was fully validated based on international guidelines. Linear calibration lines for caffeine and paraxanthine ranged from 20 to 500 pg/mg. Precision (%RSD) and accuracy (%bias) were below 12% and 7%, respectively. The isotopically labeled internal standards compensated for the ion suppression observed for both compounds. Relative matrix effects were below 15%RSD. The recovery of the sample preparation procedure was high (>85%) and reproducible. Caffeine and paraxanthine were stable in hair for at least 644 days. The effect of the hair decontamination procedure was evaluated as well. Finally, the applicability of the developed procedure was demonstrated by determining caffeine and paraxanthine concentrations in hair samples of ten healthy volunteers. The optimized and validated method for determination of caffeine and paraxanthine in hair proved to be reliable and may serve to evaluate the potential of hair analysis for CYP1A2 phenotyping. Copyright © 2015 Elsevier B.V. All rights reserved.
Validation project. This report describes the procedure used to generate the noise models' output dataset, and then it compares that dataset to the...benchmark, the Engineer Research and Development Center's Long-Range Sound Propagation dataset. It was found that the models consistently underpredict the
Kalwitzki, T; Huter, K; Runte, R; Breuninger, K; Janatzek, S; Gronemeyer, S; Gansweid, B; Rothgang, H
2017-03-01
Introduction: In the broad-based consortium project "Reha XI - Identifying rehabilitative requirements in medical service assessments: evaluation and implementation", a comprehensive analysis of the corresponding procedures was carried out by the medical services of the German Health Insurance Funds (MDK). On the basis of this analysis, a Good Practice Standard (GPS) for assessments was drawn up and scientifically evaluated. This article discusses the findings and applicability of the GPS as the basis for a nationwide standardized procedure in Germany as required by the Second Act to Strengthen Long-Term Care (PSG II) under Vol. XI Para. 18 (6) of the German Social Welfare Code. Method: The consortium project comprised four project phases: 1. Qualitative and quantitative situation analysis of the procedures for ascertaining rehabilitative needs in care assessments carried out by the MDK; 2. Development of a Good Practice Standard (GPS) in a structured, consensus-based procedure; 3. Scientific evaluation of the validity, reliability and practicability of the assessment procedure according to the GPS in the MDK's operational practice; 4. Survey of long-term care insurance funds with respect to the appropriateness of the rehabilitation recommendations drawn up by care assessors in line with the GPS for providing a qualified recommendation for the applicant. The evaluation carried out in the third project phase was subject to methodological limitations that may have given rise to distortions in the findings. Findings: On the basis of the situation analysis, 7 major thematic areas were identified in which improvements were implemented by applying the GPS. For the evaluation of the GPS, a total of 3 247 applicants were assessed in line with the GPS; in 6.3% of the applicants, an indication for medical rehabilitation was determined. The GPS procedure showed a high degree of reliability and practicability, but the values for the validity of the assessment procedure were highly unsatisfactory. The degree of acceptance by the long-term care insurance funds with respect to the recommendations for rehabilitation following the GPS procedure was high. Conclusion: The application of a general standard across all MDKs shows marked improvements in the quality of the assessment procedure and leads more frequently to the ascertainment of an indication for medical rehabilitation. The methodological problems and the unsatisfactory findings with respect to the validity of the assessors' decisions require further scientific scrutiny. © Georg Thieme Verlag KG Stuttgart · New York.
Symbolic dynamic filtering and language measure for behavior identification of mobile robots.
Mallapragada, Goutham; Ray, Asok; Jin, Xin
2012-06-01
This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.
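As a minimal sketch of what symbolic dynamic filtering involves, the Python function below partitions a time series into a small symbol alphabet, estimates the symbol-to-symbol transition matrix, and flattens it into a feature vector for downstream pattern classification. The quantile-based (maximum-entropy) partition and alphabet size are assumptions for illustration; the paper's partitioning may differ.

```python
# Sketch: symbolic dynamic filtering feature extraction.
import numpy as np

def sdf_features(series: np.ndarray, n_symbols: int = 4) -> np.ndarray:
    # Quantile-based bins give roughly equal symbol occupancy.
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(series, edges)             # symbol sequence, 0..n-1
    # Estimate the Markov transition matrix from consecutive symbol pairs.
    P = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        P[a, b] += 1
    P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)
    return P.flatten()                               # behavior feature vector
```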
2010-03-01
Reliable spiking of the airstream with metals proved to be a challenge. Based on reference method results, it is unclear whether delivery of the...etc. that challenge the pollutant analyzer part of the CEMS (and as much of the whole system as possible), but which do not challenge the entire...calibration, when developed, will be acceptable as a procedure for determining RA. Such a procedure will involve challenging the entire CEMS, including the
The anatomy of floating shock fitting. [shock waves computation for flow field
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
Kritikos, Nikolaos; Tsantili-Kakoulidou, Anna; Loukas, Yannis L; Dotsikas, Yannis
2015-07-17
In the current study, quantitative structure-retention relationships (QSRR) were constructed based on data obtained by a LC-(ESI)-QTOF-MS/MS method for the determination of amino acid analogues, following their derivatization via chloroformate esters. Molecules were derivatized via n-propyl chloroformate/n-propanol mediated reaction. Derivatives were acquired through a liquid-liquid extraction procedure. Chromatographic separation is based on gradient elution using methanol/water mixtures from a 70/30% composition to an 85/15% final one, maintaining a constant rate of change. The group of examined molecules was diverse, including mainly α-amino acids, yet also β- and γ-amino acids, γ-amino acid analogues, decarboxylated and phosphorylated analogues and dipeptides. The projection to latent structures (PLS) method was selected for the formation of QSRRs, resulting in a total of three PLS models with high cross-validated coefficients of determination Q²Y. For this purpose, molecular structures were first described through the use of descriptors. Through stratified random sampling procedures, 57 compounds were split into a training set and a test set. Model creation was based on multiple criteria including principal component significance and eigenvalue, variable importance, form of residuals, etc. Validation was based on the statistical metrics R²pred, Q²ext(F2) and Q²ext(F3) for the test set and Roy's metrics r²m(Av) and r²m(δ), assessing both predictive stability and internal validity. Based on the aforementioned models, simplified equivalents were then created using a multi-linear regression (MLR) method. The MLR models were also validated with the same metrics. The suggested models are considered useful for the estimation of retention times of amino acid analogues for a series of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
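For orientation, a compact sketch of the Q²Y computation used in such QSRR modeling is shown below, with synthetic descriptors and retention times standing in for the real data; it uses scikit-learn's PLSRegression and is illustrative rather than a reproduction of the reported models.

```python
# Sketch: PLS model mapping descriptors to retention times, scored by
# the cross-validated coefficient of determination Q2Y.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(57, 30))                        # molecular descriptors
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 57)    # synthetic retention times

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=7).ravel()    # cross-validated predictions
q2y = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Q2Y = {q2y:.3f}")
```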
Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator
Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu
2011-01-01
Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects performed the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored, according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. Mean scores for utility of the simulator, as addressed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure. PMID:20734069
Validation of a novel laparoscopic adjustable gastric band simulator.
Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu
2011-04-01
Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). Mean score for utility of the simulator, as addressed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.
A Turkish Version of the Critical-Care Pain Observation Tool: Reliability and Validity Assessment.
Aktaş, Yeşim Yaman; Karabulut, Neziha
2017-08-01
The study aim was to evaluate the validity and reliability of the Critical-Care Pain Observation Tool in critically ill patients. A repeated measures design was used for the study. A convenience sample of 66 patients who had undergone open-heart surgery in the cardiovascular surgery intensive care unit in Ordu, Turkey, was recruited for the study. The patients were evaluated by using the Critical-Care Pain Observation Tool at rest, during a nociceptive procedure (suctioning), and 20 minutes after the procedure while they were conscious and intubated after surgery. The Turkish version of the Critical-Care Pain Observation Tool has shown statistically acceptable levels of validity and reliability. Inter-rater reliability was supported by moderate-to-high-weighted κ coefficients (weighted κ coefficient = 0.55 to 1.00). For concurrent validity, significant associations were found between the scores on the Critical-Care Pain Observation Tool and the Behavioral Pain Scale scores. Discriminant validity was also supported by higher scores during suctioning (a nociceptive procedure) versus non-nociceptive procedures. The internal consistency of the Critical-Care Pain Observation Tool was 0.72 during a nociceptive procedure and 0.71 during a non-nociceptive procedure. The validity and reliability of the Turkish version of the Critical-Care Pain Observation Tool was determined to be acceptable for pain assessment in critical care, especially for patients who cannot communicate verbally. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
Calderwood, Michael S; Huang, Susan S; Keller, Vicki; Bruce, Christina B; Kazerouni, N Neely; Janssen, Lynn
2017-09-01
OBJECTIVE To assess hospital surgical-site infection (SSI) identification and reporting following colon surgery and abdominal hysterectomy via a statewide external validation. METHODS Infection preventionists (IPs) from the California Department of Public Health (CDPH) performed on-site SSI validation for surgical procedures performed in hospitals that voluntarily participated. Validation involved chart review of SSI cases previously reported by hospitals plus review of patient records flagged for review by claims codes suggestive of SSI. We assessed the sensitivity of traditional surveillance and the added benefit of claims-based surveillance. We also evaluated the positive predictive value of claims-based surveillance (ie, workload efficiency). RESULTS Upon validation review, CDPH IPs identified 239 SSIs following colon surgery at 42 hospitals and 76 SSIs following abdominal hysterectomy at 34 hospitals. For colon surgery, traditional surveillance had a sensitivity of 50% (47% for deep incisional or organ/space [DI/OS] SSI), compared to 84% (88% for DI/OS SSI) for claims-based surveillance. For abdominal hysterectomy, traditional surveillance had a sensitivity of 68% (67% for DI/OS SSI) compared to 74% (78% for DI/OS SSI) for claims-based surveillance. Claims-based surveillance was also efficient, with 1 SSI identified for every 2 patients flagged for review who had undergone abdominal hysterectomy and for every 2.6 patients flagged for review who had undergone colon surgery. Overall, CDPH identified previously unreported SSIs in 74% of validation hospitals performing colon surgery and 35% of validation hospitals performing abdominal hysterectomy. CONCLUSIONS Claims-based surveillance is a standardized approach that hospitals can use to augment traditional surveillance methods and health departments can use for external validation. Infect Control Hosp Epidemiol 2017;38:1091-1097.
Beard, J D; Marriott, J; Purdie, H; Crossley, J
2011-01-01
To compare user satisfaction and acceptability, reliability and validity of three different methods of assessing the surgical skills of trainees by direct observation in the operating theatre across a range of different surgical specialties and index procedures. A 2-year prospective, observational study in the operating theatres of three teaching hospitals in Sheffield. The assessment methods were procedure-based assessment (PBA), Objective Structured Assessment of Technical Skills (OSATS) and Non-technical Skills for Surgeons (NOTSS). The specialties were obstetrics and gynaecology (O&G) and upper gastrointestinal, colorectal, cardiac, vascular and orthopaedic surgery. Two to four typical index procedures were selected from each specialty. Surgical trainees were directly observed performing typical index procedures and assessed using a combination of two of the three methods (OSATS or PBA and NOTSS for O&G, PBA and NOTSS for the other specialties) by the consultant clinical supervisor for the case and the anaesthetist and/or scrub nurse, as well as one or more independent assessors from the research team. Information on user satisfaction and acceptability of each assessment method from both assessor and trainee perspectives was obtained from structured questionnaires. The reliability of each method was measured using generalisability theory. Aspects of validity included the internal structure of each tool and correlation between tools, construct validity, predictive validity, interprocedural differences, the effect of assessor designation and the effect of assessment on performance. Of the 558 patients who were consented, a total of 437 (78%) cases were included in the study: 51 consultant clinical supervisors, 56 anaesthetists, 39 nurses, 2 surgical care practitioners and 4 independent assessors provided 1635 assessments on 85 trainees undertaking the 437 cases. A total of 749 PBAs, 695 NOTSS and 191 OSATSs were performed. Non-O&G clinical supervisors and trainees provided mixed, but predominantly positive, responses about a range of applications of PBA. Most felt that PBA was important in surgical education, and would use it again in the future and did not feel that it added time to the operating list. The overall satisfaction of O&G clinical supervisors and trainees with OSATS was not as high, and a majority of those who used both preferred PBA. A majority of anaesthetists and nurses felt that NOTSS allowed them to rate interpersonal skills (communication, teamwork and leadership) more easily than cognitive skills (situation awareness and decision-making), that it had formative value and that it was a valuable adjunct to the assessment of technical skills. PBA demonstrated high reliability (G > 0.8 for only three assessor judgements on the same index procedure). OSATS had lower reliability (G > 0.8 for five assessor judgements on the same index procedure). Both were less reliable on a mix of procedures because of strong procedure-specific factors. A direct comparison of PBA between O&G and non-O&G cases showed a striking difference in reliability. Within O&G, a good level of reliability (G > 0.8) could not be obtained using a feasible number of assessments. Conversely, the reliability within non-O&G cases was exceptionally high, with only two assessor judgements being required. The reasons for this difference probably include the more summative purpose of assessment in O&G and the much higher proportion of O&G trainees in this study with training concerns (42% vs 4%). 
The reliability of NOTSS was lower than that for PBA. Reliability for the same procedure (G > 0.8) required six assessor judgements. However, as procedure-specific factors exerted a lesser influence on NOTSS, reliability on a mix of procedures could be achieved using only eight assessor judgements. NOTSS also demonstrated a valid internal structure. The strongest correlations between NOTSS and PBA or OSATS were in the 'decision-making' domain. PBA and NOTSS showed better construct validity than OSATS, the year of training and the number of recent index procedures performed being significant independent predictors of performance. There was little variation in scoring between different procedures or different designations of assessor. The results suggest that PBA is a reliable and acceptable method of assessing surgical skills, with good construct validity. Specialties that use OSATS may wish to consider changing the design or switching to PBA. Whatever workplace-based assessment method is used, the purpose, timing and frequency of assessment require detailed guidance. NOTSS is a promising tool for the assessment of non-technical skills, and surgical specialties may wish to consider its inclusion in their assessment framework. Further research is required into the use of health-care professionals other than consultant surgeons to assess trainees, the relationship between performance and experience, the educational impact of assessment and the additional value of video recording.
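The statements of the form "G > 0.8 for three assessor judgements" follow from the standard generalisability-theory formula for the mean of several raters, sketched below with illustrative variance components (not the study's estimates).

```python
# Sketch: G coefficient for the mean of n_raters assessor judgements,
# given variance components for trainee and residual error.
def g_coefficient(var_trainee: float, var_error: float, n_raters: int) -> float:
    return var_trainee / (var_trainee + var_error / n_raters)

# Illustrative variance components: G rises as judgements are averaged.
for n in (1, 2, 3, 5):
    print(n, round(g_coefficient(0.50, 0.35, n), 2))
```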
In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study
NASA Technical Reports Server (NTRS)
Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.
2012-01-01
In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.
Kiefl, Johannes; Cordero, Chiara; Nicolotti, Luca; Schieberle, Peter; Reichenbach, Stephen E; Bicchi, Carlo
2012-06-22
The continuous interest in non-targeted profiling induced the development of tools for automated cross-sample analysis. Such tools were found to be selective or not comprehensive thus delivering a biased view on the qualitative/quantitative peak distribution across 2D sample chromatograms. Therefore, the performance of non-targeted approaches needs to be critically evaluated. This study focused on the development of a validation procedure for non-targeted, peak-based, GC×GC-MS data profiling. The procedure introduced performance parameters such as specificity, precision, accuracy, and uncertainty for a profiling method known as Comprehensive Template Matching. The performance was assessed by applying a three-week validation protocol based on CITAC/EURACHEM guidelines. Optimized ¹D and ²D retention times search windows, MS match factor threshold, detection threshold, and template threshold were evolved from two training sets by a semi-automated learning process. The effectiveness of proposed settings to consistently match 2D peak patterns was established by evaluating the rate of mismatched peaks and was expressed in terms of results accuracy. The study utilized 23 different 2D peak patterns providing the chemical fingerprints of raw and roasted hazelnuts (Corylus avellana L.) from different geographical origins, of diverse varieties and different roasting degrees. The validation results show that non-targeted peak-based profiling can be reliable with error rates lower than 10% independent of the degree of analytical variance. The optimized Comprehensive Template Matching procedure was employed to study hazelnut roasting profiles and in particular to find marker compounds strongly dependent on the thermal treatment, and to establish the correlation of potential marker compounds to geographical origin and variety/cultivar and finally to reveal the characteristic release of aroma active compounds. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Neville G.W.; Heuze, Francois E.; Miller, Hamish D.S.
1993-03-01
The reference design for the underground facilities at the Waste Isolation Pilot Plant was developed using the best criteria available at initiation of the detailed design effort. These design criteria are contained in the US Department of Energy document titled Design Criteria, Waste Isolation Pilot Plant (WIPP), Revised Mission Concept-IIA (RMC-IIA), Rev. 4, dated February 1984. The validation process described in the Design Validation Final Report has resulted in validation of the reference design of the underground openings based on these criteria. Future changes may necessitate modification of the Design Criteria document and/or the reference design. Validation of the reference design as presented in this report permits the consideration of future design or design criteria modifications necessitated by these changes or by experience gained at the WIPP. Any future modifications to the design criteria and/or the reference design will be governed by a DOE Standard Operating Procedure (SOP) covering underground design changes. This procedure will explain the process to be followed in describing, evaluating and approving the change.
Validation of the Continuum of Care Conceptual Model for Athletic Therapy
Lafave, Mark R.; Butterwick, Dale; Eubank, Breda
2015-01-01
Utilization of conceptual models in field-based emergency care currently borrows from existing standards of medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury spanning to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the AT profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached a priori 80% consensus on three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline. PMID:26464897
40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...
40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...
40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...
40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...
40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...
Bonino, Angela Yarnell; Leibold, Lori J
2017-01-23
Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2- to 4-year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
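A 71%-correct target is what a 2-down/1-up adaptive rule converges to (70.7%, per Levitt's classic transformed up-down result), so a level-adjustment track of that kind is sketched below. The simulated listener, step size, and stopping rule are illustrative assumptions, not the study's protocol.

```python
# Sketch: 2-down/1-up staircase converging near 70.7%-correct,
# run against a simulated two-alternative (chance = 50%) listener.
import numpy as np

rng = np.random.default_rng(3)
level, step, true_threshold = 40.0, 4.0, 25.0     # dB values, illustrative
correct_run, reversals, direction = 0, [], None

while len(reversals) < 8:                          # stop after 8 reversals
    p_detect = 1 / (1 + np.exp(-(level - true_threshold) / 3))
    correct = rng.random() < 0.5 + 0.5 * p_detect  # 2AFC: 50% guessing floor
    if correct:
        correct_run += 1
        if correct_run < 2:
            continue                               # need 2 in a row to step down
        new_dir, correct_run = "down", 0
        level -= step
    else:
        new_dir, correct_run = "up", 0             # any miss steps up
        level += step
    if direction is not None and new_dir != direction:
        reversals.append(level)                    # record direction changes
    direction = new_dir

print("threshold estimate:", np.mean(reversals[2:]))  # mean of later reversals
```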
Project W-314 specific test and evaluation plan for transfer line SN-633 (241-AX-B to 241-AY-02A)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hays, W.H.
1998-03-20
The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made by the addition of the SN-633 transfer line by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). This STEP encompasses all testing activities required to demonstrate compliance to the project design criteria as it relates to the addition of transfer line SN-633. The Project Design Specifications (PDS) identify the specific testing activities required for the Project. Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.
Shortt, Samuel E D; Shaw, Ralph A; Elliott, David; Mackillop, William J
2004-06-01
Provincial governments require timely, economical methods to monitor surgical waiting periods. Although use of prospective procedure-specific registers would be the ideal method, a less elaborate system has been proposed that is based on physician billing data. This study assessed the validity of using the date of the last service billed prior to surgery as a proxy for the beginning of the post-referral, pre-surgical waiting period. We examined charts for 31,824 elective surgical encounters between 1992 and 1996 at an Ontario teaching hospital. The date of the last service before surgery (the last billing date) was compared with the date of the consultant's letter indicating a decision to book surgery (i.e., to begin waiting). Several surgical specialties (but excluding cardiac, orthopedic and gynecologic) had a close correlation between the dates of the last pre-surgery visit and those of the actual decision to place the patient on the waiting list. Similar results were found for 12 of 15 individually studied procedures, including some orthopedic and gynecological procedures. Used judiciously, billing data is a timely, inexpensive and generally accurate method by which provincial governments could monitor trends in waiting times for appropriately selected surgical procedures.
Johnson, Sheena Joanne; Guediri, Sara M; Kilkenny, Caroline; Clough, Peter J
2011-12-01
This study developed and validated a virtual reality (VR) simulator for use by interventional radiologists. Research in the area of skill acquisition reports practice as essential to become a task expert. Studies on simulation show skills learned in VR can be successfully transferred to a real-world task. Recently, with improvements in technology, VR simulators have been developed to allow complex medical procedures to be practiced without risking the patient. Three studies are reported. In Study 1, 35 consultant interventional radiologists took part in a cognitive task analysis to empirically establish the key competencies of the Seldinger procedure. In Study 2, 62 participants performed one simulated procedure, and their performance was compared by expertise. In Study 3, the transferability of simulator training to a real-world procedure was assessed with 14 trainees. Study 1 produced 23 key competencies that were implemented as performance measures in the simulator. Study 2 showed the simulator had both face and construct validity, although some issues were identified. Study 3 showed the group that had undergone simulator training received significantly higher mean performance ratings on a subsequent patient procedure. The findings of this study support the centrality of validation in the successful design of simulators and show the utility of simulators as a training device. The studies show the key elements of a validation program for a simulator. In addition to task analysis and face and construct validities, the authors highlight the importance of transfer of training in validation studies.
Vision based flight procedure stereo display system
NASA Astrophysics Data System (ADS)
Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng
2008-03-01
A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. The flight navigation information is linked to the database according to the flight approach procedure, so the flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Used in pilots' preflight preparation, this system gives aircrew more vivid information about the flight destination approach area. It can improve an aviator's confidence before carrying out a flight mission and thereby improves flight safety. The system is also useful for validating visual flight procedure designs, and it thus supports flight procedure design.
El-Housseiny, Azza A; Alsadat, Farah A; Alamoudi, Najlaa M; El Derwi, Douaa A; Farsi, Najat M; Attar, Moaz H; Andijani, Basil M
2016-04-14
Early recognition of dental fear is essential for the effective delivery of dental care. This study aimed to test the reliability and validity of the Arabic version of the Children's Fear Survey Schedule-Dental Subscale (CFSS-DS). A school-based sample of 1546 children was randomly recruited. The Arabic version of the CFSS-DS was completed by children during class time. The scale was tested for internal consistency and test-retest reliability. To test criterion validity, children's behavior was assessed using the Frankl scale during dental examination, and results were compared with children's CFSS-DS scores. To test the scale's construct validity, scores on "fear of going to the dentist soon" were correlated with CFSS-DS scores. Factor analysis was also used. The Arabic version of the CFSS-DS showed high reliability regarding both test-retest reliability (intraclass correlation = 0.83, p < 0.001) and internal consistency (Cronbach's α = 0.88). It showed good criterion validity: children with negative behavior had significantly higher fear scores (t = 13.67, p < 0.001). It also showed moderate construct validity (Spearman's rho correlation, r = 0.53, p < 0.001). Factor analysis identified the following factors: "fear of invasive dental procedures," "fear of less invasive dental procedures" and "fear of strangers." The Arabic version of the CFSS-DS is a reliable and valid measure of dental fear in Arabic-speaking children. Pediatric dentists and researchers may use this validated version of the CFSS-DS to measure dental fear in Arabic-speaking children.
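The internal-consistency figure reported here (Cronbach's α = 0.88) comes from the standard alpha formula; a short sketch follows using a synthetic score matrix. The shape assumes the CFSS-DS's 15 items scored 1 to 5, and the data are fabricated purely for illustration.

```python
# Sketch: Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
base = rng.integers(1, 6, size=(100, 1))                       # shared fear level
items = np.clip(base + rng.integers(-1, 2, size=(100, 15)), 1, 5)
print(round(cronbach_alpha(items), 2))
```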
Translating the Simulation of Procedural Drilling Techniques for Interactive Neurosurgical Training
Stredney, Don; Rezai, Ali R.; Prevedello, Daniel M.; Elder, J. Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J.
2014-01-01
Background Through previous and concurrent efforts, we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. This volumetric data helps drive an interactive multi-sensory, i.e., visual (stereo), aural (stereo), and tactile simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the CNS simulation initiative. Objective The goal of this multi-level development is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. Methods We discuss issues of biofidelity as well as our methods to provide objective, quantitative automated assessment for the residents. Results We conclude with a discussion of our experiences by reporting on preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. Conclusion We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principles and define the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum. PMID:24051887
Systematic, Cooperative Evaluation.
ERIC Educational Resources Information Center
Nassif, Paula M.
Evaluation procedures based on a systematic evaluation methodology, decision-maker validity, new measurement and design techniques, low cost, and a high level of cooperation on the part of the school staff were used in the assessment of a public school mathematics program for grades 3-8. The mathematics curriculum was organized into Spirals which…
Quest: The Interactive Test Analysis System.
ERIC Educational Resources Information Center
Adams, Raymond J.; Khoo, Siek-Toon
The Quest program offers a comprehensive test and questionnaire analysis environment by providing a data analyst (a computer program) with access to the most recent developments in Rasch measurement theory, as well as a range of traditional analysis procedures. This manual helps the user use Quest to construct and validate variables based on…
Adelborg, Kasper; Sundbøll, Jens; Munch, Troels; Frøslev, Trine; Sørensen, Henrik Toft; Bøtker, Hans Erik; Schmidt, Morten
2016-01-01
Objective Danish medical registries are widely used for cardiovascular research, but little is known about the data quality of cardiac interventions. We computed positive predictive values (PPVs) of codes for cardiac examinations, procedures and surgeries registered in the Danish National Patient Registry during 2010–2012. Design Population-based validation study. Setting We randomly sampled patients from 1 university hospital and 2 regional hospitals in the Central Denmark Region. Participants 1239 patients undergoing different cardiac interventions. Main outcome measure PPVs with medical record review as reference standard. Results A total of 1233 medical records (99% of the total sample) were available for review. PPVs ranged from 83% to 100%. For examinations, the PPV was overall 98%, reflecting PPVs of 97% for echocardiography, 97% for right heart catheterisation and 100% for coronary angiogram. For procedures, the PPV was 98% overall, with PPVs of 98% for thrombolysis, 92% for cardioversion, 100% for radiofrequency ablation, 98% for percutaneous coronary intervention, and 100% for both cardiac pacemakers and implantable cardiac defibrillators. For cardiac surgery, the overall PPVs was 99%, encompassing PPVs of 100% for mitral valve surgery, 99% for aortic valve surgery, 98% for coronary artery bypass graft surgery, and 100% for heart transplantation. The accuracy of coding was consistent within age, sex, and calendar year categories, and the agreement between independent reviewers was high (99%). Conclusions Cardiac examinations, procedures and surgeries have high PPVs in the Danish National Patient Registry. PMID:27940630
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
A diagnosis-based clinical decision rule for spinal pain part 2: review of the literature
Murphy, Donald R; Hurwitz, Eric L; Nelson, Craig F
2008-01-01
Background Spinal pain is a common and often disabling problem. The research on various treatments for spinal pain has, for the most part, suggested that while several interventions have demonstrated mild to moderate short-term benefit, no single treatment has a major impact on either pain or disability. There is great need for more accurate diagnosis in patients with spinal pain. In a previous paper, the theoretical model of a diagnosis-based clinical decision rule (DBCDR) was presented. The approach is designed to provide the clinician with a strategy for arriving at a specific working diagnosis from which treatment decisions can be made. It is based on three questions of diagnosis. In the current paper, the literature on the reliability and validity of the assessment procedures that are included in the DBCDR is presented. Methods The databases of Medline, Cinahl, Embase and MANTIS were searched for studies that evaluated the reliability and validity of clinic-based diagnostic procedures for patients with spinal pain that have relevance for questions 2 (which investigates characteristics of the pain source) and 3 (which investigates perpetuating factors of the pain experience). In addition, the reference lists of identified papers and the authors' libraries were searched. Results A total of 1769 articles were retrieved, of which 138 were deemed relevant. Fifty-one studies related to reliability and 76 related to validity. One study evaluated both reliability and validity. Conclusion Regarding some aspects of the DBCDR, there are a number of studies that allow the clinician to have a reasonable degree of confidence in his or her findings. This is particularly true for centralization signs, neurodynamic signs and psychological perpetuating factors. There are other aspects of the DBCDR in which a lesser degree of confidence is warranted, and in which further research is needed. PMID:18694490
ERIC Educational Resources Information Center
Albanese, Mark A.; Jacobs, Richard M.
1990-01-01
The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second-year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dental education…
Shen, Xing-Rong; Chai, Jing; Feng, Rui; Liu, Tong-Zhu; Tong, Gui-Xian; Cheng, Jing; Li, Kai-Chun; Xie, Shao-Yu; Shi, Yong; Wang, De-Bin
2014-01-01
The big gap between the efficacy of population-level prevention and expectations, due to the heterogeneity and complexity of cancer etiologic factors, calls for selective yet personalized interventions based on effective risk assessment. This paper documents our research protocol aimed at refining and validating a two-stage, web-based cancer risk assessment tool, from a tentative one in use by an ongoing project, capable of identifying individuals at elevated risk for one or more of the leading cancer types accounting for 80% of cancers in rural China, with adequate sensitivity and specificity and featuring low cost, easy application, and cultural and technical sensitivity for farmers and village doctors. The protocol adopted a modified population-based case-control design using 72,000 non-patients as controls, 2,200 cancer patients as cases, and another 600 patients as cases for external validation. Factors taken into account comprised 8 domains including diet and nutrition, risk behaviors, family history, precancerous diseases, related medical procedures, exposure to environmental hazards, mood and feelings, physical activities, and anthropologic and biologic factors. Modeling efforts explored various methodologies, including empirical analysis, logistic regression, neural-network analysis and decision theory, with both internal and external validation using concordance statistics, predictive values, etc.
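A hedged sketch of the validation step the protocol describes: fitting a logistic risk model and scoring it with a concordance statistic (AUC). The data below are synthetic stand-ins, not the study's risk factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 8 risk-domain scores (diet, behavior, family history, ...)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X[:700], y[:700])                # development set
auc = roc_auc_score(y[700:], model.predict_proba(X[700:])[:, 1])  # held-out validation
print(f"concordance statistic (AUC) = {auc:.2f}")
```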
Isupov, Inga; McInnes, Matthew D F; Hamstra, Stan J; Doherty, Geoffrey; Gupta, Ashish; Peddle, Susan; Jibri, Zaid; Rakhra, Kawan; Hibbert, Rebecca M
2017-04-01
The purpose of this study is to develop a tool to assess the procedural competence of radiology trainees, with sources of evidence gathered from five categories to support the construct validity of the tool: content, response process, internal structure, relations to other variables, and consequences. A pilot form for assessing procedural competence among radiology residents, known as the RAD-Score tool, was developed by evaluating published literature and using a modified Delphi procedure involving a group of local content experts. The pilot version of the tool was tested by seven radiology department faculty members who evaluated procedures performed by 25 residents at one institution between October 2014 and June 2015. Residents were evaluated while performing multiple procedures in both clinical and simulation settings. The main outcome measure was the percentage of residents who were considered ready to perform procedures independently, with testing conducted to determine differences between levels of training. A total of 105 forms (for 52 procedures performed in a clinical setting and 53 procedures performed in a simulation setting) were collected for a variety of procedures (eight vascular or interventional, 42 body, 12 musculoskeletal, 23 chest, and 20 breast procedures). A statistically significant difference was noted in the percentage of trainees who were rated as being ready to perform a procedure independently (in postgraduate year [PGY] 2, 12% of residents; in PGY3, 61%; in PGY4, 85%; and in PGY5, 88%; p < 0.05); this difference persisted in the clinical and simulation settings. User feedback and psychometric analysis were used to create a final version of the form. This prospective study describes the successful development of a tool for assessing the procedural competence of radiology trainees with high levels of construct validity in multiple domains. Implementation of the tool in the radiology residency curriculum is planned and can play an instrumental role in the transition to competency-based radiology training.
A Virtual Reality Training Curriculum for Laparoscopic Colorectal Surgery.
Beyer-Berjot, Laura; Berdah, Stéphane; Hashimoto, Daniel A; Darzi, Ara; Aggarwal, Rajesh
Training within a competency-based curriculum (CBC) outside the operating room enhances performance during real basic surgical procedures. This study aimed to design and validate a virtual reality CBC for an advanced laparoscopic procedure: sigmoid colectomy. This was a multicenter randomized study. Novice (surgeons who had performed <5 laparoscopic colorectal resections as primary operator), intermediate (between 10 and 20), and experienced surgeons (>50) were enrolled. Validity evidence for the metrics given by the virtual reality simulator, the LAP Mentor, was based on comparing the second attempt of each task between groups. The tasks assessed were 3 modules of a laparoscopic sigmoid colectomy (medial dissection [MD], lateral dissection [LD], and anastomosis) and a full procedure (FP). Novice surgeons were randomized to 1 of 2 groups to perform 8 further attempts of all 3 modules or the FP, for learning curve analysis. Two academic tertiary care centers were involved: the division of surgery of St. Mary's campus, Imperial College Healthcare NHS Trust, London, and Nord Hospital, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Marseille. Novice surgeons were residents in digestive surgery at St. Mary's and Nord Hospitals. Intermediate and experienced surgeons were board-certified academic surgeons. A total of 20 novice surgeons, 7 intermediate surgeons, and 6 experienced surgeons were enrolled. Evidence for validity based on experience was identified in MD, LD, and FP for time (p = 0.005, p = 0.003, and p = 0.001, respectively), number of movements (p = 0.013, p = 0.005, and p = 0.001, respectively), and path length (p = 0.03, p = 0.017, and p = 0.001, respectively), and only for time (p = 0.03) and path length (p = 0.013) in the anastomosis module. Novice surgeons' performance significantly improved through repetition for time, movements, and path length in MD, LD, and FP. Experienced surgeons' benchmark criteria were defined for all construct metrics showing validity evidence. A CBC in laparoscopic colorectal surgery has been designed. Such training may reduce the learning curve during real colorectal resections in the operating room. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Crochet, Patrice; Aggarwal, Rajesh; Knight, Sophie; Berdah, Stéphane; Boubli, Léon; Agostini, Aubert
2017-06-01
Substantial evidence in the scientific literature supports the use of simulation for surgical education. However, curricula are lacking for complex laparoscopic procedures in gynecology. The objective was to evaluate the validity of a program that reproduces key specific components of a laparoscopic hysterectomy (LH) procedure up to colpotomy on a virtual reality (VR) simulator and to develop an evidence-based and stepwise training curriculum. This prospective cohort study was conducted in a Marseille teaching hospital. Forty participants were enrolled and were divided into experienced (senior surgeons who had performed more than 100 LH; n = 8), intermediate (surgical trainees who had performed 2-10 LH; n = 8) and inexperienced (n = 24) groups. Baselines were assessed on a validated basic task. Participants were tested for the LH procedure on a high-fidelity VR simulator. Validity evidence was proposed as the ability to differentiate between the three levels of experience. Inexperienced subjects performed ten repetitions for learning curve analysis. Proficiency measures were based on experienced surgeons' performances. Outcome measures were simulator-derived metrics and Objective Structured Assessment of Technical Skills (OSATS) scores. Quantitative analysis found significant inter-group differences between the experienced, intermediate and inexperienced groups for time (1369, 2385 and 3370 s; p < 0.001), number of movements (2033, 3195 and 4056; p = 0.001), path length (3390, 4526 and 5749 cm; p = 0.002), idle time (357, 654 and 747 s; p = 0.001), respect for tissue (24, 40 and 84; p = 0.01) and number of bladder injuries (0.13, 0 and 4.27; p < 0.001). Learning curves plateaued at the 2nd to 6th repetition. Further qualitative analysis found significant inter-group OSATS score differences at the first repetition (22, 15 and 8, respectively; p < 0.001) and second repetition (25.5, 19.5 and 14; p < 0.001). The VR program for LH accrued validity evidence and allowed the development of a training curriculum using a structured scientific methodology.
Microsurgery Workout: A Novel Simulation Training Curriculum Based on Nonliving Models.
Rodriguez, Jose R; Yañez, Ricardo; Cifuentes, Ignacio; Varas, Julian; Dagnino, Bruno
2016-10-01
Currently, there are no valid training programs based solely on nonliving models. The authors aimed to develop and validate a microsurgery training program based on nonliving models and assess the transfer of skills to a live rat model. Postgraduate year-3 general surgery residents were assessed in a 17-session program, performing arterial and venous end-to-end anastomoses on ex vivo chicken models. Procedures were recorded and rated by two blinded experts using validated global and specific scales (objective structured assessment of technical skills) and a validated checklist. Operating times and patency rates were assessed. Hand-motion analysis was used to measure economy of movements. After training, residents performed an arterial and venous end-to-end anastomosis on live rats. Results were compared with those of six experienced surgeons using the same models. Values of p < 0.05 were considered statistically significant. Learning curves were achieved. Ten residents improved their median global and specific objective structured assessment of technical skills scores for artery [10 (range, 8 to 10) versus 28 (range, 27 to 29), p < 0.05; and 8 (range, 7 to 9) versus 28 (range, 27 to 28), p < 0.05] and vein [8 (range, 8 to 11) versus 28 (range, 27 to 28), p < 0.05; and 8 (range, 7 to 9) versus 28 (range, 27 to 29), p < 0.05]. Checklist scores also improved for both procedures (p < 0.05). Trainees were slower and less efficient than experienced surgeons (p < 0.05). In the living rat, patency rates at 30 minutes were 100 percent and 50 percent for artery and vein, respectively. Significant acquisition of microsurgical skills was achieved by trainees, to a level similar to that of experienced surgeons. Acquired skills were transferred to a more complex live model.
Lindemann, Ulrich; Zijlstra, Wiebren; Aminian, Kamiar; Chastin, Sebastien F M; de Bruin, Eling D; Helbostad, Jorunn L; Bussmann, Johannes B J
2014-01-10
Physical activity is an important determinant of health and well-being in older persons and contributes to their social participation and quality of life. Hence, assessment tools are needed to study this physical activity in free-living conditions. Wearable motion sensing technology is used to assess physical activity. However, there is a lack of harmonisation of validation protocols and applied statistics, which makes it hard to compare available and future studies. Therefore, the aim of this paper is to formulate recommendations for assessing the validity of sensor-based activity monitoring in older persons, with a focus on the measurement of body postures and movements. Validation studies of body-worn devices providing parameters on body postures and movements were identified and summarized, and an extensive interactive process among the authors resulted in recommendations about the information to report on the assessed persons, the technical system, and the analysis of relevant parameters of physical activity, based on a standardized and semi-structured protocol. The recommended protocols can be regarded as a first attempt to standardize validity studies in the area of monitoring physical activity.
NASA Astrophysics Data System (ADS)
Battistini, Alessandro; Rosi, Ascanio; Segoni, Samuele; Catani, Filippo; Casagli, Nicola
2017-04-01
Landslide inventories are basic data for large-scale landslide modelling, e.g. they are needed to calibrate and validate rainfall thresholds, physically based models and early warning systems. Setting up landslide inventories with traditional methods (e.g. remote sensing, field surveys and manual retrieval of data from technical reports and local newspapers) is time consuming. The objective of this work is to automatically set up a landslide inventory using a state-of-the-art semantic engine based on data mining of online news (Battistini et al., 2013) and to evaluate whether the automatically generated inventory can be used to validate a regional-scale landslide warning system based on rainfall thresholds. The semantic engine scanned internet news in real time over a 50-month test period. At the end of the process, an inventory of approximately 900 landslides was set up for the Tuscany region (23,000 km2, Italy). The inventory was compared with the outputs of the regional landslide early warning system based on rainfall thresholds, and a good correspondence was found: e.g. 84% of the events reported in the news are correctly identified by the model. In addition, the cases of non-correspondence were forwarded to the rainfall threshold developers, who used these inputs to update some of the thresholds. On the basis of the results obtained, we conclude that automatic validation of landslide models using geolocalized landslide event feedback is possible. The source of data for validation can be obtained directly from the internet channel using an appropriate semantic engine. We also automated the validation procedure, which is based on a comparison between forecasts and reported events. We verified that our approach can be used for near-real-time validation of the warning system and for a semi-automatic update of the rainfall thresholds, which could lead to an improvement of the forecasting effectiveness of the warning system. In the near future, the proposed procedure could operate in continuous time and could allow for a periodic update of landslide hazard models and landslide early warning systems with minimum human intervention. References: Battistini, A., Segoni, S., Manzo, G., Catani, F., Casagli, N. (2013). Web data mining for automatic inventory of geohazards at national scale. Applied Geography, 43, 147-158.
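The automatic validation described above reduces to matching warning-system forecasts against geolocated news events. A minimal sketch with hypothetical records keyed by date and municipality:

```python
# Each record is (date, municipality); alerts come from the warning system,
# events from the geolocated news inventory. All values here are invented.
alerts = {("2014-01-20", "Lucca"), ("2014-01-21", "Pistoia"), ("2014-02-02", "Siena")}
events = {("2014-01-20", "Lucca"), ("2014-01-21", "Pistoia"), ("2014-03-05", "Arezzo")}

hits = events & alerts    # landslides reported while an alert was active
misses = events - alerts  # reported landslides with no alert
hit_rate = len(hits) / len(events)
print(f"hit rate = {hit_rate:.0%}, missed events = {sorted(misses)}")
```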
Chen, Chun-Hung; Li, Cheng-Chang; Chou, Chuan-Yu; Chen, Shu-Hwa
2009-08-01
This project was designed to improve the low validity rate among nurses responsible for operating single-door autoclave sterilizers in the operating room. An investigation of the current status found that the nursing staff's validity rate of cognition regarding the autoclave sterilizer was 85%, and the validity rate of the practical operating check was only 80%, owing to a lack of in-service education. Problems with operation included: 1. unsafe behaviors - not following standard procedure, lacking relevant operating knowledge, and having no check form; 2. an unsafe environment - the steam conveying piping was typically not covered and lacked operation marks. Recommended improvement measures included: 1. holding in-service education; 2. generating an operation procedure flow chart; 3. implementing obstacle-eliminating procedures; 4. covering piping to prevent fire and burns; 5. performing regular checks to ensure all procedures are followed. Following the intervention, nursing staff cognition rose from 85% to 100%, while the operation validity rate rose from 80% to 100%. These changes ensure a safer operating room environment and help facilities move toward a zero accident rate in the healthcare environment.
IACOANGELI, Maurizio; NOCCHI, Niccolò; NASI, Davide; DI RIENZO, Alessandro; DOBRAN, Mauro; GLADI, Maurizio; COLASANTI, Roberto; ALVARO, Lorenzo; POLONARA, Gabriele; SCERRATI, Massimo
2016-01-01
The most important target of minimally invasive surgery is to obtain the best therapeutic effect with the least iatrogenic injury. Against this background, a pivotal role in contemporary neurosurgery is played by the supraorbital key-hole approach proposed by Perneczky for anterior cranial base surgery. In this article, it is presented as a possible valid alternative to traditional craniotomies for the removal of anterior cranial fossa meningiomas. From January 2008 to January 2012, 56 patients underwent anterior cranial base meningioma removal at our department. Thirty-three patients underwent traditional approaches, while 23 underwent the supraorbital key-hole technique. Clinical and neuroradiological pre- and postoperative evaluations were performed, with attention to complications, length of the surgical procedure, and hospitalization. Compared with traditional approaches, the supraorbital key-hole approach was associated neither with a greater range of postoperative complications nor with a longer surgical procedure or hospitalization, while permitting the same lesion control. With this technique, minimization of brain exposure and manipulation with reduction of unwanted iatrogenic injuries, preservation of neurovascular structures, and a better aesthetic result are possible. The supraorbital key-hole approach according to Perneczky could represent a valid alternative to traditional approaches in anterior cranial base meningioma surgery. PMID:26804334
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
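The initial-equilibrium solve is a root-finding problem F(x) = 0 on the nodal unknowns. A generic Newton-Raphson sketch (the residual below is a toy stand-in, not the paper's catenary equations):

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson solve of residual(x) = 0, as used (in far more
    elaborate form) to find a structure's initial equilibrium configuration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        x = x - np.linalg.solve(jacobian(x), r)  # Newton update: J dx = -r
    raise RuntimeError("Newton-Raphson did not converge")

# Toy residual standing in for the nonlinear cable/truss equilibrium equations.
res = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
print(newton_raphson(res, jac, x0=[2.0, 0.5]))  # converges to [1, 1]
```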
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. The procedure used the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test calculating the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise
2016-01-01
This paper proposes an automated method for selecting the rainfall duration, D, and cumulated rainfall, E, responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to expert judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time lag before the landslide occurrence. Statistical tests on the D and E samples, considered both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs drawn by expert judgment. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn by the coded procedure, the D-E pair most closely related to the expert's, for tracing the empirical rainfall threshold line.
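A minimal sketch of the kind of rule such a coded procedure applies when reconstructing a (D, E) pair from an hourly rainfall record; the dry-gap and wet-hour thresholds here are hypothetical, not the paper's calibrated rules:

```python
def rainfall_event(hourly_mm, landslide_idx, dry_gap_h=6, wet_mm=0.2):
    """Walk back from the landslide hour and accumulate the triggering event,
    stopping when a dry spell longer than dry_gap_h hours is found.
    Returns (D in hours, E in mm)."""
    duration = cumulated = dry_run = 0
    for rain in reversed(hourly_mm[: landslide_idx + 1]):
        if rain >= wet_mm:
            dry_run = 0
        else:
            dry_run += 1
            if dry_run > dry_gap_h:
                break
        duration += 1
        cumulated += rain
    return duration - dry_run, cumulated  # trim the trailing dry tail from D

rain = [0, 0, 1.2, 3.4, 0, 2.1, 5.0, 4.2, 0.1, 6.3]  # mm/h, invented
print(rainfall_event(rain, landslide_idx=9))
```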
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
Konge, L; Vilmann, P; Clementsen, P; Annema, J T; Ringsted, C
2012-10-01
Fine-needle aspiration (FNA) guided by endoscopic ultrasonography (EUS) is important in mediastinal staging of non-small cell lung cancer (NSCLC). Training standards and implementation strategies for this technique are currently under discussion. The aim of this study was to explore the reliability and validity of a newly developed EUS Assessment Tool (EUSAT) designed to measure competence in EUS-FNA for mediastinal staging of NSCLC. A total of 30 patients with proven or suspected NSCLC underwent EUS-FNA for mediastinal staging by three trainees and three experienced physicians. Their performances were assessed prospectively by three experts in EUS under direct observation, and again 2 months later in a blinded fashion using digital video-recordings. Based on the assessments, intra-rater reliability, inter-rater reliability, and construct validity were explored. The intra-rater reliability was good (Cronbach's α = 0.80), but comparison of results based on direct observations and blinded video-recordings indicated a significant bias favoring consultants (P = 0.022). Inter-rater reliability was very good (Cronbach's α = 0.93). However, one rater assessing five procedures, or two raters each assessing four procedures, was necessary to secure a generalizability coefficient of 0.80. The assessment tool demonstrated construct validity by discriminating between trainees and experienced physicians (P = 0.034). Competency in mediastinal staging of NSCLC using EUS and EUS-FNA can be assessed in a reliable and valid way using the EUSAT assessment tool. Measuring and defining competency and training requirements could improve EUS quality and benefit patient care. © Georg Thieme Verlag KG Stuttgart · New York.
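Cronbach's α, used above for the reliability estimates, can be computed directly from a procedures × raters score matrix. A minimal sketch with hypothetical ratings:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: procedures x raters matrix of assessment totals."""
    k = scores.shape[1]                     # number of raters
    item_vars = scores.var(axis=0, ddof=1)  # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 procedures scored by 3 raters.
ratings = np.array([[12, 13, 12], [20, 19, 21], [15, 16, 15],
                    [22, 22, 23], [9, 10, 9], [18, 17, 18]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```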
Quantitative impedance measurements for eddy current model validation
NASA Astrophysics Data System (ADS)
Khan, T. A.; Nakagawa, N.
2000-05-01
This paper reports on a series of laboratory-based impedance measurement data, collected by the use of a quantitatively accurate, mechanically controlled measurement station. The purpose of the measurement is to validate a BEM-based eddy current model against experiment. We have therefore selected two "validation probes," which are both split-D differential probes. Their internal structures and dimensions are extracted from x-ray CT scan data, and thus known within the measurement tolerance. A series of measurements was carried out, using the validation probes and two Ti-6Al-4V block specimens, one containing two 1-mm long fatigue cracks, and the other containing six EDM notches of a range of sizes. A motor-controlled XY scanner performed raster scans over the cracks, with the probe riding on the surface with a spring-loaded mechanism to maintain the lift-off. Both an impedance analyzer and a commercial EC instrument were used in the measurement. The probes were driven in both differential and single-coil modes for the specific purpose of model validation. The differential measurements were done exclusively by the eddyscope, while the single-coil data were taken with both the impedance analyzer and the eddyscope. From the single-coil measurements, we obtained the transfer function to translate the voltage output of the eddyscope into impedance values, and then used it to translate the differential measurement data into impedance results. The presentation will highlight the schematics of the measurement procedure, representative raw data, an explanation of the data post-processing procedure, and a series of resulting 2D flaw impedance results. A noise estimate will also be given, to quantify the accuracy of these measurements and for use in probability-of-detection estimation. This work was supported by the NSF Industry/University Cooperative Research Program.
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
Moreno Londoño, Ana Maria; Schulz, Peter J
2014-04-01
Health literacy has been recognized as an important factor influencing health behaviors and health outcomes. However, its definition is still evolving, and the tools available for its measurement are limited in scope. Based on the conceptualization of health literacy within the Health Empowerment Model, the present study developed and validated a tool to assess patients' health knowledge use within the context of asthma self-management. A review of the scientific literature on asthma self-management and several interviews with pulmonologists and asthma patients were conducted. From these, 19 scenarios with 4 response options each were drafted and assembled into a scenario-based questionnaire. Furthermore, a three-round Delphi procedure was carried out to validate the tool, with the participation of 12 specialists in lung diseases. The face and content validity of the tool were established through face-to-face interviews with 2 pulmonologists and 5 patients. Consensus among the specialists on the adequacy of the response options was achieved after the three-round Delphi procedure. The final tool has a 0.97 intra-class correlation coefficient (ICC), indicating a strong level of agreement among experts on the ratings of the response options. The ICCs for single scenarios range from 0.92 to 0.99. The newly developed tool provides a final score representing the patient's health knowledge use, based on the specialists' consensus. This tool contributes to enriching the measurement of a more advanced health literacy dimension.
NASA Technical Reports Server (NTRS)
Sundstrom, J. L.
1980-01-01
The techniques required to produce and validate six detailed task timeline scenarios for crew workload studies are described. Specific emphasis is given to: general aviation single pilot instrument flight rules operations in a high density traffic area; fixed path metering and spacing operations; and comparative workload operation between the forward and aft-flight decks of the NASA terminal control vehicle. The validation efforts also provide a cursory examination of the resultant demand workload based on the operating procedures depicted in the detailed task scenarios.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, Mohammed Omair
2012-01-01
Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g. power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.
Continuing challenges for computer-based neuropsychological tests.
Letz, Richard
2003-08-01
A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.
Silva, F G A; de Moura, M F S F; Dourado, N; Xavier, J; Pereira, F A M; Morais, J J L; Dias, M I R; Lourenço, P J; Judas, F M
2017-08-01
Fracture characterization of human cortical bone under mode II loading was analyzed using a miniaturized version of the end-notched flexure test. A data reduction scheme based on the crack equivalent concept was employed to overcome uncertainties in crack length monitoring during the test. The crack tip shear displacement was measured experimentally using the digital image correlation technique to determine the cohesive law that mimics bone fracture behavior under mode II loading. The developed procedure was validated by finite element analysis using cohesive zone modeling, considering a trapezoidal law with bilinear softening. Experimental load-displacement curves, resistance curves and crack tip shear displacement versus applied displacement were used to validate the numerical procedure. The excellent agreement observed between the numerical and experimental results reveals the appropriateness of the proposed test and procedure for characterizing human cortical bone fracture under mode II loading. The proposed methodology can be viewed as a novel and valuable tool for parametric and methodical clinical studies of features (e.g., age, diseases, drugs) influencing bone shear fracture under mode II loading.
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In this paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure combines a new analytical method with highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed to derive the three-dimensional crack tip fields. Based on this analysis, an equation is first derived that describes the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z) and the tri-axial stress constraint level Tz(z). In the finite element simulations, a fine mesh of 153,360 elements is constructed to compute the stress field near the crack front, J(z) and Tz(z). Numerical results show that in the plane very close to the free surface, the K field solution is still valid for in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
Certification of highly complex safety-related systems.
Reinert, D; Schaefer, M
1999-01-01
The BIA now has 15 years of experience with the certification of complex electronic systems for safety-related applications in the machinery sector. Using the example of machining centres, this presentation shows the systematic procedure for verifying and validating control systems that use Application Specific Integrated Circuits (ASICs) and microcomputers for safety functions. One section describes the control structure of machining centres with control systems using "integrated safety": a diverse redundant architecture combined with cross-monitoring and forced dynamization is explained. The main section explains the steps of the systematic certification procedure, showing some results of the certification of drilling machines. Specification reviews, design reviews with test case specification, statistical analysis, and walk-throughs are the analytical measures in the testing process. Systematic tests based on the test case specification, Electro Magnetic Interference (EMI) and environmental testing, and site acceptance tests on the machines are the testing measures for validation. A complex software-driven system is always undergoing modification. Most of the changes are not safety-relevant, but this has to be proven. A systematic procedure for certifying software modifications is presented in the last section of the paper.
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
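The failure mode is easy to reproduce. A sketch simulating the coverage of the usual t-interval for a heavily skewed distribution; a lognormal stands in for the paper's absolute-value Cauchy so that a true mean exists:

```python
import numpy as np
from scipy import stats

# The paper's example uses an absolute-value Cauchy, whose mean does not exist;
# a heavily skewed lognormal is used here so a true mean is defined.
rng = np.random.default_rng(1)
n, reps, sigma = 50, 10_000, 2.0
true_mean = np.exp(sigma**2 / 2)  # mean of lognormal(0, sigma)

cover = 0
for _ in range(reps):
    x = rng.lognormal(0.0, sigma, size=n)
    half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    cover += abs(x.mean() - true_mean) <= half

print(f"nominal 95% t-interval actually covers {cover / reps:.1%}")
```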
Hosten, Bernard; Moreau, Ludovic; Castaings, Michel
2007-06-01
The paper presents a Fourier transform-based signal processing procedure for quantifying the reflection and transmission coefficients and mode conversion of guided waves diffracted by defects in plates made of viscoelastic materials. The case of the S(0) Lamb wave mode incident on a notch in a Perspex plate is considered. The procedure is applied to numerical data produced by a finite element code that simulates the propagation of attenuated guided modes and their diffraction by the notch, including mode conversion. Its validity and precision are checked by way of an energy balance computation and by comparison with results obtained using an orthogonality relation-based processing method.
How to develop a Standard Operating Procedure for sorting unfixed cells
Schmid, Ingrid
2012-01-01
Written Standard Operating Procedures (SOPs) are an important tool to assure that recurring tasks in a laboratory are performed in a consistent manner. When the procedure covered in the SOP involves a high-risk activity such as sorting unfixed cells using a jet-in-air sorter, safety elements are critical components of the document. The details on sort sample handling, sorter set-up, validation, operation, troubleshooting and maintenance, personal protective equipment (PPE), and operator training outlined in the SOP are to be based on careful risk assessment of the procedure. This review provides background information on the hazards associated with sorting of unfixed cells and the process used to arrive at the appropriate combination of facility design, instrument placement, safety equipment, and practices to be followed. PMID:22381383
[Selection of medical students : Measurement of cognitive abilities and psychosocial competencies].
Schwibbe, Anja; Lackamp, Janina; Knorr, Mirjana; Hissbach, Johanna; Kadmon, Martina; Hampe, Wolfgang
2018-02-01
The German Constitutional Court is currently reviewing whether the current study admission process in medicine is compatible with the constitutional right of freedom of profession, since applicants without an excellent GPA usually have to wait for seven years. If the admission system is changed, politicians would like to increase the influence of psychosocial criteria on selection, as specified by the Masterplan Medizinstudium 2020. What experience has been gained with the current selection procedures? How could Situational Judgement Tests contribute to the validity of future selection procedures at German medical schools? High school GPA is the best predictor of study performance, but is increasingly under discussion due to the lack of comparability between states and schools and the growing number of applicants with top grades. Aptitude and knowledge tests, especially in the natural sciences, show incremental validity in predicting study performance. The measurement of psychosocial competencies with traditional interviews shows rather low reliability and validity; the more reliable multiple mini-interviews are superior in predicting practical study performance. Situational judgement tests (SJTs) used abroad are regarded as reliable and valid; the correlation of a German SJT piloted in Hamburg with the multiple mini-interview is cautiously encouraging. A model proposed by the Medizinischer Fakultätentag and the Bundesvertretung der Medizinstudierenden takes these results into account. Student selection is proposed to be based on a combination of high school GPA (40%) and a cognitive test (40%), as well as an SJT (10%) and job experience (10%). Furthermore, the faculties retain the option to carry out specific selection procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L
2015-02-01
Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs the ability to perform logical operations in order to adjust to inputs received either from users or from real-time plant status databases; without this ability, the procedure is just an electronic copy of the paper-based procedure. To provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver the data and instructions that make up each step. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step determine the type of functionality that the system generates for that step. The CBPS provides the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers, as well as the underlying data structure for such a CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper describes the results and insights gained from the research activities conducted to date.
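As a concrete illustration of the XML idea, here is a hypothetical step fragment and the few lines a CBPS might use to read it; the element and attribute names are invented for this sketch, not the Idaho National Laboratory schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical step schema: all element and attribute names are illustrative.
step_xml = """
<step id="4.2" type="decision">
  <instruction>Verify pump discharge pressure is within limits.</instruction>
  <input source="plant-db" tag="PMP-101-PT" low="120" high="150" unit="psig"/>
  <onFail goto="4.5"/>
</step>
"""

step = ET.fromstring(step_xml)
print(step.get("id"), step.get("type"))          # step identity and behavior
print(step.findtext("instruction"))              # text shown to the field worker
sensor = step.find("input")                      # live value the CBPS would fetch
print(sensor.get("tag"), sensor.get("low"), "-", sensor.get("high"), sensor.get("unit"))
```

The "type" attribute is what lets the system decide whether to render the step as plain instruction, a decision point, or a data-entry field, without the procedure writer reprogramming the CBPS.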
Whose Ethics, Whose Accountability? A Debate about University Research Ethics Committees
ERIC Educational Resources Information Center
Hoecht, Andreas
2011-01-01
Research ethics approval procedures and research ethics committees (RECs) are now well-established in most Western Universities. RECs base their judgements on an ethics code that has been developed by the health and biomedical sciences research community and that is widely considered to be universally valid regardless of discipline. On the other…
The Judgement Processes Involved in the Moderation of Teacher-Assessed Projects
ERIC Educational Resources Information Center
Crisp, Victoria
2017-01-01
Classroom-based assessments have the potential to enhance validity by facilitating the assessment of important skills that are difficult to assess in written examinations. Such assessments tend to be marked by teachers. To ensure consistent marking standards, quality assurance procedures are needed. In the context of continued debate over the…
Confidence Analyses of Self-Interpretation and Self-Description in Depressive Behaviour
ERIC Educational Resources Information Center
Rothuber, Helfried; Leibetseder, Max; Mitterauer, Bernhard
2014-01-01
The present paper represents an investigation in the procedure to validate a new questionnaire (Salzburg Subjective Behavioural Analysis, SSBA). This questionnaire is based on a new approach to the diagnosis of depressive behaviour. It is hypothesized that a patient suffering from a depressive disorder loses the ability to produce one or more…
Development and Face Validation of Strategies for Improving Consultation Skills
ERIC Educational Resources Information Center
Lefroy, Janet; Thomas, Adam; Harrison, Chris; Williams, Stephen; O'Mahony, Fidelma; Gay, Simon; Kinston, Ruth; McKinley, R. K.
2014-01-01
While formative workplace based assessment can improve learners' skills, it often does not because the procedures used do not facilitate feedback which is sufficiently specific to scaffold improvement. Provision of pre-formulated strategies to address predicted learning needs has potential to improve the quality and automate the provision of…
Fleury, Christopher M; Schwitzer, Jonathan A; Hung, Rex W; Baker, Stephen B
2018-01-01
Before the creation and validation of the FACE-Q by Pusic et al., adverse event types and incidences following facial cosmetic procedures were objectively measured and reported by physicians, potentially leading to misrepresentation of the true patient experience. This article analyzes and compares adverse event data from both the FACE-Q and recent review articles, incorporating patient-reported adverse event data to improve patient preparation for facial cosmetic procedures. FACE-Q adverse event data were extracted from peer-reviewed validation articles for face lift, rhinoplasty, and blepharoplasty, and these data were compared against adverse effect risk data published in recent Continuing Medical Education/Maintenance of Certification and other articles regarding the same procedures. The patient-reported and physician-reported adverse event data sets contain overlapping elements, but each also contains unique elements; the data sets represent differing viewpoints. Furthermore, patient-reported outcomes from the FACE-Q provided incidence data that had not previously been reported. In the growing facial cosmetic surgery industry, patient perspective is critical as a determinant of success; therefore, incorporation of evidence-based patient-reported outcome data will not only improve patient expectations and overall experience, but will also reveal adverse event incidences that were previously unknown. Given that there is incomplete overlap between patient-reported and physician-reported adverse events, presentation of both data sets in the consultation setting will improve patient preparation. Furthermore, use of validated tools such as the FACE-Q will allow surgeons to audit themselves critically.
Flores, María Isabel Alarcón; Romero-González, Roberto; Frenich, Antonia Garrido; Vidal, José Luis Martínez
2011-07-01
A new method has been developed and validated for the simultaneous analysis of different phytohormones (auxins, cytokinins and gibberellins) in vegetables. The compounds were extracted using a QuEChERS-based method (an acronym for quick, easy, cheap, effective, rugged and safe). The separation and determination of the selected phytohormones were carried out by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS), using an electrospray ionization (ESI) source in positive and negative ion modes. The method was validated and mean recoveries were evaluated at three concentration levels (50, 100 and 250 μg/kg), ranging from 75 to 110% at the three levels assayed. Intra- and interday precisions, expressed as relative standard deviations (RSDs), were lower than 20 and 25%, respectively. Limits of quantification (LOQs) were equal to or lower than 10 μg/kg. The developed procedure was applied to seven courgette samples, and naphthylacetic acid, naphthylacetamide and benzyladenine were found in some of the analysed samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Validity of diagnoses, procedures, and laboratory data in Japanese administrative data.
Yamana, Hayato; Moriwaki, Mutsuko; Horiguchi, Hiromasa; Kodan, Mariko; Fushimi, Kiyohide; Yasunaga, Hideo
2017-10-01
Validation of recorded data is a prerequisite for studies that utilize administrative databases. The present study evaluated the validity of diagnoses and procedure records in the Japanese Diagnosis Procedure Combination (DPC) data, along with laboratory test results in the newly introduced Standardized Structured Medical Record Information Exchange (SS-MIX) data. Between November 2015 and February 2016, we conducted chart reviews of 315 patients hospitalized between April 2014 and March 2015 in four middle-sized acute-care hospitals in Shizuoka, Kochi, Fukuoka, and Saga Prefectures and used them as reference standards. The sensitivity and specificity of the DPC data in identifying 16 diseases and 10 common procedures were calculated. The accuracy of the SS-MIX data for 13 laboratory test results was also examined. The specificity of diagnoses in the DPC data exceeded 96%, while the sensitivity was below 50% for seven diseases and variable across diseases. When limited to primary diagnoses, the sensitivity and specificity were 78.9% and 93.2%, respectively. The sensitivity of procedure records exceeded 90% for six procedures, and the specificity exceeded 90% for nine procedures. Agreement between the SS-MIX data and the chart reviews was above 95% for all 13 items. The validity of diagnoses and procedure records in the DPC data and laboratory results in the SS-MIX data was high in general, supporting their use in future studies. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
Depolarization Lidar Determination Of Cloud-Base Microphysical Properties
NASA Astrophysics Data System (ADS)
Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; de Roode, S.; Siebesma, A. P.
2016-06-01
The links between multiple-scattering-induced depolarization and cloud microphysical properties (e.g. cloud particle number density, effective radius, water content) have long been recognised. Previous efforts to use depolarization information in a quantitative manner to retrieve cloud microphysical properties have also been undertaken, but with limited scope and, arguably, success. In this work we present a retrieval procedure applicable to liquid stratus clouds with (quasi-)linear LWC profiles and (quasi-)constant number density profiles in the cloud-base region. This set of assumptions allows us to employ a fast and robust inversion procedure based on a lookup-table approach applied to extensive lidar Monte-Carlo multiple-scattering calculations. An example validation case is presented where the results of the inversion procedure are compared with simultaneous cloud radar observations. In non-drizzling conditions it was found, in general, that the lidar-only inversion results can be used to predict the radar reflectivity within the radar calibration uncertainty (2-3 dBZ). Results of a comparison between ground-based aerosol number concentrations and lidar-derived cloud-base number concentrations are also presented. The observed relationship between the two quantities is consistent with the results of previous studies based on aircraft-based in situ measurements.
A procedure to estimate proximate analysis of mixed organic wastes.
Zaher, U; Buffiere, P; Steyer, J P; Chen, S
2009-04-01
In waste materials, proximate analysis, which measures the total concentrations of carbohydrate, protein, and lipid in solid wastes, is challenging as a result of the heterogeneous and solid nature of wastes. This paper presents a new procedure that was developed to estimate this complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on the mass balance of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balance of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial for studying solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrate, protein, lipid, and inert concentrations were highly correlated with the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
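The elemental-continuity idea reduces to a linear system: each measured total (C, H, O, COD, ...) is a weighted sum of the unknown fractions. A minimal sketch with illustrative composition coefficients, not the paper's calibrated values:

```python
import numpy as np

# Columns: carbohydrate, protein, lipid, inert fractions (g per g of waste).
# Rows: measured totals of C, H, O and COD. Coefficients are illustrative
# textbook-style values (e.g. glucose for carbohydrate), not the paper's.
A = np.array([
    [0.400, 0.530, 0.774, 0.30],   # g C per g of each fraction
    [0.067, 0.070, 0.118, 0.04],   # g H per g
    [0.533, 0.226, 0.108, 0.30],   # g O per g
    [1.07,  1.42,  2.90,  0.00],   # g COD per g
])
b = np.array([0.467, 0.071, 0.373, 1.254])  # hypothetical measurements per g waste

fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
for name, f in zip(["carbohydrate", "protein", "lipid", "inert"], fractions):
    print(f"{name}: {f:.2f} g/g")   # ~0.50, 0.20, 0.15, 0.15 for this b
```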
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
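The first-order moment method propagates input variances through sensitivity derivatives: sigma_f^2 is approximately the sum over i of (df/dx_i)^2 * sigma_i^2 for independent inputs. A sketch using central differences in place of the CFD code's analytic sensitivity derivatives:

```python
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    """Approximate mean and std of f(x) for independent normal inputs via
    the first-order moment method. Derivatives here come from central
    differences; the paper uses the CFD code's own sensitivity derivatives."""
    mu = np.asarray(mu, float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)
    return f(mu), np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))

# Toy stand-in for a CFD output as a function of two uncertain inputs.
g = lambda x: x[0] ** 2 + np.sin(x[1])
print(first_order_moments(g, mu=[1.0, 0.5], sigma=[0.05, 0.02]))
```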
Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
This paper describes a novel approach based on the use of coherence functions and statistical theory for sensor validation in a harsh environment. Through the use of aligned and unaligned coherence functions and statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, as calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, which demonstrates how robust the technique is.
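A sketch of the aligned/unaligned comparison using scipy's coherence estimator; the signals are synthetic, and misalignment is imposed with a simple circular shift:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs, n = 1000.0, 8192
common = rng.normal(size=n)             # shared signal seen by both sensors
x = common + 0.3 * rng.normal(size=n)   # sensor 1 with independent noise
y = common + 0.3 * rng.normal(size=n)   # sensor 2 with independent noise

_, c_aligned = coherence(x, y, fs=fs, nperseg=512)
_, c_unaligned = coherence(x, np.roll(y, 2048), fs=fs, nperseg=512)  # destroy alignment

# A healthy sensor pair shows high aligned and near-floor unaligned coherence.
print(f"mean aligned: {c_aligned.mean():.2f}, mean unaligned: {c_unaligned.mean():.2f}")
```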
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be reproduced by the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
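The ODE-to-SPN translation ultimately amounts to executing the reaction rates stochastically; the sketch below is a generic Gillespie (SSA) simulation of a two-transition birth/death net, purely illustrative and not the published cell-cycle model.

```python
# Generic Gillespie (SSA) sketch of executing deterministic reaction rates
# stochastically; this two-transition birth/death net is illustrative,
# not the published cell-cycle model.
import numpy as np

rng = np.random.default_rng(5)
k_syn, k_deg = 5.0, 0.1                    # assumed synthesis/degradation rates
x, t, t_end = 0, 0.0, 200.0
while t < t_end:
    rates = np.array([k_syn, k_deg * x])   # propensities of the two transitions
    total = rates.sum()
    t += rng.exponential(1.0 / total)      # waiting time until the next firing
    x += 1 if rng.random() < rates[0] / total else -1
print("deterministic steady state:", k_syn / k_deg, "| stochastic end state:", x)
```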
Benchmarking gate-based quantum computers
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are simple, scalable, and sensitive to gate errors, making them very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
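As a rough illustration of why identity circuits make sensitive benchmarks, the sketch below uses an invented error model (each gate layer flips a qubit with probability p_err, which is an assumption, not the paper's method) and tracks the fraction of shots returning the all-zero string, which decays with circuit depth.

```python
# Toy error model (an assumption, not the paper's method): each layer of an
# identity-equivalent circuit flips a qubit with probability p_err; the
# benchmark statistic is the fraction of shots returning the all-zero string.
import numpy as np

def survival(n_qubits, n_layers, p_err, shots=10_000, seed=0):
    rng = np.random.default_rng(seed)
    flips = rng.random((shots, n_layers, n_qubits)) < p_err
    net_flip = flips.sum(axis=1) % 2          # net bit flip per qubit
    return np.mean(~net_flip.any(axis=1))     # fraction of all-zero outcomes

for depth in (2, 8, 32):
    print(depth, survival(n_qubits=5, n_layers=depth, p_err=0.01))
```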
NASA Technical Reports Server (NTRS)
Sams, Clarence; Crucian, Brian; Stowe, Raymond; Pierson, Duane; Mehta, Satish; Morukov, Boris; Uchakin, Peter; Nehlsen-Cannarella, Sandra
2008-01-01
Validation of Procedures for Monitoring Crew Member Immune Function - Short Duration Biological Investigation (Integrated Immune-SDBI) will assess the clinical risks resulting from the adverse effects of space flight on the human immune system and will validate a flight-compatible immune monitoring strategy. Immune system changes will be monitored by collecting and analyzing blood, urine, and saliva samples from crewmembers before, during, and after space flight.
NASA Astrophysics Data System (ADS)
Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.
2018-01-01
A computer programming course is largely theoretical. Sufficient practice is necessary to facilitate conceptual understanding and encourage creativity in designing computer programs/animations. The development of a tutorial video in an Android-based blended-learning environment is needed to guide students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students' understanding of the concepts, materials, and procedures of programming/animation making in detail. This study employed a Research and Development method adapting Thiagarajan's 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible, receiving an average score of 92.9%. It was also revealed that students' conceptual understanding, skills, and creativity in designing computer programs/animations improved significantly.
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as design and analysis tools. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four-phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code-to-data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.
2013-01-01
We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
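A hedged sketch of the underlying probability argument, with invented numbers: assuming independence, the chance that an arbitrary compound matches the precursor m/z, each monitored product ion, and the retention time is the product of the individual match probabilities, so monitoring an additional product ion shrinks the false-identification risk multiplicatively.

```python
# Hedged sketch of the probability argument with invented numbers; the
# independence assumption and all probabilities below are illustrative only.
p_precursor = 1e-3    # P(matching precursor m/z window)      (invented)
p_product = 5e-2      # P(one matching product ion)           (invented)
p_rt = 0.05           # P(co-elution within the RT tolerance) (invented)

for n_products in (1, 2, 3):   # adding product ions shrinks the risk
    p_false = p_precursor * p_product**n_products * p_rt
    print(f"{n_products} product ion(s): false-ID probability ~ {p_false:.1e}")
```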
Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S
2011-09-01
Although it is well known that automation can significantly improve the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and practical procedures for working with robotic liquid-handling systems. Several comprehensive automation-assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly, and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to widely implement automation into the routine bioanalysis of samples in support of drug-development programs.
Ghazi, Ahmed; Campbell, Timothy; Melnyk, Rachel; Feng, Changyong; Andrusco, Alex; Stone, Jonathan; Erturk, Erdal
2017-12-01
The restriction of resident hours, with an increasing focus on patient safety and a reduced caseload, has impacted surgical training. A complex and complication-prone procedure such as percutaneous nephrolithotomy (PCNL), with a steep learning curve, may create an unsafe environment for hands-on resident training. In this study, we validate a high-fidelity, inanimate PCNL model within a full-immersion simulation environment. Anatomically correct models of the human pelvicaliceal system, kidney, and relevant adjacent structures were created using polyvinyl alcohol hydrogels and three-dimensional-printed injection molds. All steps of a PCNL were simulated, including percutaneous renal access, nephroscopy, and lithotripsy. Five experts (>100 caseload) and 10 novices (<20 caseload) from both urology (full procedure) and interventional radiology (access only) departments completed the simulation. Face and content validity were calculated using model ratings for similarity to the real procedure and usefulness as a training tool. Differences in performance among groups with various levels of experience, measured with clinically relevant procedural metrics, were used to calculate construct validity. The model was determined to have excellent face and content validity, with average scores of 4.5/5.0 and 4.6/5.0, respectively. There were significant differences between novice and expert operative metrics, including mean fluoroscopy time, the number of percutaneous access attempts, and the number of times the needle was repositioned. Experts achieved better stone clearance with fewer procedural complications. We demonstrated the face, content, and construct validity of an inanimate, full-task trainer for PCNL. Construct validity between experts and novices was demonstrated using the incorporated procedural metrics, which permitted accurate assessment of performance. While hands-on training under supervision remains an integral part of any residency, this full-immersion simulation provides a comprehensive tool for surgical skills development and evaluation before hands-on exposure.
Zerbini, Francesca; Zanella, Ilaria; Fraccascia, Davide; König, Enrico; Irene, Carmela; Frattini, Luca F; Tomasi, Michele; Fantappiè, Laura; Ganfini, Luisa; Caproni, Elena; Parri, Matteo; Grandi, Alberto; Grandi, Guido
2017-04-24
The exploitation of the CRISPR/Cas9 machinery coupled to lambda (λ) recombinase-mediated homologous recombination (recombineering) is becoming the method of choice for genome editing in E. coli. First proposed by Jiang and co-workers, the strategy has been subsequently fine-tuned by several authors who demonstrated, using a few selected loci, that the efficiency of mutagenesis (number of mutant colonies over total number of colonies analyzed) can be extremely high (up to 100%). However, from published data it is difficult to appreciate the robustness of the technology, defined as the number of successfully mutated loci over the total number of targeted loci. This information is particularly relevant in high-throughput genome editing, where repetition of experiments to rescue missing mutants would be impractical. This work describes a "brute force" validation activity, which culminated in the definition of a robust, simple, and rapid protocol for single or multiple gene deletions. We first set up our own version of the CRISPR/Cas9 protocol and then evaluated the mutagenesis efficiency by changing different parameters, including the sequence of guide RNAs, the length and concentration of donor DNAs, and the use of single-stranded and double-stranded donor DNAs. We then validated the optimized conditions by targeting 78 "dispensable" genes. This work led to the definition of a protocol, featuring the use of double-stranded synthetic donor DNAs, which guarantees mutagenesis efficiencies consistently higher than 10% and a robustness of 100%. The procedure can also be applied for simultaneous gene deletions. This work defines for the first time the robustness of a CRISPR/Cas9-based protocol based on a large sample size. Since the technical solutions proposed here can be applied to other similar procedures, the data could be of general interest for the scientific community working on bacterial genome editing and, in particular, for those involved in synthetic biology projects requiring high-throughput procedures.
Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo
2013-01-01
A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol:phosphoric acid solution (pH 2.5) (35:65, v/v). Column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear in the diltiazem concentration range of 0.5–50 μg/mL (r² = 0.9996). Precision was evaluated by replicate analysis, in which percent relative standard deviation (%RSD) values for peak areas were below 2.0. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure proved suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778
Williams, Mark R; McKeown, Andrew; Dexter, Franklin; Miner, James R; Sessler, Daniel I; Vargo, John; Turk, Dennis C; Dworkin, Robert H
2016-01-01
Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials, with the consequent lack of consistency among measures, making comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures support their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures in only a few categories of procedures. Typically, studies only examined 1 or 2 aspects of scale validity. The results are likely unique to the specific clinical settings they were tested in. Certain scales, for example, those requiring motor stimulation, are unsuitable to evaluate sedation for procedures where movement is prohibited (e.g., magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.
Validation of Mission Plans Through Simulation
NASA Astrophysics Data System (ADS)
St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.
2002-01-01
The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or several functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high-fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers, and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations in mind, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard against system or human errors that can only be detected with high-fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM/TC encoders and decoders. This paper discusses the use of simulation in the context of space mission planning, describes the projects under way, and proposes additional avenues of investigation and development.
Shortt, Samuel E.D.; Shaw, Ralph A.; Elliott, David; Mackillop, William J.
2004-01-01
Background Provincial governments require timely, economical methods to monitor surgical waiting periods. Although the use of prospective procedure-specific registers would be the ideal method, a less elaborate system has been proposed that is based on physician billing data. This study assessed the validity of using the date of the last service billed prior to surgery as a proxy for the beginning of the post-referral, pre-surgical waiting period. Method We examined charts for 31 824 elective surgical encounters between 1992 and 1996 at an Ontario teaching hospital. The date of the last service before surgery (the last billing date) was compared with the date of the consultant's letter indicating a decision to book surgery (i.e., to begin waiting). Results Several surgical specialties (excluding cardiac, orthopedic, and gynecologic) showed a close correlation between the dates of the last pre-surgery visit and those of the actual decision to place the patient on the waiting list. Similar results were found for 12 of 15 individually studied procedures, including some orthopedic and gynecological procedures. Conclusion Used judiciously, billing data are a timely, inexpensive, and generally accurate means by which provincial governments could monitor trends in waiting times for appropriately selected surgical procedures. PMID:15264378
Guidance law development for aeroassisted transfer vehicles using matched asymptotic expansions
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Melamed, Nahum
1993-01-01
This report addresses and clarifies a number of issues related to the Matched Asymptotic Expansion (MAE) analysis of skip trajectories, or any class of problems that give rise to inner layers that are not associated directly with satisfying boundary conditions. The procedure for matching inner and outer solutions, and using the composite solution to satisfy boundary conditions, is developed and rigorously followed to obtain a set of algebraic equations for the problem of inclination change with minimum energy loss. A detailed evaluation of the zeroth-order guidance algorithm for aeroassisted orbit transfer is performed. It is shown that by exploiting the structure of the MAE solution procedure, the original problem, which requires the solution of a set of 20 implicit algebraic equations, can be reduced to a problem of 6 implicit equations in 6 unknowns. A solution that is near optimal and requires a minimum of computation, and thus can be implemented in real time and on board the vehicle, has been obtained. Guidance law implementation entails treating the current state as a new initial state and repetitively solving the zeroth-order MAE problem to obtain the feedback controls. Finally, a general procedure is developed for constructing a MAE solution, up to first order, of the Hamilton-Jacobi-Bellman equation based on the method of characteristics. The development is valid for a class of perturbation problems whose solution exhibits two-time-scale behavior. A regular expansion for problems of this type is shown to be inappropriate, since it is not valid over a narrow range of the independent variable; that is, it is not uniformly valid. Of particular interest here is the manner in which matching and boundary conditions are enforced when the expansion is carried out to first order. Two cases are distinguished: one where the left boundary condition coincides with, or lies to the right of, the singular region, and another where the left boundary condition lies to the left of the singular region. A simple example is used to illustrate the procedure, where the obtained solution is uniformly valid to O(ε²). The potential application of this procedure to aeroassisted plane change is also described and partially evaluated.
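For orientation, the composite solution referred to here follows the standard MAE construction sketched below; the notation is the generic textbook form, not the report's specific equations.

```latex
% Standard additive composite of matched asymptotic expansions: the common
% part of the inner and outer expansions in the overlap region is
% subtracted so that the composite is uniformly valid.
\[
  y_{\mathrm{comp}}(t;\varepsilon)
    = y_{\mathrm{outer}}(t;\varepsilon)
    + y_{\mathrm{inner}}\!\left(t/\varepsilon;\varepsilon\right)
    - y_{\mathrm{common}}(t;\varepsilon).
\]
% Carried to first order, such a composite can be uniformly valid to
% O(epsilon^2), as in the report's illustrative example.
```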
Chung, Cecilia P; Rohan, Patricia; Krishnaswami, Shanthi; McPheeters, Melissa L
2013-12-30
To review the evidence supporting the validity of algorithms based on billing, procedural, or diagnosis codes, or pharmacy claims, used to identify patients with rheumatoid arthritis (RA) in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to RA, and the reference lists of included studies were searched. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria and extracted the data. Data collected included participant and algorithm characteristics. Nine studies reported validation of computer algorithms based on International Classification of Diseases (ICD) codes, with or without free text, medication use, laboratory data, and the need for a diagnosis by a rheumatologist. These studies yielded positive predictive values (PPV) ranging from 34 to 97% for identifying patients with RA. Higher PPVs were obtained with the use of at least two ICD and/or procedure codes (ICD-9 code 714 and others), the requirement of a prescription of a medication used to treat RA, or the requirement of participation of a rheumatologist in patient care. For example, the PPV increased from 66 to 97% when the use of disease-modifying antirheumatic drugs and the presence of a positive rheumatoid factor were required. There have been substantial efforts to propose and validate algorithms to identify patients with RA in automated databases. Algorithms that include more than one code and incorporate medications or laboratory data and/or require a diagnosis by a rheumatologist may increase the PPV. Copyright © 2013 Elsevier Ltd. All rights reserved.
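The PPV trade-off described here can be made concrete with invented counts chosen to mirror the reported 66% and 97% figures: a stricter case definition removes more false positives than true positives.

```python
# Invented counts chosen to mirror the reported 66% and 97% PPVs: stricter
# case definitions remove more false positives than true positives.
def ppv(tp, fp):
    return tp / (tp + fp)

print(ppv(tp=660, fp=340))   # single ICD-9 714 code only   -> 0.66
print(ppv(tp=580, fp=18))    # + DMARD use and positive RF  -> ~0.97
```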
Pizzini, Matias; Robinson, Ashley; Yanez, Dania; Hanney, William J.
2013-01-01
Purpose/Aim: The purpose of this study was to investigate the reliability, minimal detectable change (MDC), and concurrent validity of active spinal mobility measurements using a gravity-based bubble inclinometer and iPhone® application. Materials/Methods: Two investigators each used a bubble inclinometer and an iPhone® with inclinometer application to measure total thoracolumbo-pelvic flexion, isolated lumbar flexion, total thoracolumbo-pelvic extension, and thoracolumbar lateral flexion in 30 asymptomatic participants using a blinded repeated-measures design. Results: The procedures used in this investigation for measuring spinal mobility yielded good intrarater and interrater reliability, with Intraclass Correlation Coefficients (ICC) for bubble inclinometry ≥ 0.81 and the iPhone® ≥ 0.80. The MDC90 for the interrater analysis ranged from 4° to 9°. The concurrent validity between bubble inclinometry and the iPhone® application was good, with ICC values ≥ 0.86. The 95% level of agreement indicates that although these measuring instruments are equivalent, individual differences of up to 18° may exist when using these devices interchangeably. Conclusions: The bubble inclinometer and iPhone® possess good intrarater and interrater reliability as well as concurrent validity when strict measurement procedures are adhered to. This study provides preliminary evidence to suggest that smartphone applications may offer clinical utility comparable to inclinometry for quantifying spinal mobility. Clinicians should be aware of the potential disagreement when using these devices interchangeably. Level of Evidence: 2b (Observational study of reliability) PMID:23593551
Network-Based Method for Identifying Co-Regeneration Genes in Bone, Dentin, Nerve and Vessel Tissues
Pan, Hongying; Zhang, Yu-Hang; Feng, Kaiyan; Kong, XiangYin; Cai, Yu-Dong
2017-01-01
Bone and dental diseases are serious public health problems. Most current clinical treatments for these diseases can produce side effects. Regeneration is a promising therapy for bone and dental diseases, yielding natural tissue recovery with few side effects. Because soft tissues inside the bone and dentin are densely populated with nerves and vessels, the study of bone and dentin regeneration should also consider the co-regeneration of nerves and vessels. In this study, a network-based method to identify co-regeneration genes for bone, dentin, nerve and vessel was constructed based on an extensive network of protein–protein interactions. Three procedures were applied in the network-based method. The first procedure, searching, sought the shortest paths connecting regeneration genes of one tissue type with regeneration genes of other tissues, thereby extracting possible co-regeneration genes. The second procedure, testing, employed a permutation test to evaluate whether possible genes were false discoveries; these genes were excluded by the testing procedure. The last procedure, screening, employed two rules, the betweenness ratio rule and interaction score rule, to select the most essential genes. A total of seventeen genes were inferred by the method, which were deemed to contribute to co-regeneration of at least two tissues. All these seventeen genes were extensively discussed to validate the utility of the method. PMID:28974058
Fleiszer, David; Hoover, Michael L; Posel, Nancy; Razek, Tarek; Bergman, Simon
Undergraduate medical students at a large academic trauma center are required to manage a series of online virtual trauma patients as a mandatory exercise during their surgical rotation. Clinical reasoning during undergraduate medical education can be difficult to assess. The purpose of the study was to determine whether we could use components of the students' virtual patient management to measure changes in their clinical reasoning over the course of the clerkship year. In order to accomplish this, we decided to determine if the use of scoring rubrics could change the traditional subjective assessment to a more objective evaluation. Two groups of students, one at the beginning of clerkship (Juniors) and one at the end of clerkship (Seniors), were chosen. Each group was given the same virtual patient case, a clinical scenario based on the Advanced Trauma Life Support (ATLS) Primary Trauma Survey, which had to be completed during their trauma rotation. The learner was required to make several key patient management choices based on their clinical reasoning, which would take them along different routes through the case. At the end of the case they had to create a summary report akin to sign-off. These summaries were graded independently by two domain "Experts" using a traditional subjective surgical approach to assessment and by two "Non-Experts" using two internally validated scoring rubrics. One rubric assessed procedural or domain knowledge (Procedural Rubric), while the other rubric highlighted semantic qualifiers (Semantic Rubric). Each of the rubrics was designed to reflect established components of clinical reasoning. Student's t-tests were used to compare the rubric scores for the two groups and Cohen's d was used to determine effect size. Kendall's τ was used to compare the difference between the two groups based on the "Expert's" subjective assessment. Inter-rater reliability (IRR) was determined using Cronbach's alpha. The Seniors did better than the Juniors with respect to "Procedural" issues but not for "Semantic" issues using the rubrics as assessed by the "Non-Experts". The average Procedural rubric score for the Senior group was 59% ± 13% while for the junior group, it was 51% ± 12% (t (80) = 2.715; p = 0.008; Cohen's d = 1.53). The average Semantic rubric score for the Senior group was 31% ± 15% while for the Junior group, it was 28% ± 14% (t (80) = 1.010; p = .316, ns). There was no statistical difference in the marks given to the Senior versus Junior groups by the "Experts" (Kendall's τ = 0.182, p = 0.07). The IRR between the "Non-Experts" using the rubrics was higher than the IRR of the "Experts" using the traditional surgical approach to assessment. The Cronbach's alpha for the Procedural and Semantic rubrics was 0.94 and 0.97, respectively, indicating very high IRR. The correlation between the Procedural rubric scores and "Experts" assessment was approximately r = 0.78, and that between the Semantic rubric and the "Experts" assessment was roughly r = 0.66, indicating high concurrent validity for the Procedural rubric and moderately high validity for the Semantic rubric. Clinical reasoning, as measured by some of its "procedural" features, improves over the course of the clerkship year. Rubrics can be created to objectively assess the summary statement of an online interactive trauma VP for "procedural" issues but not for "semantic" issues. Using IRR as a measure, the quality of assessment is improved using the rubrics. 
The "Procedural" rubric appears to measure changes in clinical reasoning over the course of 3rd-year undergraduate clinical studies. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Trouli, Marianna N; Vernon, Howard T; Kakavelakis, Kyriakos N; Antonopoulou, Maria D; Paganas, Aristofanis N; Lionis, Christos D
2008-01-01
Background Neck pain is a highly prevalent condition resulting in major disability. Standard scales for measuring disability in patients with neck pain have a pivotal role in research and clinical settings. The Neck Disability Index (NDI) is a valid and reliable tool, designed to measure disability in activities of daily living due to neck pain. The purpose of our study was the translation and validation of the NDI in a Greek primary care population with neck complaints. Methods The original version of the questionnaire was used. Based on international standards, the translation strategy comprised forward translations, reconciliation, backward translation and pre-testing steps. The validation procedure concerned the exploration of internal consistency (Cronbach alpha), test-retest reliability (Intraclass Correlation Coefficient, Bland and Altman method), construct validity (exploratory factor analysis) and responsiveness (Spearman correlation coefficient, Standard Error of Measurement and Minimal Detectable Change) of the questionnaire. Data quality was also assessed through completeness of data and floor/ceiling effects. Results The translation procedure resulted in the Greek modified version of the NDI. The latter was culturally adapted through the pre-testing phase. The validation procedure raised a large amount of missing data due to low applicability, which were assessed with two methods. Floor or ceiling effects were not observed. Cronbach alpha was calculated as 0.85, which was interpreted as good internal consistency. Intraclass correlation coefficient was found to be 0.93 (95% CI 0.84–0.97), which was considered as very good test-retest reliability. Factor analysis yielded one factor with Eigenvalue 4.48 explaining 44.77% of variance. The Spearman correlation coefficient (0.3; P = 0.02) revealed some relation between the change score in the NDI and Global Rating of Change (GROC). The SEM and MDC were calculated as 0.64 and 1.78 respectively. Conclusion The Greek version of the NDI measures disability in patients with neck pain in a reliable, valid and responsive manner. It is considered a useful tool for research and clinical settings in Greek Primary Health Care. PMID:18647393
NASA Astrophysics Data System (ADS)
Guchhait, Shyamal; Banerjee, Biswanath
2018-04-01
In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free-vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty-based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.
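For reference, the constitutive relation error functional for linear elasticity has the standard form below; this is textbook notation, while the paper's variant works with curvature/moment fields and incorporates the measured data through a penalty term.

```latex
% Constitutive relation error: among kinematically admissible displacements
% u and statically admissible stresses sigma, minimize the energy-norm
% mismatch of the constitutive law sigma = C : eps(u).
\[
  E(u,\sigma)
  = \tfrac{1}{2}\int_{\Omega}
      \bigl(\sigma - \mathbf{C}:\varepsilon(u)\bigr) :
      \mathbf{C}^{-1} :
      \bigl(\sigma - \mathbf{C}:\varepsilon(u)\bigr)\,\mathrm{d}\Omega ,
\]
% which vanishes exactly when the constitutive relation holds.
```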
NASA Technical Reports Server (NTRS)
Glass, David E.; Robinson, James C.
1990-01-01
A procedure is presented to allow the use of temperature-dependent mechanical properties in the Engineering Analysis Language (EAL) System for solid structural elements. This is accomplished by including a modular runstream in the main EAL runstream. The procedure is applicable to models with multiple materials and with anisotropic properties, and can easily be incorporated into an existing EAL runstream. The procedure (which is applicable to EAL elastic solid elements) is described in detail, followed by a description of the validation of the routine. A listing of the EAL runstream used to validate the procedure is included in the Appendix.
Creating an open access cal/val repository via the LACO-Wiki online validation platform
NASA Astrophysics Data System (ADS)
Perger, Christoph; See, Linda; Dresel, Christopher; Weichselbaum, Juergen; Fritz, Steffen
2017-04-01
There is a major gap in the amount of in-situ data available on land cover and land use, either as field-based ground truth information or from image interpretation, both of which are used for the calibration and validation (cal/val) of products derived from Earth Observation. Although map producers generally publish their confusion matrices and the accuracy measures associated with their land cover and land use products, the cal/val data (also referred to as reference data) are rarely shared in an open manner. Although there have been efforts in compiling existing reference datasets and making them openly available, e.g. through the GOFC/GOLD (Global Observation for Forest Cover and Land Dynamics) portal or the European Commission's Copernicus Reference Data Access (CORDA), this represents a tiny fraction of the reference data collected and stored locally around the world. Moreover, the validation of land cover and land use maps is usually undertaken with tools and procedures specific to a particular institute or organization due to the lack of standardized validation procedures; thus, there are currently no incentives to share the reference data more broadly with the land cover and land use community. In an effort to provide a set of standardized, online validation tools and to build an open repository of cal/val data, the LACO-Wiki online validation portal has been developed, which will be presented in this paper. The portal contains transparent, documented and reproducible validation procedures that can be applied to local as well as global products. LACO-Wiki was developed through a user consultation process that resulted in a 4-step wizard-based workflow, which supports the user from uploading the map product for validation, through to the sampling process and the validation of these samples, until the results are processed and a final report is created that includes a range of commonly reported accuracy measures. One of the design goals of LACO-Wiki has been to simplify the workflows as much as possible so that the tool can be used both professionally and in an educational or non-expert context. By using the tool for validation, the user agrees to share their validation samples and therefore contribute to an open access cal/val repository. Interest in the use of LACO-Wiki for validation of national land cover or related products has already been expressed, e.g. by national stakeholders under the umbrella of the European Environment Agency (EEA), and for global products by GOFC/GOLD and the Group on Earth Observation (GEO). Thus, LACO-Wiki has the potential to become the focal point around which an international land cover validation community could be built, and could significantly advance the state-of-the-art in land cover cal/val, particularly given recent developments in opening up of the Landsat archive and the open availability of Sentinel imagery. The platform will also offer open access to crowdsourced in-situ data, for example, from the recently developed LACO-Wiki mobile smartphone app, which can be used to collect additional validation information in the field, as well as to validation data collected via its partner platform, Geo-Wiki, where an already established community of citizen scientists collect land cover and land use data for different research applications.
Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E
2008-01-15
The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against the NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
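The mole-based isotope dilution relation underlying ID calibration of this kind has the standard textbook form below; the notation is assumed here, not taken from the paper.

```latex
% Textbook isotope-dilution relation: n_x, n_z are moles of analyte in the
% sample and the enriched spike; R is the reference-to-spike isotope ratio
% in the sample (x), the spike (z), and the measured blend (m); b is the
% abundance of the spike isotope in each material.
\[
  n_{x} \;=\; n_{z}\,\frac{b_{z}}{b_{x}}\,
  \frac{R_{z} - R_{m}}{R_{m} - R_{x}} .
\]
```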
Standards for Title VII Evaluations: Accommodation for Reality Constraints.
ERIC Educational Resources Information Center
Yap, Kim Onn
Two separate sets of minimum standards designed to guide the evaluation of bilingual projects are proposed. The first set relates to the process in which the evaluation activities are conducted. They include: validity of assessment procedures, validity and reliability of evaluation instruments, representativeness of findings, use of procedures for…
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
Testing for Factorial Invariance in the Context of Construct Validation
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
2010-01-01
This article describes the logic and procedures behind testing for factorial invariance across groups in the context of construct validation. The procedures include testing for configural, measurement, and structural invariance in the framework of multiple-group confirmatory factor analysis (CFA). The "forward" (sequential constraint imposition)…
Application and Evaluation of an Expert Judgment Elicitation Procedure for Correlations.
Zondervan-Zwijnenburg, Mariëlle; van de Schoot-Hubeek, Wenneke; Lek, Kimberley; Hoijtink, Herbert; van de Schoot, Rens
2017-01-01
The purpose of the current study was to apply and evaluate a procedure for eliciting expert judgments about correlations, and to update this information with empirical data. The result is a face-to-face group elicitation procedure whose central element is a trial roulette question that elicits experts' judgments expressed as distributions. During the elicitation procedure, a concordance probability question was used to provide feedback to the experts on their judgments. We evaluated the elicitation procedure in terms of validity and reliability by means of an application with a small sample of experts. Validity means that the elicited distributions accurately represent the experts' judgments. Reliability concerns the consistency of the elicited judgments over time. Four behavioral scientists provided their judgments with respect to the correlation between cognitive potential and academic performance for two separate populations enrolled at a specific school in the Netherlands that provides special education to youth with severe behavioral problems: youth with autism spectrum disorder (ASD), and youth with diagnoses other than ASD. Measures of face validity, feasibility, convergent validity, coherence, and intra-rater reliability showed promising results. Furthermore, the current study illustrates the use of the elicitation procedure and elicited distributions in a social science application. The elicited distributions were used as a prior for the correlation, and updated with data for both populations collected at the school of interest. The current study shows that the newly developed elicitation procedure combining the trial roulette method with the elicitation of correlations is a promising tool, and that the results of the procedure are useful as prior information in a Bayesian analysis.
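A minimal sketch (not the authors' software) of how an elicited distribution for a correlation can serve as a prior and be updated with data, using a grid approximation to the bivariate-normal likelihood; the Beta-shaped prior and the simulated data are assumptions for illustration.

```python
# Grid-approximation Bayesian update of a correlation: elicited prior times
# bivariate-normal likelihood.  Prior shape and data are invented.
import numpy as np
from scipy import stats

rho_grid = np.linspace(-0.99, 0.99, 397)
# Suppose the experts' trial-roulette histogram was smoothed into a
# Beta(8, 4) distribution rescaled to (-1, 1).
prior = stats.beta.pdf((rho_grid + 1) / 2, 8, 4)

rng = np.random.default_rng(2)
data = rng.multivariate_normal([0, 0], [[1, 0.45], [0.45, 1]], size=60)

def loglik(rho, xy):
    """Log-likelihood of a standard bivariate normal with correlation rho."""
    x, y = xy[:, 0], xy[:, 1]
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.sum(-0.5 * np.log(1 - rho**2) - 0.5 * q)

ll = np.array([loglik(r, data) for r in rho_grid])
post = prior * np.exp(ll - ll.max())     # stabilize before exponentiating
post /= post.sum()
print("posterior mean correlation:", (rho_grid * post).sum())
```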
[Computerized system validation of clinical researches].
Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel
2015-11-01
Validation is a documented process that provides a high degree of assurance that the computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition and continues through application and maintenance until system retirement and retention of the e-records, as required by regulatory rules. The objective is to clearly demonstrate that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies under the GCP standard, ensuring that the product meets its pre-determined attributes of specification, quality, safety, and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for CSV remains with the business process owner (the sponsor). In order to show that compliance in system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.
Testing coordinate measuring arms with a geometric feature-based gauge: in situ field trials
NASA Astrophysics Data System (ADS)
Cuesta, E.; Alvarez, B. J.; Patiño, H.; Telenti, A.; Barreiro, J.
2016-05-01
This work describes in detail the definition of a procedure for calibrating and evaluating coordinate measuring arms (AACMMs or CMAs). CMAs are portable coordinate measuring machines that have been widely accepted in industry despite their sensitivity to the skill and experience of the operator in charge of the inspection task. The procedure proposed here is based on the use of a dimensional gauge that incorporates multiple geometric features, specifically designed for evaluating the measuring technique when CMAs are used, at company facilities (workshops or laboratories) and by the usual operators who handle these devices in their daily work. After establishing the procedure and manufacturing the feature-based gauge, the research project was complemented with diverse in situ field tests performed with the collaboration of companies that use these devices in their inspection tasks. Some of the results are presented here, not only comparing different operators but also comparing different companies. The knowledge extracted from these experiments has allowed the procedure to be validated, the defects of the methodologies currently used for in situ inspections to be detected, and substantial improvements for increasing the reliability of these portable instruments to be proposed.
How to develop a standard operating procedure for sorting unfixed cells.
Schmid, Ingrid
2012-07-01
Written standard operating procedures (SOPs) are an important tool to ensure that recurring tasks in a laboratory are performed in a consistent manner. When the procedure covered in the SOP involves a high-risk activity, such as sorting unfixed cells using a jet-in-air sorter, safety elements are critical components of the document. The details outlined in the SOP on sort sample handling; sorter set-up, validation, operation, troubleshooting, and maintenance; personal protective equipment (PPE); and operator training are to be based on a careful risk assessment of the procedure. This review provides background information on the hazards associated with sorting of unfixed cells and the process used to arrive at the appropriate combination of facility design, instrument placement, safety equipment, and practices to be followed. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Hennings, Sara S.; Hughes, Kay E.
This paper provides a brief description of the development of the Diagnostic Assessments of Reading with Trial Teaching Strategies (DARTTS) program by F. G. Roswell and J. S. Chall. It also describes the editorial and statistical procedures that were used to validate the program for determining students' strengths and weaknesses in important areas…
Effect of Content Knowledge on Angoff-Style Standard Setting Judgments
ERIC Educational Resources Information Center
Margolis, Melissa J.; Mee, Janet; Clauser, Brian E.; Winward, Marcia; Clauser, Jerome C.
2016-01-01
Evidence to support the credibility of standard setting procedures is a critical part of the validity argument for decisions made based on tests that are used for classification. One area in which there has been limited empirical study is the impact of standard setting judge selection on the resulting cut score. One important issue related to…
Strength validation and fire endurance of glued-laminated timber beams
E. L. Schaffer; C. M. Marx; D. A. Bender; F. E. Woeste
A previous paper presented a reliability-based model to predict the strength of glued-laminated timber beams at both room temperature and during fire exposure. This Monte Carlo simulation procedure generates strength and fire endurance (time-to-failure, TTF) data for glued-laminated beams that allow assessment of mean strength and TTF as well as their variability.
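A schematic Monte Carlo in the spirit of this abstract; the lognormal lamination strengths and the weakest-link beam model below are invented for illustration, not taken from the paper.

```python
# Schematic Monte Carlo: sample lamination strengths, form a beam strength
# under an assumed weakest-link model, and summarize mean and variability.
import numpy as np

rng = np.random.default_rng(3)
n_beams, n_lams = 20_000, 8
lam_strength = rng.lognormal(mean=np.log(40.0), sigma=0.18,
                             size=(n_beams, n_lams))   # MPa, assumed
beam_strength = lam_strength.min(axis=1)               # weakest lamination governs
print(f"mean = {beam_strength.mean():.1f} MPa, "
      f"std = {beam_strength.std():.1f} MPa")
```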
Landsat TM Classifications For SAFIS Using FIA Field Plots
William H. Cooke; Andrew J. Hartsell
2001-01-01
Wall-to-wall Landsat Thematic Mapper (TM) classification efforts in Georgia require field validation. We developed a new crown modeling procedure based on Forest Health Monitoring (FHM) data to test Forest Inventory and Analysis (FIA) data. These models simulate the proportion of tree crowns that reflect light on an FIA subplot basis. We averaged subplot crown...
ERIC Educational Resources Information Center
Marston, Doug; Pickart, Mary; Reschly, Amy; Heistad, David; Muyskens, Paul; Tindal, Gerald
2007-01-01
The importance of early literacy instruction and its role in later reading proficiency is well established; however, measures and procedures to screen and monitor proficiency in the area of early literacy are less well researched. The purpose of this study was to (a) examine the technical adequacy and validity of early curriculum-based literacy…
ERIC Educational Resources Information Center
Kang, Soyeon; O'Reilly, Mark; Lancioni, Giulio; Falcomata, Terry S.; Sigafoos, Jeff; Xu, Ziwei
2013-01-01
We reviewed 14 experimental studies comparing different preference assessments for individuals with developmental disabilities that were published in peer-reviewed journals between 1985 and 2012. Studies were summarized based on the following six variables: (a) the number of participants, (b) the type of disability, (c) the number and type of…
BATTLE (Biomarker-based Approach of Targeted Therapy for Lung Cancer Elimination)
2010-04-01
340, and 347 with phenylalanine (F); therefore, this mutant should not be phosphorylated by EGFR. If our hypothesis is valid, then the 4F mutant...kinase 1 (ASK1) by the adapter protein Daxx. Science 281: 1860–1863. Cohen P, Klumpp S, Schelling DL. (1989). An improved procedure for identifying and
ERIC Educational Resources Information Center
van der Lans, Rikkert M.; van de Grift, Wim J. C. M.; van Veen, Klaas
2015-01-01
This study reports on the development of a teacher evaluation instrument, based on students' observations, which exhibits cumulative ordering in terms of the complexity of teaching acts. The study integrates theory on teacher development with theory on teacher effectiveness and applies a cross-validation procedure to verify whether teaching acts…
Sibbitt, Wilmer; Sibbitt, Randy R; Michael, Adrian A; Fu, Druce I; Draeger, Hilda T; Twining, Jon M; Bankhurst, Arthur D
2006-04-01
To evaluate physician control of needle and syringe during aspiration-injection syringe procedures by comparing the new reciprocating procedure syringe to a traditional conventional syringe. Twenty-six physicians were tested for their individual ability to control the reciprocating and conventional syringes in typical aspiration-injection procedures using a novel quantitative needle-based displacement procedure model. Subsequently, the physicians performed 48 clinical aspiration-injection (arthrocentesis) procedures on 32 subjects randomized to the reciprocating or conventional syringes. Clinical outcomes included procedure time, patient pain, and operator satisfaction. Multivariate modeling methods were used to determine the experimental variables in the syringe control model most predictive of clinical outcome measures. In the model system, the reciprocating syringe significantly improved physician control of the syringe and needle, with a 66% reduction in unintended forward penetration (p < 0.001) and a 68% reduction in unintended retraction (p < 0.001). In clinical arthrocentesis, improvements were also noted: a 30% reduction in procedure time (p < 0.03), a 57% reduction in patient pain (p < 0.001), and a 79% increase in physician satisfaction (p < 0.001). The variables in the experimental system (unintended forward penetration, unintended retraction, and operator satisfaction) independently predicted the outcomes of procedure time, patient pain, and physician satisfaction in the clinical study (p ≤ 0.001). The reciprocating syringe reduces procedure time and patient pain and improves operator satisfaction with the procedure syringe. The reciprocating syringe improves physician performance in both the validated quantitative needle-based displacement model and in real aspiration-injection syringe procedures, including arthrocentesis.
Implementation and Validation of a Laminar-to-Turbulent Transition Model in the Wind-US Code
NASA Technical Reports Server (NTRS)
Denissen, Nicholas A.; Yoder, Dennis A.; Georgiadis, Nicholas J.
2008-01-01
A bypass transition model has been implemented in the Wind-US Reynolds Averaged Navier-Stokes (RANS) solver. The model is based on the Shear Stress Transport (SST) turbulence model and was built starting from a previous SST-based transition model. Several modifications were made to enable (1) consistent solutions regardless of flow field initialization procedure and (2) fully turbulent flow beyond the transition region. This model is intended for flows where bypass transition, in which the transition process is dominated by large freestream disturbances, is the key transition mechanism as opposed to transition dictated by modal growth. Validation of the new transition model is performed for flows ranging from incompressible to hypersonic conditions.
Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G
2018-03-01
Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
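For illustration, the following is a minimal Python sketch of the kind of covariate test described above: it compares empirical AUCs between two strata of a covariate using a permutation analogue of the paper's bootstrap tests. It is not the npROCRegression package (which is an R library); all names, the binary covariate, and the permutation scheme are assumptions made for the sketch.

```python
import numpy as np

def empirical_auc(y, score):
    """Empirical AUC: probability that a diseased score exceeds a healthy one."""
    pos, neg = score[y == 1], score[y == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def covariate_effect_test(y, score, covariate, n_perm=2000, seed=0):
    """Resampling test for a difference in accuracy (AUC) between two
    covariate strata; a simplified stand-in for the paper's bootstrap tests.
    Assumes both diseased and healthy subjects appear in each stratum."""
    rng = np.random.default_rng(seed)
    levels = np.unique(covariate)

    def auc_gap(cov):
        a, b = cov == levels[0], cov == levels[1]
        return empirical_auc(y[a], score[a]) - empirical_auc(y[b], score[b])

    observed = auc_gap(covariate)
    null = np.array([auc_gap(rng.permutation(covariate))
                     for _ in range(n_perm)])
    p_value = float(np.mean(np.abs(null) >= abs(observed)))
    return observed, p_value
```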
Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral
2010-07-01
Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
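A minimal sketch of the validation strategy described above (permutation scrambling plus a held-out prediction set), written with scikit-learn. Classifying PCA scores with a k-nearest-neighbour rule is an assumption of this sketch; the original work predicted class membership directly from the principal component model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def validate_fingerprints(X, y, n_perm=100, seed=0):
    """Hold out a third of the samples as a prediction set, classify on PCA
    scores, and compare the accuracy to a null distribution obtained by
    scrambling the class labels (permutation validation)."""
    rng = np.random.default_rng(seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=1 / 3, stratify=y, random_state=seed)
    model = make_pipeline(StandardScaler(), PCA(n_components=2),
                          KNeighborsClassifier(n_neighbors=3))
    accuracy = model.fit(X_tr, y_tr).score(X_te, y_te)
    null = [model.fit(X_tr, rng.permutation(y_tr)).score(X_te, y_te)
            for _ in range(n_perm)]
    p_value = (1 + sum(a >= accuracy for a in null)) / (1 + n_perm)
    return accuracy, p_value
```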
Ingham, Roger J
2007-07-01
This letter is a response to a recent report by J. S. Yaruss, C. Coleman, and D. Hammer (2006) that described a treatment program for preschool children who stutter. Problems with the Yaruss et al. study fall into four domains: (a) failure to provide clinicians with replicable procedures, (b) failure to collect valid and reliable speech performance data, (c) failure to control for predictable improvement in children who have been stuttering for less than 15 months, and (d) the advocacy of procedures for which there is no credible research evidence. The claims made for the efficacy of this treatment are problematic and essentially violate the principles of evidence-based practice as recommended by the American Speech-Language-Hearing Association (ASHA).
Individual Differences in Base Rate Neglect: A Fuzzy Processing Preference Index
Wolfe, Christopher R.; Fisher, Christopher R.
2013-01-01
Little is known about individual differences in integrating numeric base-rates and qualitative text in making probability judgments. Fuzzy-Trace Theory predicts a preference for fuzzy processing. We conducted six studies to develop the FPPI, a reliable and valid instrument assessing individual differences in this fuzzy processing preference. It consists of 19 probability estimation items plus 4 "M-Scale" items that distinguish simple pattern matching from “base rate respect.” Cronbach's Alpha was consistently above 0.90. Validity is suggested by significant correlations between FPPI scores and three other measures: "Rule Based" Process Dissociation Procedure scores; the number of conjunction fallacies in joint probability estimation; and logic index scores on syllogistic reasoning. Replicating norms collected in a university study with a web-based study produced negligible differences in FPPI scores, indicating robustness. The predicted relationships between individual differences in base rate respect and both conjunction fallacies and syllogistic reasoning were partially replicated in two web-based studies. PMID:23935255
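The reliability figure quoted above (Cronbach's alpha consistently above 0.90) is straightforward to compute; for reference, a minimal NumPy implementation of the statistic (not the authors' code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```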
Development of a tool to support holistic generic assessment of clinical procedure skills.
McKinley, Robert K; Strand, Janice; Gray, Tracey; Schuwirth, Lambert; Alun-Jones, Tom; Miller, Helen
2008-06-01
The challenges of maintaining comprehensive banks of valid checklists make context-specific checklists for assessment of clinical procedural skills problematic. This paper reports the development of a tool which supports generic holistic assessment of clinical procedural skills. We carried out a literature review, focus groups and non-participant observation of assessments with interview of participants, participant evaluation of a pilot objective structured clinical examination (OSCE), a national modified Delphi study with prior definitions of consensus and an OSCE. Participants were volunteers from a large acute teaching trust, a teaching primary care trust and a national sample of National Health Service staff. Results: In total, 86 students, trainees and staff took part in the focus groups, observation of assessments and pilot OSCE, 252 in the Delphi study and 46 candidates and 50 assessors in the final OSCE. We developed a prototype tool with 5 broad categories amongst which were distributed 38 component competencies. There was > 70% agreement (our prior definition of consensus) at the first round of the Delphi study for inclusion of all categories and themes and no consensus for inclusion of additional categories or themes. Generalisability was 0.76. An OSCE based on the instrument has a predicted reliability of 0.79 with 12 stations and 1 assessor per station or 10 stations and 2 assessors per station. This clinical procedural skills assessment tool enables reliable assessment and has content and face validity for the assessment of clinical procedural skills. We have designated it the Leicester Clinical Procedure Assessment Tool (LCAT).
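The reliability projections quoted above (e.g. 0.79 with 12 stations and one assessor) come from a generalisability analysis; the station-scaling step behaves like the Spearman-Brown prophecy formula. A sketch with a made-up single-station coefficient (the paper's decision-study design may differ):

```python
def spearman_brown(r_single, n):
    """Predicted reliability when a test is lengthened to n parallel stations."""
    return n * r_single / (1 + (n - 1) * r_single)

# hypothetical per-station reliability, chosen only to illustrate the scaling
r_station = 0.24
print(round(spearman_brown(r_station, 12), 2))  # -> 0.79 at 12 stations
```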
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John E.; English, Christine M.; Gesick, Joshua C.
This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.
The intelligent OR: design and validation of a context-aware surgical working environment.
Franke, Stefan; Rockstroh, Max; Hofer, Mathias; Neumuth, Thomas
2018-05-24
Interoperability of medical devices based on standards is becoming established in the operating room (OR). Devices share their data and control functionalities. Yet, OR technology rarely implements cooperative, intelligent behavior, especially in terms of active cooperation with the OR team. Technical context-awareness will be an essential feature of the next generation of medical devices to address the increasing demands on clinicians in information seeking, decision making, and human-machine interaction in complex surgical working environments. The paper describes the technical validation of an intelligent surgical working environment for endoscopic ear-nose-throat surgery. We briefly summarize the design of our framework for context-aware system behavior in the integrated OR and present example realizations of novel assistance functionalities. In a study on patient phantoms, twenty-four procedures were implemented in the proposed intelligent surgical working environment based on recordings of real interventions. Subsequently, the whole processing pipeline for context-awareness, from workflow recognition to the final system behavior, is analyzed. Rule-based behavior that considers multiple perspectives on the procedure can partially compensate for recognition errors. Considerable robustness could be achieved with reasonable recognition quality. Overall, reliable reactive as well as proactive behavior of the surgical working environment can be implemented in the proposed environment. The obtained validation results indicate the suitability of the overall approach. The setup is a reliable starting point for a subsequent evaluation of the proposed context-aware assistance. The major challenge for future work will be to implement the complex approach in a cross-vendor setting.
Austin, S Bryn; Gordon, Allegra R; Kennedy, Grace A; Sonneville, Kendrin R; Blossom, Jeffrey; Blood, Emily A
2013-12-06
Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on U.V. indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. Geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry.
NASA Technical Reports Server (NTRS)
Wilson, T. G.; Lee, F. C. Y.; Burns, W. W., III; Owen, H. A., Jr.
1974-01-01
A procedure is developed for classifying dc-to-square-wave two-transistor parallel inverters used in power conditioning applications. The inverters are reduced to equivalent RLC networks and are then grouped with other inverters with the same basic equivalent circuit. Distinction between inverter classes is based on the topology characteristics of the equivalent circuits. Information about one class can then be extended to another class using the basic oscillation theory and the concept of duality. Oscillograms from test circuits confirm the validity of the procedure adopted.
Osorio, Victoria; Schriks, Merijn; Vughs, Dennis; de Voogt, Pim; Kolkman, Annemieke
2018-08-15
A novel sample preparation procedure relying on Solid Phase Extraction (SPE) combining different sorbent materials on a sequential-based cartridge was optimized and validated for the enrichment of 117 widely diverse contaminants of emerging concern (CECs) from surface waters (SW) and further combined chemical and biological analysis on subsequent extracts. A liquid chromatography coupled to high resolution tandem mass spectrometry LC-(HR)MS/MS protocol was optimized and validated for the quantitative analysis of organic CECs in SW extracts. A battery of in vitro CALUX bioassays for the assessment of endocrine, metabolic and genotoxic interference and oxidative stress were performed on the same SW extracts. Satisfactory recoveries ([70-130]%) and precision (< 30%) were obtained for the majority of compounds tested. Internal standard calibration curves used for quantification of CECs achieved the linearity criteria (r² > 0.99) over three orders of magnitude. Instrumental limits of detection and method limits of quantification were [1-96] pg injected and [0.1-58] ng/L, respectively, while corresponding intra-day and inter-day precision did not exceed 11% and 20%. The developed procedure was successfully applied for the combined chemical and toxicological assessment of SW intended for drinking water supply. Levels of compounds varied from < 10 ng/L to < 500 ng/L. Endocrine (i.e. estrogenic and anti-androgenic) and metabolic interference responses were observed. Given the demonstrated reliability of the validated sample preparation method, the authors propose its integration in an effect-directed analysis procedure for a proper evaluation of SW quality and hazard assessment of CECs. Copyright © 2018 Elsevier B.V. All rights reserved.
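A small Python sketch of the acceptance checks quoted in this abstract (recoveries within 70-130%, precision below 30% RSD, calibration r² > 0.99). Variable names and the unweighted linear calibration fit are assumptions of the sketch:

```python
import numpy as np

def acceptance_checks(spike_level, replicates, calib_conc, calib_resp):
    """Apply the validation criteria quoted in the abstract to one analyte.

    spike_level : spiked concentration
    replicates  : measured concentrations of replicate spiked samples
    calib_conc, calib_resp : calibration concentrations and responses
    """
    replicates = np.asarray(replicates, dtype=float)
    recovery = 100.0 * replicates / spike_level               # % recovery
    rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()  # precision, % RSD
    r = np.corrcoef(calib_conc, calib_resp)[0, 1]             # linearity
    return {
        "recovery_ok": bool(np.all((recovery >= 70) & (recovery <= 130))),
        "precision_ok": bool(rsd < 30),
        "linearity_ok": bool(r ** 2 > 0.99),
    }
```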
Model Development for Risk Assessment of Driving on Freeway under Rainy Weather Conditions
Cai, Xiaonan; Wang, Chen; Chen, Shengdi; Lu, Jian
2016-01-01
Rainy weather conditions could result in significant negative impacts on driving on freeways. However, due to a lack of historical data and monitoring facilities, many regions are not able to establish reliable risk assessment models to identify such impacts. Given the situation, this paper provides an alternative solution where the procedure of risk assessment is developed based on drivers’ subjective questionnaire responses, and its performance is validated by using actual crash data. First, an ordered logit model was developed, based on questionnaire data collected from Freeway G15 in China, to estimate the relationship between drivers’ perceived risk and factors, including vehicle type, rain intensity, traffic volume, and location. Then, weighted driving risk for different conditions was obtained by the model, and further divided into four levels of early warning (specified by colors) using a rank order cluster analysis. After that, a risk matrix was established to determine which warning color should be disseminated to drivers, given a specific condition. Finally, to validate the proposed procedure, actual crash data from Freeway G15 were compared with the safety prediction based on the risk matrix. The results show that the risk matrix obtained in the study is able to predict driving risk consistent with actual safety implications, under rainy weather conditions. PMID:26894434
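An illustrative sketch of the first modelling step (an ordered logit relating perceived risk to rain intensity and traffic volume), assuming statsmodels 0.13 or later; the records below are toy values, not the Freeway G15 questionnaire data:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Toy questionnaire records: perceived risk on an ordinal 0-3 scale
df = pd.DataFrame({
    "risk":   [0, 1, 1, 2, 3, 2, 3, 1, 0, 2, 3, 1],
    "rain":   [0, 1, 0, 2, 3, 1, 2, 1, 0, 3, 3, 2],  # rain intensity level
    "volume": [1, 1, 2, 2, 3, 3, 1, 2, 1, 2, 3, 1],  # traffic volume level
})

model = OrderedModel(df["risk"], df[["rain", "volume"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # coefficients feed the weighted risk behind the warnings
```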
When is good, good enough? Methodological pragmatism for sustainable guideline development.
Browman, George P; Somerfield, Mark R; Lyman, Gary H; Brouwers, Melissa C
2015-03-06
Continuous escalation in methodological and procedural rigor for evidence-based processes in guideline development is associated with increasing costs and production delays that threaten sustainability. While health research methodologists are appropriately responsible for promoting increasing rigor in guideline development, guideline sponsors are responsible for funding such processes. This paper acknowledges that other stakeholders in addition to methodologists should be more involved in negotiating trade-offs between methodological procedures and efficiency in guideline production to produce guidelines that are 'good enough' to be trustworthy and affordable under specific circumstances. The argument for reasonable methodological compromise to meet practical circumstances is consistent with current implicit methodological practice. This paper proposes a conceptual tool as a framework to be used by different stakeholders in negotiating, and explicitly reporting, reasonable compromises for trustworthy as well as cost-worthy guidelines. The framework helps fill a transparency gap in how methodological choices in guideline development are made. The principle, 'when good is good enough' can serve as a basis for this approach. The conceptual tool 'Efficiency-Validity Methodological Continuum' acknowledges trade-offs between validity and efficiency in evidence-based guideline development and allows for negotiation, guided by methodologists, of reasonable methodological compromises among stakeholders. Collaboration among guideline stakeholders in the development process is necessary if evidence-based guideline development is to be sustainable.
Spatial regression test for ensuring temperature data quality in southern Spain
NASA Astrophysics Data System (ADS)
Estévez, J.; Gavilán, P.; García-Marín, A. P.
2018-01-01
Quality assurance of meteorological data is crucial for ensuring the reliability of applications and models that use such data as input variables, especially in the field of environmental sciences. Spatial validation of meteorological data is based on the application of quality control procedures using data from neighbouring stations to assess the validity of data from a candidate station (the station of interest). These kinds of tests, which are referred to in the literature as spatial consistency tests, take data from neighbouring stations in order to estimate the corresponding measurement at the candidate station. These estimations can be made by weighting values according to the distance between the stations or to the coefficient of correlation, among other methods. The test applied in this study relies on statistical decision-making and uses a weighting based on the standard error of the estimate. This paper summarizes the results of the application of this test to maximum, minimum and mean temperature data from the Agroclimatic Information Network of Andalusia (southern Spain). This quality control procedure includes a decision based on a factor f, the fraction of potential outliers for each station across the region. Using GIS techniques, the geographic distribution of the errors detected has been also analysed. Finally, the performance of the test was assessed by evaluating its effectiveness in detecting known errors.
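A minimal sketch of a spatial consistency test of this family: each neighbour supplies an estimate of the candidate value (e.g. from a station-pair regression), the estimates are combined with weights based on the standard error of estimate, and the observation is flagged when the residual exceeds a tolerance. The specific weighting and flagging rule below are generic assumptions, not the exact formulation applied to the Andalusian network:

```python
import numpy as np

def spatial_consistency_flag(candidate_obs, neighbor_est, se, f=3.0):
    """Flag a candidate-station observation as a potential outlier.

    neighbor_est : estimate of the candidate value from each neighbour
    se           : standard error of estimate of each neighbour's regression
    f            : tolerance factor; larger f flags fewer observations
    """
    neighbor_est = np.asarray(neighbor_est, dtype=float)
    se = np.asarray(se, dtype=float)
    w = 1.0 / se ** 2                     # trust precise neighbours more
    estimate = np.sum(w * neighbor_est) / np.sum(w)
    s_pooled = np.sqrt(np.mean(se ** 2))  # one common pooled error scale
    return estimate, abs(candidate_obs - estimate) > f * s_pooled
```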
21 CFR 1270.31 - Written procedures.
Code of Federal Regulations, 2011 CFR
2011-04-01
... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...
21 CFR 1270.31 - Written procedures.
Code of Federal Regulations, 2012 CFR
2012-04-01
... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...
21 CFR 1270.31 - Written procedures.
Code of Federal Regulations, 2014 CFR
2014-04-01
... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...
21 CFR 1270.31 - Written procedures.
Code of Federal Regulations, 2013 CFR
2013-04-01
... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...
An Australasian model license reassessment procedure for identifying potentially unsafe drivers.
Fildes, Brian N; Charlton, Judith; Pronk, Nicola; Langford, Jim; Oxley, Jennie; Koppel, Sjaanie
2008-08-01
Most licensing jurisdictions in Australia currently employ age-based assessment programs as a means to manage older driver safety, yet available evidence suggests that these programs have no safety benefits. This paper describes a community referral-based model license reassessment procedure for identifying and assessing potentially unsafe drivers. While the model was primarily developed for assessing older driver fitness to drive, it could be applicable to other forms of driver impairment associated with increased crash risk. It includes a three-tier process of assessment, involving the use of validated and relevant assessment instruments. A case is made that this process is more systematic, transparent and effective for managing older driver safety and thus more likely to be widely acceptable to the target community and licensing authorities than age-based practices.
NASA Astrophysics Data System (ADS)
Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine
2018-03-01
This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population algorithm method. The main objective of this paper is to determine the best design in terms of magnets volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed values. Nevertheless, in order to solve more realistic problems, and then, take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
Fobil, Julius N.; Kumoji, Robert; Armah, Henry B.; Aryee, Eunice; Bilson, Francis; Carboo, Derick; Rodrigues, Frederick K.; Meyer, Christian G.; May, Juergen; Kraemer, Alexander
2011-01-01
The study of cause of death certification remains a largely neglected field in many developing countries, including Ghana. Yet, mortality information is crucial for establishing mortality patterns over time and for estimating mortality attributed to specific causes. In Ghana, autopsies remain the appropriate option for determining the cause of deaths occurring in homes and those occurring within 48 hours after admission into health facilities. Although these organ-based autopsies may generate convincing results and are considered the gold standard tools for ascertainment of causes of death, procedural and practical constraints could limit the extent to which autopsy results can be accepted and/or trusted. The objective of our study was to identify and characterise the procedural and practical constraints as well as to assess their potential effects on autopsy outcomes in Ghana. We interviewed 10 Ghanaian pathologists and collected and evaluated procedural manuals and operational procedures for the conduct of autopsies. A characterisation of the operational constraints and the Delphi analysis of their potential influence on the quality of mortality data led to a quantification of the validity threats as moderate (average expert panel score = 1) for autopsy operations in Ghana generally. On the basis of the impressions of the expert panel, it was concluded that mortality data generated from autopsies in urban settings in Ghana were of sufficiently high quality to guarantee valid use in health analysis. PMID:28299049
Jenkins, Kathy J; Koch Kupiec, Jennifer; Owens, Pamela L; Romano, Patrick S; Geppert, Jeffrey J; Gauvreau, Kimberlee
2016-05-20
The National Quality Forum previously approved a quality indicator for mortality after congenital heart surgery developed by the Agency for Healthcare Research and Quality (AHRQ). Several parameters of the validated Risk Adjustment for Congenital Heart Surgery (RACHS-1) method were included, but others differed. As part of the National Quality Forum endorsement maintenance process, developers were asked to harmonize the 2 methodologies. Parameters that were identical between the 2 methods were retained. AHRQ's Healthcare Cost and Utilization Project State Inpatient Databases (SID) 2008 were used to select optimal parameters where differences existed, with a goal to maximize model performance and face validity. Inclusion criteria were not changed and included all discharges for patients <18 years with International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes for congenital heart surgery or nonspecific heart surgery combined with congenital heart disease diagnosis codes. The final model includes procedure risk group, age (0-28 days, 29-90 days, 91-364 days, 1-17 years), low birth weight (500-2499 g), other congenital anomalies (Clinical Classifications Software 217, except for 758.xx), multiple procedures, and transfer-in status. Among 17 945 eligible cases in the SID 2008, the c statistic for model performance was 0.82. In the SID 2013 validation data set, the c statistic was 0.82. Risk-adjusted mortality rates by center ranged from 0.9% to 4.1% (5th-95th percentile). Congenital heart surgery programs can now obtain national benchmarking reports by applying AHRQ Quality Indicator software to hospital administrative data, based on the harmonized RACHS-1 method, with high discrimination and face validity. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
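The c statistic reported above (0.82 in both the derivation and validation data) is the area under the ROC curve for discharge-level mortality predictions. A sketch of its computation with scikit-learn, on toy values:

```python
from sklearn.metrics import roc_auc_score

# Toy discharge-level data: observed in-hospital death and predicted risk
died      = [0, 0, 1, 0, 1, 0, 0, 1, 0, 1]
pred_risk = [0.01, 0.03, 0.40, 0.02, 0.08, 0.05, 0.12, 0.60, 0.04, 0.30]

print(roc_auc_score(died, pred_risk))  # the model's c statistic
```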
40 CFR 761.398 - Reporting and recordkeeping.
Code of Federal Regulations, 2014 CFR
2014-07-01
... into a standard operating procedure (SOP) for reference whenever the decontamination procedure is used... new solvents and validated decontamination procedures in the Federal Register. (b) Any person may...
40 CFR 761.398 - Reporting and recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... into a standard operating procedure (SOP) for reference whenever the decontamination procedure is used... new solvents and validated decontamination procedures in the Federal Register. (b) Any person may...
40 CFR 761.398 - Reporting and recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... into a standard operating procedure (SOP) for reference whenever the decontamination procedure is used... new solvents and validated decontamination procedures in the Federal Register. (b) Any person may...
40 CFR 761.398 - Reporting and recordkeeping.
Code of Federal Regulations, 2013 CFR
2013-07-01
... into a standard operating procedure (SOP) for reference whenever the decontamination procedure is used... new solvents and validated decontamination procedures in the Federal Register. (b) Any person may...
40 CFR 761.398 - Reporting and recordkeeping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... into a standard operating procedure (SOP) for reference whenever the decontamination procedure is used... new solvents and validated decontamination procedures in the Federal Register. (b) Any person may...
Procedures For Microbial-Ecology Laboratory
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
1993-01-01
The Microbial Ecology Laboratory Procedures Manual provides concise and well-defined instructions on routine technical procedures to be followed in the microbiological laboratory to ensure safety, analytical control, and validity of results.
Use of an Objective Structured Assessment of Technical Skill After a Sports Medicine Rotation.
Dwyer, Tim; Slade Shantz, Jesse; Kulasegaram, Kulamakan Mahan; Chahal, Jaskarndip; Wasserstein, David; Schachar, Rachel; Devitt, Brian; Theodoropoulos, John; Hodges, Brian; Ogilvie-Harris, Darrell
2016-12-01
The purpose of this study was to determine if the use of an Objective Structured Assessment of Technical skill (OSATS), using dry models, would be a valid method of assessing residents' ability to perform sports medicine procedures after training in a competency-based model. Over 18 months, 27 residents (19 junior [postgraduate year (PGY) 1-3] and 8 senior [PGY 4-5]) sat the OSATS after their rotation, in addition to 14 sports medicine staff and fellows. Each resident was provided a list of 10 procedures in which they were expected to show competence. At the end of the rotation, each resident undertook an OSATS composed of 6 stations sampled from the 10 procedures using dry models; faculty used the Arthroscopic Surgical Skill Evaluation Tool (ASSET), task-specific checklists, as well as an overall 5-point global rating scale (GRS) to score each resident. Each procedure was videotaped for blinded review. The overall reliability of the OSATS (0.9) and the inter-rater reliability (0.9) were both high. A significant difference by year in training was seen for the overall GRS, the total ASSET score, and the total checklist score, as well as for each technical procedure (P < .001). Further analysis revealed a significant difference in the total ASSET score between junior (mean 18.4, 95% confidence interval [CI] 16.8 to 19.9) and senior residents (24.2, 95% CI 22.7 to 25.6), senior residents and fellows (30.1, 95% CI 28.2 to 31.9), as well as between fellows and faculty (37, 95% CI 36.1 to 27.8) (P < .05). The results of this study show that an OSATS using dry models shows evidence of validity when used to assess performance of technical procedures after a sports medicine rotation. However, junior residents were not able to perform as well as senior residents, suggesting that overall surgical experience is as important as intensive teaching. As postgraduate medical training shifts to a competency-based model, methods of assessing performance of technical procedures become necessary. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency tables, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting the weights appropriately, and by searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may not have been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores differ depending on whether the uncertainty concerns events above or below the threshold. This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
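A sketch of the contingency-table mechanics behind this kind of threshold validation: skill scores are computed from hits, misses, false alarms and correct negatives, and an index formed as a weighted linear combination is maximized over candidate thresholds. The three scores and the weights below are placeholders; the paper chooses them according to the warning system's scope and priorities:

```python
def skill_scores(hits, misses, false_alarms, correct_negs):
    """Contingency-table scores used to validate rainfall thresholds."""
    pod = hits / (hits + misses)                         # prob. of detection
    pofd = false_alarms / (false_alarms + correct_negs)  # prob. of false detection
    far = false_alarms / (hits + false_alarms)           # false alarm ratio
    return pod, pofd, far

def threshold_index(table, weights=(0.6, 0.3, 0.1)):
    """Weighted linear combination of skill scores; higher is better."""
    pod, pofd, far = skill_scores(*table)
    w1, w2, w3 = weights
    return w1 * pod + w2 * (1.0 - pofd) + w3 * (1.0 - far)

# pick the candidate threshold whose contingency table maximizes the index
tables = {"T_low": (38, 4, 420, 830), "T_high": (30, 12, 150, 1100)}
best = max(tables, key=lambda name: threshold_index(tables[name]))
```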
Claassen, Cindy; Kurian, Ben; Trivedi, Madhukar H.; Grannemann, Bruce D.; Tuli, Ekta; Pipes, Ronny; Preston, Anne Marie; Flood, Ariell
2012-01-01
Purpose: Missing data in clinical efficacy and effectiveness trials continue to be a major threat to the validity of study findings. The purpose of this report is to describe methods developed to ensure completion of outcome assessments with public mental health sector subjects participating in a longitudinal, repeated measures study for the treatment of major depressive disorder. We developed longitudinal assessment procedures that included telephone-based clinician interviews in order to minimize missing data commonly encountered with face-to-face assessment procedures. Methods: A pre-planned, multi-step strategy was developed to ensure completeness of data collection. The procedure included obtaining multiple pieces of patient contact information at baseline, careful education of both staff and patients concerning the purpose of assessments, establishing good patient rapport, and finally being flexible and persistent with phone appointments to ensure the completion of telephone-based follow-up assessments. A well-developed administrative and organizational structure was also put in place prior to study implementation. Results: The assessment completion rate for the primary outcome for 310 of 504 subjects who enrolled and completed 52 weeks (at the time of manuscript) of telephone-based follow-up assessments was 96.8%. Conclusion: By utilizing telephone-based follow-up procedures and adapting our easy-to-use pre-defined multi-step approach, researchers can maximize patient data retention in longitudinal studies. PMID:18761427
Intelligent Medical Systems for Aerospace Emergency Medical Services
NASA Technical Reports Server (NTRS)
Epler, John; Zimmer, Gary
2004-01-01
The purpose of this project is to develop a portable, hands-free device for emergency medical decision support to be used in remote or confined settings by non-physician providers. Phase I of the project will entail the development of a voice-activated device that will utilize an intelligent algorithm to provide guidance in establishing an airway in an emergency situation. The interactive, hands-free software will process requests for assistance based on verbal prompts and algorithmic decision-making. The device will allow the CMO to attend to the patient while receiving verbal instruction. The software will also feature graphic representations where these are felt to be helpful in aiding procedures. We will also develop a training program to orient users to the algorithmic approach, the use of the hardware and specific procedural considerations. We will validate the efficacy of this mode of technology application by testing in the Johns Hopkins Department of Emergency Medicine. Phase I of the project will focus on the validation of the proposed algorithm, testing and validation of the decision-making tool and modifications of medical equipment. In Phase II, we will produce the first-generation software for hands-free, interactive medical decision making for use in acute care environments.
Triacylglycerol stereospecific analysis and linear discriminant analysis for milk speciation.
Blasi, Francesca; Lombardi, Germana; Damiani, Pietro; Simonetti, Maria Stella; Giua, Laura; Cossignani, Lina
2013-05-01
Product authenticity is an important topic in the dairy sector. Dairy products sold for public consumption must be accurately labelled in accordance with the contained milk species. Linear discriminant analysis (LDA), a common chemometric procedure, has been applied to fatty acid % composition to classify pure milk samples (cow, ewe, buffalo, donkey, goat). All original grouped cases were correctly classified, while 90% of cross-validated grouped cases were correctly classified. Another objective of this research was the characterisation of cow-ewe milk mixtures in order to reveal a common fraud in the dairy field, that is, the addition of cow milk to ewe milk. Stereospecific analysis of triacylglycerols (TAG), a method based on chemical-enzymatic procedures coupled with chromatographic techniques, was carried out to detect fraudulent milk additions, in particular 1, 3 and 5% cow milk added to ewe milk. When only TAG composition data were used for the elaboration, 75% of original grouped cases were correctly classified, while totally correctly classified samples were obtained when both total and intrapositional TAG data were used. The results of cross-validation were also better when TAG stereospecific analysis data were considered as LDA variables. In particular, 100% of cross-validated grouped cases were obtained when 5% cow milk mixtures were considered.
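A compact sketch of the chemometric step (LDA on fatty acid % composition, reporting both resubstitution and cross-validated classification rates, the two figures quoted above), using scikit-learn; the 5-fold scheme is an assumption, since the abstract does not state the cross-validation design:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_milk(X, y):
    """X: fatty acid % composition per sample; y: species labels
    (cow, ewe, buffalo, donkey, goat)."""
    lda = LinearDiscriminantAnalysis()
    resubstitution = lda.fit(X, y).score(X, y)  # original grouped cases
    cross_validated = cross_val_score(lda, X, y, cv=5).mean()
    return resubstitution, cross_validated
```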
Wellner, Ulrich F; Klinger, Carsten; Lehmann, Kai; Buhr, Heinz; Neugebauer, Edmund; Keck, Tobias
2017-04-05
Pancreatic resections are among the most complex procedures in visceral surgery. While mortality has decreased substantially over the past decades, morbidity remains high. The volume-outcome correlation in pancreatic surgery is among the strongest in the field of surgery. The German Society for General and Visceral Surgery (DGAV) established a national registry for quality control, risk assessment and outcomes research in pancreatic surgery in Germany (DGAV StuDoQ|Pancreas). Here, we present the aims and scope of the DGAV StuDoQ|Pancreas Registry. A systematic assessment of registry quality is performed based on the recommendations of the German network for outcomes research (DNVF). The registry quality was assessed by consensus criteria of the DNVF in regard to the domains Systematics and Appropriateness, Standardization, Validity of the sampling procedure, Validity of data collection, Validity of statistical analysis and reports, and General demands for registry quality. In summary, DGAV StuDoQ|Pancreas meets most of the criteria of a high-quality clinical registry. The DGAV StuDoQ|Pancreas provides a valuable platform for quality assessment, outcomes research as well as randomized registry trials in pancreatic surgery.
Massey, Emma K; Timmerman, Lotte; Ismail, Sohal Y; Duerinckx, Nathalie; Lopes, Alice; Maple, Hannah; Mega, Inês; Papachristou, Christina; Dobbels, Fabienne
2018-01-01
Thorough psychosocial screening of donor candidates is required in order to minimize potential negative consequences and to strive for optimal safety within living donation programmes. We aimed to develop an evidence-based tool to standardize the psychosocial screening process. Key concepts of psychosocial screening were used to structure our tool: motivation and decision-making, personal resources, psychopathology, social resources, ethical and legal factors and information and risk processing. We (i) discussed how each item per concept could be measured, (ii) reviewed and rated available validated tools, (iii) where necessary developed new items, (iv) assessed content validity and (v) pilot-tested the new items. The resulting ELPAT living organ donor Psychosocial Assessment Tool (EPAT) consists of a selection of validated questionnaires (28 items in total), a semi-structured interview (43 questions) and a Red Flag Checklist. We outline optimal procedures and conditions for implementing this tool. The EPAT and user manual are available from the authors. Use of this tool will standardize the psychosocial screening procedure ensuring that no psychosocial issues are overlooked and ensure that comparable selection criteria are used and facilitate generation of comparable psychosocial data on living donor candidates. © 2017 Steunstichting ESOT.
Predicting Performance in Higher Education Using Proximal Predictors.
Niessen, A Susan M; Meijer, Rob R; Tendeiro, Jorge N
2016-01-01
We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed positive significant correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to that of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument to predict academic performance.
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques makes it possible to determine the content and distribution of trace elements on the surface of the examined sample. To obtain reliable results, the developed procedure should be based not only on proper sample preparation and calibration; it is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, the establishment of the traceability of the measurement results and the estimation of the uncertainty. This review paper discusses aspects related to sampling, preparation and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result and the uncertainty of the result. This work promotes the introduction of metrology principles for chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
Bergo, Maria do Carmo Noronha Cominato
2006-01-01
Thermal washer-disinfectors represent a technology that has brought great advantages, such as the establishment of protocols and standard operating procedures and a reduction in occupational risks of a biological and environmental nature. The efficacy of the cleaning and disinfection obtained by automatic washer-disinfector machines running programs with different times and temperatures, as determined by the different official agencies, was validated according to recommendations from ISO Standard 15883-1/1999 and HTM2030 (NHS Estates, 1997) for determining the Minimum Lethality and DAL, both theoretically and through measurements with thermocouples. In order to determine the cleaning efficacy, the Soil Test, Biotrace Pro-tect and the Protein Test Kit were used. The procedure to verify the CFU count of viable microorganisms was performed before and after the thermal disinfection. This article shows that the results are in compliance with the ISO and HTM Standards. The validation steps confirmed the high efficacy level of the medical washer-disinfectors. This protocol enabled the evaluation of the procedure based on evidence supported by scientific research, aiming to support the Supply Center multi-professional personnel with information and the possibility of developing further research.
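ISO 15883-1 expresses thermal disinfection lethality through the A0 value: the equivalent exposure time at 80 °C, accumulated as 10^((T - 80)/z) per second with z = 10 °C. A minimal sketch of that calculation from a thermocouple log; this illustrates the standard's formula, not the authors' validation protocol:

```python
import numpy as np

def a0_value(temps_c, dt_s=1.0, z=10.0):
    """A0 lethality (ISO 15883-1): equivalent seconds at 80 degC,
    accumulated over a temperature log sampled every dt_s seconds."""
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(10.0 ** ((temps_c - 80.0) / z)) * dt_s)

# 60 s held at 90 degC earns tenfold credit per second: A0 = 600
print(a0_value([90.0] * 60))  # 600.0
```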
Field analysis for explosives: TNT and RDX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elcoate, W.; Mapes, J.
The EPA has listed as hazardous many of the compounds used in the production of ammunition and other explosive ordnance. The contamination of soil with TNT (2,4,6-trinitrotoluene), the major component of many munitions formulations, and to a lesser degree RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine), is a significant problem at many ammunition manufacturing facilities, depots, and ordnance disposal sites. Field test kits for the explosives TNT and RDX were developed based on the methods of T.F. Jenkins and M.E. Walsh and of T.F. Jenkins. EnSys Environmental Products, Inc., with technical support from T.F. Jenkins, took the original TNT procedure, modified it for easier field use, performed validation studies to ensure that it met or exceeded the method specifications for both the T.F. Jenkins and SW-846 methods, and developed an easy-to-use test format for the field testing of TNT. The RDX procedure has gone through the development cycle and is presently in the field validation phase. This paper describes the test protocol and performance characteristics of the TNT test procedure.
1975-07-01
AD-A016 282 ASSESSING THE RELIABILITY AND VALIDITY OF MULTI-ATTRIBUTE UTILITY PROCEDURES: AN...more complicated and use data from actual experiments. Example 1: Analysis of raters making Importance judgments about attributes. In MAU studies...generalizability of JUDGE as contrasted to UASC. To do this, we will reanalyze the data for each system separately. This is valid since the initial
42 CFR 456.655 - Validation of showings.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...
42 CFR 456.655 - Validation of showings.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...
42 CFR 456.655 - Validation of showings.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 4 2014-10-01 2014-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...
42 CFR 456.655 - Validation of showings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...
Laboratory Analytical Procedures | Bioenergy | NREL
analytical procedures (LAPs) to provide validated methods for biofuels and pyrolysis bio-oils research. Biomass Compositional Analysis: These lab procedures provide tested and accepted methods for performing
Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S
2015-08-01
We developed a novel procedure specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). Post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity evaluated correlation between virtual reality renorrhaphy task and a live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
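Construct validity here rests on cohort comparisons with the Kruskal-Wallis test; a minimal SciPy sketch with illustrative scores (not study data):

```python
from scipy import stats

# Simulator performance scores by experience group (illustrative values)
novice       = [17, 19, 18, 16, 20]
intermediate = [23, 25, 24, 22, 26]
expert       = [29, 31, 30, 28, 32]

h, p = stats.kruskal(novice, intermediate, expert)
print(f"H = {h:.2f}, p = {p:.4f}")  # groups should differ if the task is valid
```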
Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L
2013-12-30
To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered the validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations with ICD-9 786.07 (wheezing) revealing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Coriat, R; Pommaret, E; Chryssostalis, A; Viennot, S; Gaudric, M; Brezault, C; Lamarque, D; Roche, H; Verdier, D; Parlier, D; Prat, F; Chaussade, S
2009-02-01
To produce valid information, an evaluation of professional practices has to assess the quality of all practices before, during and after the procedure under study. Several auditing techniques have been proposed for colonoscopy. The purpose of this work is to describe a straightforward original validated method for the prospective evaluation of professional practices in the field of colonoscopy applicable in all endoscopy units without increasing the staff work load. Pertinent quality-control criteria (14 items) were identified by the endoscopists at the Cochin Hospital and were compatible with: findings in the available literature; guidelines proposed by the Superior Health Authority; and application in any endoscopy unit. Prospective routine data were collected and the methodology validated by evaluating 50 colonoscopies every quarter for one year. The relevance of the criteria was assessed using data collected during four separate periods. The standard checklist was complete for 57% of the colonoscopy procedures. The colonoscopy procedure was appropriate according to national guidelines in 94% of cases. These observations were particularly noteworthy: the quality of the colonic preparation was insufficient for 9% of the procedures; complete colonoscopy was achieved for 93% of patients; and 0.38 adenomas and 0.045 carcinomas were identified per colonoscopy. This simple and reproducible method can be used for valid quality-control audits in all endoscopy units. In France, unit-wide application of this method enables endoscopists to validate 100 of the 250 points required for continuous medical training. This is a quality-control tool that can be applied annually, using a random month to evaluate any changes in routine practices.
Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D
2018-06-01
Despite the importance of estimating population-level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults 15-21 years old diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992 and 2012. Chart abstraction determined the end of initial treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart-collected EOIT dates was associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
The PDB_REDO server for macromolecular structure model optimization.
Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis
2014-07-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
ERIC Educational Resources Information Center
Homem, Vera; Alves, Arminda; Santos, Lúcia
2014-01-01
A laboratory application with a strong component in analytical chemistry was designed for undergraduate students, in order to introduce a current problem in the environmental science field, the water contamination by antibiotics. Therefore, a simple and rapid method based on direct injection and high performance liquid chromatography-tandem mass…
Data Collection Procedures for School-Based Surveys among Adolescents: The Youth in Europe Study
ERIC Educational Resources Information Center
Kristjansson, Alfgeir Logi; Sigfusson, Jon; Sigfusdottir, Inga Dora; Allegrante, John P.
2013-01-01
Background: Collection of valid and reliable surveillance data as a basis for school health promotion and education policy and practice for children and adolescence is of great importance. However, numerous methodological and practical problems arise in the planning and collection of such survey data that need to be resolved in order to ensure the…
The Role of Logic in the Validation of Mathematical Proofs. Technical Report. No. 1999-1
ERIC Educational Resources Information Center
Selden, Annie; Selden, John
1999-01-01
Mathematics departments rarely require students to study very much logic before working with proofs. Normally, the most they will offer is contained in a small portion of a "bridge" course designed to help students move from more procedurally based lower-division courses to more proof-based upper-division courses (e.g., abstract algebra and real analysis). What accounts for this seeming…
Prefabricated Roof Beams for Hardened Shelters
1993-08-01
beam with a composite concrete slab. Based on the results of the concept evaluation, a test program was designed and conducted to validate the steel beam design at ultimate strength. The results of these tests showed that the design procedure accurately predicts the response of the steel-confined concrete composite beam.
ERIC Educational Resources Information Center
Chung, Siuman; Espin, Christine A.
2013-01-01
The reliability and validity of three curriculum-based measures as indicators of learning English as a foreign language were examined. Participants were 260 Dutch students in Grades 8 and 9 who were receiving English-language instruction. Predictor measures were maze-selection, Dutch-to-English word translation, and English-to-Dutch word…
Ferrero, Alejandro; Campos, Joaquin; Pons, Alicia
2006-04-10
What we believe to be a novel procedure to correct the nonuniformity that is inherent in all matrix detectors has been developed and experimentally validated. This correction method, unlike other nonuniformity-correction algorithms, consists of two steps that separate two of the usual problems that affect characterization of matrix detectors, i.e., nonlinearity and the relative variation of the pixels' responsivity across the array. The correction of the nonlinear behavior remains valid for any illumination wavelength employed, as long as the nonlinearity is not due to power dependence of the internal quantum efficiency. This method of correction of nonuniformity permits the immediate calculation of the correction factor for any given power level and for any illuminant that has a known spectral content once the nonuniform behavior has been characterized for a sufficient number of wavelengths. This procedure has a significant advantage compared with other traditional calibration-based methods, which require that a full characterization be carried out for each spectral distribution pattern of the incident optical radiation. The experimental application of this novel method has achieved a 20-fold increase in the uniformity of a CCD array for response levels close to saturation.
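A minimal sketch of the two-step idea under simplifying assumptions: each pixel's nonlinearity is first corrected with a polynomial fitted at calibration, and the frame is then flat-fielded by a per-pixel relative responsivity map measured under uniform illumination. All names and values below are hypothetical, not the authors' implementation:

```python
# Hedged sketch of a two-step nonuniformity correction: linearize, then flat-field.
import numpy as np

def linearize(raw, poly_coeffs):
    """Map raw counts to a linearized response via a fitted polynomial."""
    return np.polyval(poly_coeffs, raw)

def correct_nonuniformity(raw, poly_coeffs, responsivity_map):
    linear = linearize(raw.astype(float), poly_coeffs)
    return linear / responsivity_map  # divide out per-pixel relative responsivity

# Illustrative use: a 4x4 frame, mild quadratic nonlinearity, +/-5% responsivity spread.
rng = np.random.default_rng(1)
frame = rng.uniform(1000, 4000, (4, 4))
coeffs = [1e-5, 1.0, 0.0]                        # fitted at calibration (placeholder)
prnu = 1 + 0.05 * rng.standard_normal((4, 4))    # relative responsivity map
print(correct_nonuniformity(frame, coeffs, prnu))
```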
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
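A minimal sketch of a three-fold cross-validation of a depth-damage curve against empirical loss records, as used above to measure uncertainty. The functional form, parameters and data are placeholders, not the fitted FLF-IT model:

```python
# Hedged sketch: 3-fold cross-validation of a toy depth-damage function.
import numpy as np
from scipy.optimize import curve_fit

def loss_ratio(depth, a, b):
    # placeholder monotone depth-damage form, bounded in [0, 1)
    return 1 - np.exp(-a * depth**b)

rng = np.random.default_rng(2)
depth = rng.uniform(0.1, 3.0, 90)                        # water depth [m]
observed = loss_ratio(depth, 0.5, 1.2) + rng.normal(0, 0.05, 90)

folds = np.array_split(rng.permutation(90), 3)
maes = []
for k in range(3):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(3) if j != k])
    popt, _ = curve_fit(loss_ratio, depth[train], observed[train], p0=[0.5, 1.0])
    maes.append(np.mean(np.abs(observed[test] - loss_ratio(depth[test], *popt))))
print("per-fold MAE:", np.round(maes, 3))
```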
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
29 CFR 1607.9 - No assumption of validity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of validity. The enforcement agencies will take into account the fact that a thorough job analysis... professional standards enhance the probability that the selection procedure is valid for the job. ...
Woźniak, Mateusz Kacper; Wiergowski, Marek; Aszyk, Justyna; Kubica, Paweł; Namieśnik, Jacek; Biziuk, Marek
2018-01-30
Amphetamine, methamphetamine, phentermine, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA), and 3,4-methylenedioxy-N-ethylamphetamine (MDEA) are the most popular amphetamine-type stimulants. The use of these substances is a serious societal problem worldwide. In this study, a method based on gas chromatography-tandem mass spectrometry (GC-MS/MS) with simple and rapid liquid-liquid extraction (LLE) and derivatization was developed and validated for the simultaneous determination of the six aforementioned amphetamine derivatives in blood and urine. The detection of all compounds was based on multiple reaction monitoring (MRM) transitions. The most important advantage of the method is the minimal sample volume (as low as 200 μL) required for the extraction procedure. The validation parameters, i.e., the recovery (90.5-104%), inter-day accuracy (94.2-109.1%) and precision (0.5-5.8%), showed the repeatability and sensitivity of the method for both matrices and indicated that the proposed procedure fulfils internationally established acceptance criteria for bioanalytical methods. The procedure was successfully applied to the analysis of real blood and urine samples examined in 22 forensic toxicological cases. To the best of our knowledge, this is the first work presenting the use of GC-MS/MS for the determination of amphetamine-type stimulants in blood and urine. In view of the low limits of detection (0.09-0.81 ng/mL), limits of quantification (0.26-2.4 ng/mL), and high selectivity, the procedure can be applied for drug monitoring in both fatal and non-fatal intoxication cases in routine toxicology analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
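For context, limits of detection and quantification are often estimated from the calibration line as 3.3·s/slope and 10·s/slope, with s the residual standard deviation of the fit. The paper does not state its exact formula; the concentrations and responses below are placeholders:

```python
# Hedged sketch: calibration-based LOD/LOQ estimation (ICH-style formula).
import numpy as np

conc = np.array([0.5, 1, 2, 5, 10, 20])             # ng/mL spiked standards
resp = np.array([0.9, 2.1, 4.0, 10.2, 19.8, 40.5])  # peak-area ratios (synthetic)

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
s = residuals.std(ddof=2)                           # residual SD (2 fitted params)
print(f"LOD = {3.3 * s / slope:.2f} ng/mL, LOQ = {10 * s / slope:.2f} ng/mL")
```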
CT and MRI slice separation evaluation by LabView developed software.
Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe
2018-02-01
The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, assessing the accuracy of slice separation during multislice acquisition requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the midpoint of the full width at half maximum (FWHM) in real time, while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView one, which can be used on any CT and MRI diagnostic devices. Copyright © 2017. Published by Elsevier GmbH.
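A minimal sketch of the underlying computation: locate the midpoint of the FWHM of a slice profile by interpolating the half-maximum crossings; slice separation then follows as the distance between the midpoints of two successive images. The profile below is synthetic and the function name is hypothetical:

```python
# Hedged sketch: FWHM midpoint of a slice profile via half-maximum crossings.
import numpy as np

def fwhm_midpoint(x, y):
    """Assumes a single peak interior to the profile."""
    half = (y.max() + y.min()) / 2
    above = y >= half
    i0 = np.argmax(above)                      # first sample at/above half max
    i1 = len(y) - 1 - np.argmax(above[::-1])   # last sample at/above half max
    # linear interpolation of the two half-maximum crossings
    x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return (x_left + x_right) / 2

x = np.linspace(-10, 10, 401)                  # position along the phantom [mm]
profile = np.exp(-((x - 1.3) ** 2) / 2)        # synthetic slice profile
print(f"FWHM midpoint = {fwhm_midpoint(x, profile):.2f} mm")
# Slice separation would be the difference of midpoints of consecutive images.
```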
Training a Network of Electronic Neurons for Control of a Mobile Robot
NASA Astrophysics Data System (ADS)
Vromen, T. G. M.; Steur, E.; Nijmeijer, H.
An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.
Mazzitelli, S; Tosi, A; Balestra, C; Nastruzzi, C; Luca, G; Mancuso, F; Calafiore, R; Calvitti, M
2008-09-01
The optimization, through a Design of Experiments (DoE) approach, of a microencapsulation procedure for isolated neonatal porcine islets (NPI) is described. The applied method is based on the generation of monodisperse droplets by a vibrational nozzle. An alginate/polyornithine encapsulation procedure, developed and validated in our laboratory for almost a decade, was used to embody pancreatic islets. We analyzed different experimental parameters including frequency of vibration, amplitude of vibration, polymer pumping rate, and distance between the nozzle and the gelling bath. We produced calcium-alginate gel microbeads with excellent morphological characteristics as well as a very narrow size distribution. The automatically produced microcapsules did not alter morphology, viability and functional properties of the enveloped NPI. The optimization of this automatic procedure may provide a novel approach to obtain a large number of batches possibly suitable for large scale production of immunoisolated NPI for in vivo cell transplantation procedures in humans.
Evaluation of Operational Procedures for Using a Time-Based Airborne Inter-arrival Spacing Tool
NASA Technical Reports Server (NTRS)
Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Abbott, Terence S.; Eischeid, Todd M.
2002-01-01
An airborne tool has been developed based on the concept of an aircraft maintaining a time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) aircraft state data to compute a speed command for the ATAAS-equipped aircraft to obtain a required time interval behind another aircraft. The tool and candidate operational procedures were tested in a high-fidelity, full mission simulator with active airline subject pilots flying an arrival scenario using three different modes for speed control. The objectives of this study were to validate the results of a prior Monte Carlo analysis of the ATAAS algorithm and to evaluate the concept from the standpoint of pilot acceptability and workload. Results showed that the aircraft was able to consistently achieve the target spacing interval within one second (the equivalent of approximately 220 ft at a final approach speed of 130 kt) when the ATAAS speed guidance was autothrottle-coupled, and a slightly greater (4-5 seconds), but consistent interval with the pilot-controlled speed modes. The subject pilots generally rated the workload level with the ATAAS procedure as similar to that with standard procedures, and also rated most aspects of the procedure high in terms of acceptability. Although pilots indicated that the head-down time was higher with ATAAS, the acceptability of head-down time was rated high. Oculometer data indicated slight changes in instrument scan patterns, but no significant change in the amount of time spent looking out the window between the ATAAS procedure versus standard procedures.
Validation of virtual-reality-based simulations for endoscopic sinus surgery.
Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S
2015-12-01
Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. These were then used to assess face, content and construct validity. ANOVA with post hoc correlation was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results had a statistically significant (P < 0.05) difference between the consultant group and others, while there was no significant difference between medical students/interns and RMOs. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.
Phylogenomics of plant genomes: a methodology for genome-wide searches for orthologs in plants
Conte, Matthieu G; Gaillard, Sylvain; Droc, Gaetan; Perin, Christophe
2008-01-01
Background: Gene ortholog identification is now a major objective for mining the increasing amount of sequence data generated by complete or partial genome sequencing projects. Comparative and functional genomics urgently need a method for ortholog detection to reduce gene function inference and to aid in the identification of conserved or divergent genetic pathways between several species. As gene functions change during evolution, reconstructing the evolutionary history of genes should be a more accurate way to differentiate orthologs from paralogs. Phylogenomics takes into account phylogenetic information from high-throughput genome annotation and is the most straightforward way to infer orthologs. However, procedures for automatic detection of orthologs are still scarce and suffer from several limitations.
Results: We developed a procedure for ortholog prediction between Oryza sativa and Arabidopsis thaliana. Firstly, we established an efficient method to cluster A. thaliana and O. sativa full proteomes into gene families. Then, we developed an optimized phylogenomics pipeline for ortholog inference. We validated the full procedure using test sets of orthologs and paralogs to demonstrate that our method outperforms pairwise methods for ortholog predictions.
Conclusion: Our procedure achieved a high level of accuracy in predicting ortholog and paralog relationships. Phylogenomic predictions for all validated gene families in both species were easily achieved and we can conclude that our methodology outperforms similarly based methods. PMID:18426584
The evaluation of the abuse liability of drugs.
Johanson, C E
1990-01-01
In order to place appropriate restrictions upon the availability of certain therapeutic agents to limit their abuse, it is important to assess abuse liability, an important aspect of drug safety evaluation. However, the negative consequences of restriction must also be considered. Drugs most likely to be tested are psychoactive compounds with therapeutic indications similar to known drugs of abuse. Methods include assays of pharmacological profile, drug discrimination procedures, self-administration procedures, and measures of drug-induced toxicity including evaluations of tolerance and physical dependence. Furthermore, the evaluation of toxicity using behavioural end-points is an important component of the assessment, and it is generally believed that the most valid procedure in this evaluation is the measurement of drug self-administration. However, even this method rarely predicts the extent of abuse of a specific drug. Although methods are available which appear to measure relative abuse liability, these procedures are not validated for all drug classes. Thus, additional strategies, including abuse liability studies in humans, modelled after those used with animals, must be used in order to make a more informed prediction. Although there is pressure to place restrictions on new drugs at the time of marketing, in light of the difficulty of predicting relative abuse potential, a better strategy might be to market a drug without restrictions, but require postmarketing surveillance in order to obtain more accurate information on which to base a final decision.
Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Erwin, Patricia J; Cook, David A
2015-02-01
To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) were reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
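A minimal sketch of random-effects pooling of correlations via Fisher's z with a DerSimonian-Laird heterogeneity estimate, one standard way to obtain pooled values like those quoted above. The per-study r and n values are placeholders, not the review's data:

```python
# Hedged sketch: random-effects meta-analysis of correlation coefficients.
import numpy as np

r = np.array([0.45, 0.60, 0.35, 0.55, 0.50])   # per-study correlations (synthetic)
n = np.array([40, 25, 60, 30, 45])             # per-study sample sizes (synthetic)

z = np.arctanh(r)                  # Fisher z transform
v = 1 / (n - 3)                    # within-study variance of z
w = 1 / v
z_fixed = np.sum(w * z) / w.sum()
q = np.sum(w * (z - z_fixed) ** 2)
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (len(r) - 1)) / c)        # DerSimonian-Laird between-study variance
w_star = 1 / (v + tau2)
z_pooled = np.sum(w_star * z) / w_star.sum()
se = np.sqrt(1 / w_star.sum())
lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se
print(f"pooled r = {np.tanh(z_pooled):.2f} "
      f"(95% CI {np.tanh(lo):.2f} to {np.tanh(hi):.2f})")
```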
Varkey, Prathibha; Natt, Neena; Lesnick, Timothy; Downing, Steven; Yudkowsky, Rachel
2008-08-01
To determine the psychometric properties and validity of an OSCE to assess the competencies of Practice-Based Learning and Improvement (PBLI) and Systems-Based Practice (SBP) in graduate medical education. An eight-station OSCE was piloted at the end of a three-week Quality Improvement elective for nine preventive medicine and endocrinology fellows at Mayo Clinic. The stations assessed performance in quality measurement, root cause analysis, evidence-based medicine, insurance systems, team collaboration, prescription errors, Nolan's model, and negotiation. Fellows' performance in each of the stations was assessed by three faculty experts using checklists and a five-point global competency scale. A modified Angoff procedure was used to set standards. Evidence for the OSCE's validity, feasibility, and acceptability was gathered. Evidence for content and response process validity was judged as excellent by institutional content experts. Interrater reliability of scores ranged from 0.85 to 1 for most stations. Interstation correlation coefficients ranged from -0.62 to 0.99, reflecting case specificity. Implementation cost was approximately $255 per fellow. All faculty members agreed that the OSCE was realistic and capable of providing accurate assessments. The OSCE provides an opportunity to systematically sample the different subdomains of Quality Improvement. Furthermore, the OSCE provides an opportunity for the demonstration of skills rather than the testing of knowledge alone, thus making it a potentially powerful assessment tool for SBP and PBLI. The study OSCE was well suited to assess SBP and PBLI. The evidence gathered through this study lays the foundation for future validation work.
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data utilizing a segmentation model based on the level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol has been defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with volume measured by utilizing the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable on human hearts in clinical practice.
Burdel, Martin; Šandrejová, Jana; Balogh, Ioseph S; Vishnikin, Andriy; Andruch, Vasil
2013-03-01
Three modes of liquid-liquid based microextraction techniques--namely auxiliary solvent-assisted dispersive liquid-liquid microextraction, auxiliary solvent-assisted dispersive liquid-liquid microextraction with low-solvent consumption, and ultrasound-assisted emulsification microextraction--were compared. Picric acid was used as the model analyte. The determination is based on the reaction of picric acid with Astra Phloxine reagent to produce an ion associate easily extractable by various organic solvents, followed by spectrophotometric detection at 558 nm. Each of the compared procedures has both advantages and disadvantages. The main benefit of ultrasound-assisted emulsification microextraction is that no hazardous chlorinated extraction solvents and no dispersive solvent are necessary. Therefore, this procedure was selected for validation. Under optimized experimental conditions (pH 3, 7 × 10⁻⁵ mol/L of Astra Phloxine, and 100 μL of toluene), the calibration plot was linear in the range of 0.02-0.14 mg/L and the LOD was 7 μg/L of picric acid. The developed procedure was applied to the analysis of spiked water samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo
2006-08-01
Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In fact, in this method landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows assessment of the reliability of the resulting model, while the simple mean deviation of the density values in the factor class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km²) in the Italian Northern Apennines, considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
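The core of the Conditional Analysis method reduces to counting landslide density within each combination of factor classes. A minimal sketch on synthetic co-registered rasters, standing in for the GRASS GIS layers (two factors instead of five, for brevity):

```python
# Hedged sketch: landslide density per factor-class combination.
import numpy as np

rng = np.random.default_rng(3)
lithology = rng.integers(0, 3, (200, 200))   # factor map 1 (class codes)
slope_cls = rng.integers(0, 4, (200, 200))   # factor map 2 (class codes)
landslide = rng.random((200, 200)) < 0.05    # landslide inventory mask

combo = lithology * 4 + slope_cls            # unique code per class combination
density = {}
for c in np.unique(combo):
    cells = combo == c
    density[int(c)] = landslide[cells].mean()  # landslide density in this combo

# Susceptibility map: each cell inherits the density of its combination,
# which would then be binned into the five susceptibility classes.
susceptibility = np.vectorize(density.get)(combo)
print(f"density range: {susceptibility.min():.3f} to {susceptibility.max():.3f}")
```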
The purpose of this SOP is to define the procedures for the initial and periodic verification and validation of computer programs. The programs are used during the Arizona NHEXAS project and Border study at the Illinois Institute of Technology (IIT) site. Keywords: computers; s...
Comparative Validity of the Shedler and Westen Assessment Procedure-200
ERIC Educational Resources Information Center
Mullins-Sweatt, Stephanie N.; Widiger, Thomas A.
2008-01-01
A predominant dimensional model of general personality structure is the five-factor model (FFM). Quite a number of alternative instruments have been developed to assess the domains of the FFM. The current study compares the validity of 2 alternative versions of the Shedler and Westen Assessment Procedure (SWAP-200) FFM scales, 1 that was developed…
ERIC Educational Resources Information Center
Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.
2014-01-01
Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.
The National Human Exposure Assessment Sur...
Assessing Women's Responses to Sexual Threat: Validity of a Virtual Role-Play Procedure
ERIC Educational Resources Information Center
Jouriles, Ernest N.; Rowe, Lorelei Simpson; McDonald, Renee; Platt, Cora G.; Gomez, Gabriella S.
2011-01-01
This study evaluated the validity of a role-play procedure that uses virtual reality technology to assess women's responses to sexual threat. Forty-eight female undergraduate students were randomly assigned to either a standard, face-to-face role-play (RP) or a virtual role-play (VRP) of a sexually coercive situation. A multimethod assessment…
Bio-Oil Analysis Laboratory Procedures | Bioenergy | NREL
NREL develops standard procedures for bio-oil analysis. These procedures have been validated and allow for reliable characterization of bio-oil. They include the determination of different hydroxyl groups (-OH) in pyrolysis bio-oil: aliphatic-OH, phenolic-OH, and carboxylic-OH.
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the purpose of the study. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
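A minimal sketch of the Cronbach's alpha computation behind the reliability figures above, on simulated item responses (one five-item scale; the data are not from the study):

```python
# Hedged sketch: Cronbach's alpha for a single scale.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores on one scale."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(size=(636, 1))                     # shared trait per respondent
items = latent + rng.normal(scale=1.0, size=(636, 5))  # 5 items loading on it
print(f"alpha = {cronbach_alpha(items):.2f}")
```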
NASA Astrophysics Data System (ADS)
Feng, Yanchun; Lei, Deqing; Hu, Changqin
We created a rapid detection procedure for identifying herbal medicines illegally adulterated with synthetic drugs using near infrared spectroscopy. This procedure includes a reverse correlation coefficient method (RCCM) and comparison of characteristic peaks. Moreover, we made improvements to the RCCM based on new strategies for threshold settings. Any tested herbal medicine must meet two criteria to be identified with our procedure as adulterated. First, the correlation coefficient between the tested sample and the reference must be greater than the RCCM threshold. Next, the NIR spectrum of the tested sample must contain the same characteristic peaks as the reference. In this study, four pure synthetic anti-diabetic drugs (i.e., metformin, gliclazide, glibenclamide and glimepiride), 174 batches of laboratory samples and 127 batches of herbal anti-diabetic medicines were used to construct and validate the procedure. The accuracy of this procedure was greater than 80%. Our data suggest that this protocol is a rapid screening tool to identify synthetic drug adulterants in herbal medicines on the market.
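A minimal sketch of the two-criterion screening logic described above: flag a sample only if its NIR spectrum both correlates with an adulterant reference above a threshold and shows the reference's characteristic peaks. The threshold, peak positions and spectra below are illustrative assumptions, not the published RCCM parameters:

```python
# Hedged sketch: correlation threshold plus characteristic-peak check.
import numpy as np

def is_adulterated(spectrum, reference, peak_idx, threshold=0.95):
    r = np.corrcoef(spectrum, reference)[0, 1]
    if r <= threshold:
        return False
    # crude peak-presence check: local maxima at the reference peak positions
    return all(
        spectrum[i] > spectrum[i - 10] and spectrum[i] > spectrum[i + 10]
        for i in peak_idx
    )

x = np.linspace(0, 1, 500)                         # normalized wavenumber axis
reference = np.exp(-((x - 0.3) ** 2) / 5e-4)       # synthetic adulterant band
sample = 0.8 * reference + 0.05 * np.random.default_rng(5).random(500)
print(is_adulterated(sample, reference, peak_idx=[150]))
```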
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-02-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in-situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols and greenhouse gases, and is based on the Integrated Forecast System of the ECMWF. The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past three years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
Quality control and assurance for validation of DOS/I measurements
NASA Astrophysics Data System (ADS)
Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.
2010-02-01
Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement the following QA/QC procedures:
1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency-domain and spectral calibration procedures using tissue-simulating phantoms and reflectance standards, respectively.)
2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing).
3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology).
4. Standardize and coordinate trial data entry (from individual sites) into a centralized database.
5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants, ensuring "calibration".
This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.
Custers, J W; Hoijtink, H; van der Net, J; Helders, P J
2000-01-01
For many reasons it is preferable to use established health-related outcome instruments. The validity of an instrument, however, can be affected when it is used in a culture or language other than the one for which it was originally developed. In this paper, the outcome of functional status measurement using a preliminary version of the Dutch translation of the 'Pediatric Evaluation of Disability Inventory' (PEDI) was studied in a sample of 20 non-disabled Dutch children and their American peers, to see whether a cross-cultural validation procedure is needed before using the instrument in the Netherlands. The Rasch model was used to analyse the Dutch data. Score profiles were not found to be compatible with the score profiles of American children. In particular, ten items were scored differently, with strong indications that these differences were based on inter-cultural differences. Based on our study, it is argued that cross-cultural validation of the PEDI is necessary before using the instrument in the Netherlands.
Providing a Science Base for the Evaluation of Tobacco Products
Berman, Micah L.; Connolly, Greg; Cummings, K. Michael; Djordjevic, Mirjana V.; Hatsukami, Dorothy K.; Henningfield, Jack E.; Myers, Matthew; O'Connor, Richard J.; Parascandola, Mark; Rees, Vaughan; Rice, Jerry M.
2015-01-01
Objective: Evidence-based tobacco regulation requires a comprehensive scientific framework to guide the evaluation of new tobacco products and health-related claims made by product manufacturers.
Methods: The Tobacco Product Assessment Consortium (TobPRAC) employed an iterative process involving consortia investigators, consultants, a workshop of independent scientists and public health experts, and written reviews in order to develop a conceptual framework for evaluating tobacco products.
Results: The consortium developed a four-phased framework for the scientific evaluation of tobacco products. The four phases addressed by the framework are: (1) pre-market evaluation, (2) pre-claims evaluation, (3) post-market activities, and (4) monitoring and re-evaluation. For each phase, the framework proposes the use of validated testing procedures that will evaluate potential harms at both the individual and population level.
Conclusions: While the validation of methods for evaluating tobacco products is an ongoing and necessary process, the proposed framework need not wait for fully validated methods to be used in guiding tobacco product regulation today. PMID:26665160
Scaling field data to calibrate and validate moderate spatial resolution remote sensing models
Baccini, A.; Friedl, M.A.; Woodcock, C.E.; Zhu, Z.
2007-01-01
Validation and calibration are essential components of nearly all remote sensing-based studies. In both cases, ground measurements are collected and then related to the remote sensing observations or model results. In many situations, and particularly in studies that use moderate resolution remote sensing, a mismatch exists between the sensor's field of view and the scale at which in situ measurements are collected. The use of in situ measurements for model calibration and validation, therefore, requires a robust and defensible method to spatially aggregate ground measurements to the scale at which the remotely sensed data are acquired. This paper examines this challenge and specifically considers two different approaches for aggregating field measurements to match the spatial resolution of moderate spatial resolution remote sensing data: (a) landscape stratification; and (b) averaging of fine spatial resolution maps. The results show that an empirically estimated stratification based on a regression tree method provides a statistically defensible and operational basis for performing this type of procedure.
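A minimal sketch of the second aggregation approach, averaging a fine-resolution map up to the coarse sensor grid with non-overlapping block means (e.g., 30 m cells aggregated to a roughly 480 m grid with block=16); the array contents are synthetic:

```python
# Hedged sketch: block-mean aggregation of a fine map to a coarse grid.
import numpy as np

def aggregate(fine_map, block):
    """Average non-overlapping block x block windows of a 2-D array."""
    h, w = fine_map.shape
    h, w = h - h % block, w - w % block            # trim partial edge blocks
    view = fine_map[:h, :w].reshape(h // block, block, w // block, block)
    return view.mean(axis=(1, 3))

fine = np.random.default_rng(6).random((512, 512))  # e.g., a biomass map at 30 m
coarse = aggregate(fine, block=16)                  # ~480 m model/sensor grid
print(fine.shape, "->", coarse.shape)
```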
Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi
2016-10-10
This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the position estimated with the SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements of up to 1000 mm.
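For context, the kinematics of a three-wheel holonomic base can be expressed compactly as a linear map between the body twist and the wheel speeds, which is also the basis of the odometry mentioned above. A minimal sketch with illustrative geometry, not the robot's actual parameters:

```python
# Hedged sketch: three omni wheels at 120-degree spacing, distance R from center.
import numpy as np

R = 0.2                                     # wheel-to-center distance [m] (assumed)
angles = np.deg2rad([0, 120, 240])          # wheel mounting angles

# Inverse kinematics: body twist (vx, vy, omega) -> wheel linear speeds.
J = np.column_stack([-np.sin(angles), np.cos(angles), np.full(3, R)])

def wheel_speeds(vx, vy, omega):
    return J @ np.array([vx, vy, omega])

# Forward kinematics (odometry direction): wheel speeds -> body twist.
J_inv = np.linalg.inv(J)

v = wheel_speeds(0.3, 0.0, 0.0)             # pure forward motion at 0.3 m/s
print("wheel speeds:", np.round(v, 3))
print("recovered twist:", np.round(J_inv @ v, 3))
```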
Faraco, Marianna; Fico, Daniela; Pennetta, Antonio; De Benedetto, Giuseppe E
2016-10-01
This work presents an analytical procedure based on gas chromatography-mass spectrometry which allows the determination of aldoses (glucose, mannose, galactose, arabinose, xylose, fucose, rhamnose) and ketoses (fructose) in plant material. One peak for each target carbohydrate was obtained by using an efficient derivatization employing methylboronic acid and acetic anhydride sequentially, whereas baseline separation of the analytes was accomplished using an ionic liquid capillary column. First, the proposed method was optimized and validated. Subsequently, it was applied to identify the carbohydrates present in plant material. Finally, the procedure was successfully applied to samples from a XVII century painting, thus highlighting the occurrence of starch glue and fruit tree gum as polysaccharide materials. Copyright © 2016 Elsevier B.V. All rights reserved.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
NASA Astrophysics Data System (ADS)
Silvestro, Paolo Cosmo; Casa, Raffaele; Pignatti, Stefano; Castaldi, Fabio; Yang, Hao; Guijun, Yang
2016-08-01
The aim of this work was to develop a tool to evaluate the effect of water stress on yield losses at the farmland and regional scale, by assimilating remotely sensed biophysical variables into crop growth models. Biophysical variables were retrieved from HJ1A, HJ1B and Landsat 8 images, using an algorithm based on the training of artificial neural networks on PROSAIL. For the assimilation, two crop models of differing degrees of complexity were used: Aquacrop and SAFY. For Aquacrop, an optimization procedure was developed to reduce the difference between the remotely sensed and simulated canopy cover (CC). For the modified version of SAFY, the assimilation procedure was based on the Ensemble Kalman Filter. These procedures were tested in a spatialized application, using data collected in the rural area of Yangling (Shaanxi Province) between 2013 and 2015. Results were validated using yield data from both ground measurements and statistical surveys.
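For readers unfamiliar with the assimilation step, the following is a minimal sketch of a single Ensemble Kalman Filter analysis update for a scalar state (e.g. a simulated biophysical variable corrected by a remotely sensed retrieval). It is a generic textbook EnKF with perturbed observations, not the SAFY-specific implementation; all numbers are invented.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H=lambda x: x):
    """One Ensemble Kalman Filter analysis step for a scalar state.

    ensemble: (N,) array of forecast states (e.g. simulated LAI);
    obs: the remotely sensed value; obs_err_std: its error std dev.
    """
    N = ensemble.size
    Hx = np.array([H(x) for x in ensemble])
    # Sample covariances between state and predicted observation.
    P_xy = np.cov(ensemble, Hx, ddof=1)[0, 1]
    P_yy = np.var(Hx, ddof=1) + obs_err_std**2
    K = P_xy / P_yy                           # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed = obs + np.random.normal(0, obs_err_std, N)
    return ensemble + K * (perturbed - Hx)

forecast = np.random.normal(3.0, 0.4, 50)     # toy ensemble of simulated states
analysis = enkf_update(forecast, obs=3.6, obs_err_std=0.2)
print(analysis.mean(), analysis.std())
```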
Objective assessment of laparoscopic skills using a virtual reality simulator.
Eriksen, J R; Grantcharov, T
2005-09-01
Virtual reality simulation has great potential as a training and assessment tool for laparoscopic skills. The study was carried out to investigate whether the LapSim system (Surgical Science Ltd., Gothenburg, Sweden) was able to differentiate between subjects with different levels of laparoscopic experience and thus to demonstrate its construct validity. Twenty-four subjects were divided into two groups: experienced (performed > 100 laparoscopic procedures, n = 10) and beginners (performed < 10 laparoscopic procedures, n = 14). Assessment of laparoscopic skills was based on parameters measured by the computer system. Experienced surgeons performed consistently better than the beginners. Significant differences in the time and economy-of-motion parameters existed between the two groups in seven of seven tasks. Regarding error parameters, differences existed in most but not all tasks. LapSim was able to differentiate between subjects with different laparoscopic experience. This indicates that the system measures skills relevant for laparoscopic surgery and can be used in training programs as a valid assessment tool.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
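A minimal sketch of the first-order moment-matching idea: input variances are propagated through sensitivity derivatives, and the approximation is checked against Monte Carlo sampling, mirroring the comparison described above. The scalar test function stands in for the CFD output, and the finite-difference derivatives stand in for the code-supplied sensitivities.

```python
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    """First-order (Taylor) approximation of the mean and variance of f(x)
    for independent normal inputs with means mu and std devs sigma.
    Sensitivity derivatives are approximated here by central differences;
    a CFD code would supply them analytically or by adjoint methods.
    """
    mu = np.asarray(mu, float); sigma = np.asarray(sigma, float)
    grads = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu); e[i] = h
        grads[i] = (f(mu + e) - f(mu - e)) / (2 * h)
    return f(mu), np.sum((grads * sigma) ** 2)

# Stand-in for a CFD output: a smooth function of two uncertain inputs.
f = lambda x: x[0] ** 2 + np.sin(x[1])
mu, sigma = [1.0, 0.5], [0.05, 0.1]
m, v = first_order_moments(f, mu, sigma)

# Monte Carlo check, mirroring the paper's validation of the approximation.
samples = np.random.normal(mu, sigma, size=(100000, 2))
mc = np.array([f(s) for s in samples])
print(m, v, "| MC:", mc.mean(), mc.var())
```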
Pilot In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Bussink, Frank J. L.; Murdoch, Jennifer L.; Chamberlain, James P.; Chartrand, Ryan; Jones, Kenneth M.
2008-01-01
A Human-In-The-Loop experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) to investigate the viability of the In-Trail Procedure (ITP) concept from a flight crew perspective, by placing participating airline pilots in a simulated oceanic flight environment. The test subject pilots used new onboard avionics equipment that provided improved information about nearby traffic and enabled them, when specific criteria were met, to request an ITP flight level change referencing one or two nearby aircraft that might otherwise block the flight level change. The subject pilots' subjective assessments of ITP validity and acceptability were measured via questionnaires and discussions, and their objective performance in appropriately selecting, requesting, and performing ITP flight level changes was evaluated for each simulated flight scenario. Objective performance and subjective workload assessment data from the experiment's test conditions were analyzed for statistical and operational significance and are reported in the paper. Based on these results, suggestions are made to further improve the ITP.
Lei, Pingguang; Lei, Guanghe; Tian, Jianjun; Zhou, Zengfen; Zhao, Miao; Wan, Chonghua
2014-10-01
This paper aims to develop the irritable bowel syndrome (IBS) scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-IBS) by the modular approach and to validate it by both classical test theory and generalizability theory. The QLICD-IBS was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, and quantitative statistical procedures. One hundred twelve inpatients with IBS provided QOL data measured three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness, employing correlation analysis, factor analyses, multi-trait scaling analysis, t tests, and the G studies and D studies of generalizability theory. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. Test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall score and all domains were higher than 0.80; the internal consistency α for all domains at the two measurements was higher than 0.70, except for the social domain (0.55 and 0.67, respectively). The overall score and scores for all domains/facets showed statistically significant changes after treatment, with moderate or greater effect sizes (standardized response mean, SRM) ranging from 0.72 to 1.02 at the domain level. G coefficients and indices of dependability (Φ coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-IBS has good validity, reliability, responsiveness, and some distinctive features, and can be used as a quality of life instrument for patients with IBS.
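As a worked example of one of the reliability statistics reported above, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix using the standard formula; the panel data are invented.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering a 4-item domain.
x = np.array([[3, 4, 3, 4],
              [2, 2, 3, 2],
              [4, 4, 4, 5],
              [1, 2, 1, 2],
              [3, 3, 4, 3],
              [5, 4, 5, 4]])
print(round(cronbach_alpha(x), 2))
```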
NASA Astrophysics Data System (ADS)
Nilasari, Yoni; Dasining
2018-04-01
In this era of globalization, every workforce faces a competitive climate that has a major impact on the development of the business and industrial sectors. It is therefore necessary to research the development of a curriculum based on the INQF (Indonesian National Qualifications Framework) and the business/industry sector, in order to improve competence in sewing techniques for vocational high school students in fashion clothing programs. Developing a curriculum based on the INQF and business/industry is an activity that produces a curriculum suited to the needs of the business and industrial sectors. The research questions are: (1) what is a curriculum based on the INQF and the business/industry sector?; (2) what are the process and procedure for developing a fashion program curriculum based on the INQF and the business/industry sector?; and (3) what is the resulting fashion expertise curriculum based on the INQF and the business/industry sector? The aims of the research were, correspondingly, to: (1) explain what is meant by a curriculum based on the INQF and the business/industry sector; (2) describe the process and procedure of such curriculum development; and (3) present the resulting curriculum. The research method chosen for developing the curriculum is the 4-D model of Thiagarajan, which includes: (1) define; (2) design; (3) develop; and (4) disseminate. The fourth stage (disseminate) was not carried out in this study. The results show that: (1) a curriculum based on the INQF and the business/industry sector is a curriculum created by applying the principles and procedures of the Indonesian National Qualifications Framework (INQF), intended to improve the quality of level-2 vocational high school graduates, and by establishing cooperation with business/industry through guest teachers (counselors) in the learning process; (2) the development process comprised several stages: a feasibility and requirements study, preparation of an initial concept of INQF- and industry-based curriculum planning in the field of fashion, and development of a plan to implement the curriculum; this produced a fashion proficiency curriculum in the form of sewing-technology learning competencies in which instruction is delivered by guest teachers from the business/industry sector; and (3) the learning materials obtained an average validity score of 3.5 (very valid) and an average practicality score of 3.3 (practical).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
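The tier-to-tier hand-off of calibrated parameters can be sketched in one dimension: the posterior from one unit problem becomes the prior for the next. The toy below uses a normal-normal conjugate update for transparency; the actual study would use sampled posteriors over MFIX model parameters, and all numbers here are invented.

```python
import numpy as np

def conjugate_update(prior_mean, prior_var, data, noise_var):
    """Normal-normal conjugate posterior for a mean parameter."""
    n = data.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

mean, var = 0.0, 10.0                      # vague prior at the first tier
for tier, obs in enumerate([np.random.normal(1.2, 0.3, 20),    # unit problem 1
                            np.random.normal(1.0, 0.3, 20)]):  # unit problem 2
    # The posterior from tier k is reused as the prior at tier k+1.
    mean, var = conjugate_update(mean, var, obs, noise_var=0.3**2)
    print(f"tier {tier}: posterior mean {mean:.2f}, sd {var**0.5:.3f}")
```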
Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X
2014-03-01
Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations of time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamics of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validity and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cyclical behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable to the study of oscillating diseases.
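A minimal sketch of the FFT-based procedure: compute the periodogram of the mean-removed incidence series and report the strongest periods. The synthetic monthly series below is invented; with real scarlet fever counts the seasonal cycle would be expected to dominate.

```python
import numpy as np

def dominant_periods(series, dt=1.0, n_peaks=3):
    """Return the n_peaks strongest periods in a time series via the FFT
    periodogram (mean removed; real-input FFT)."""
    x = np.asarray(series, float) - np.mean(series)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=dt)
    order = np.argsort(power[1:])[::-1] + 1   # skip the zero-frequency bin
    return [(1.0 / freqs[i], power[i]) for i in order[:n_peaks]]

# Toy monthly incidence with an annual (12-month) cycle plus noise.
t = np.arange(84)                             # 7 years of monthly counts
incidence = 100 + 30 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 5, t.size)
for period, power in dominant_periods(incidence, dt=1.0):
    print(f"period ~ {period:.1f} months (power {power:.0f})")
```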
Cannon, W Dilworth; Garrett, William E; Hunter, Robert E; Sweeney, Howard J; Eckhoff, Donald G; Nicandri, Gregg T; Hutchinson, Mark R; Johnson, Donald D; Bisson, Leslie J; Bedi, Asheesh; Hill, James A; Koh, Jason L; Reinig, Karl D
2014-11-05
There is a paucity of articles in the surgical literature demonstrating transfer validity (transfer of training). The purpose of this study was to assess whether skills learned on the ArthroSim virtual-reality arthroscopic knee simulator transferred to greater skill levels in the operating room. Postgraduate year-3 orthopaedic residents were randomized into simulator-trained and control groups at seven academic institutions. The experimental group trained on the simulator, performing a knee diagnostic arthroscopy procedure to a predetermined proficiency level based on the average proficiency of five community-based orthopaedic surgeons performing the same procedure on the simulator. The residents in the control group continued their institution-specific orthopaedic education and training. Both groups then performed a diagnostic knee arthroscopy procedure on a live patient. Video recordings of the arthroscopic surgery were analyzed by five pairs of expert arthroscopic surgeons blinded to the identity of the residents. A proprietary global rating scale and a procedural checklist, which included visualization and probing scales, were used for rating. Forty-eight (89%) of the fifty-four postgraduate year-3 residents from seven academic institutions completed the study. The simulator-trained group averaged eleven hours of training on the simulator to reach proficiency. The simulator-trained group performed significantly better when rated according to our procedural checklist (p = 0.031), including probing skills (p = 0.016) but not visualization skills (p = 0.34), compared with the control group. The procedural checklist weighted probing skills at double the weight of visualization skills. The global rating scale failed to reach significance (p = 0.061) because of one extreme outlier. The duration of the procedure was not significantly different between groups; this seemed to be related to the fact that residents in the control group were less thorough, which shortened their time to completion of the arthroscopic procedure. We have demonstrated transfer validity (transfer of training): residents trained to proficiency on a high-fidelity, realistic virtual-reality arthroscopic knee simulator showed a greater skill level in the operating room than the control group. We believe that the results of our study will stimulate residency program directors to incorporate surgical simulation into the core curriculum of their residency programs. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
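To illustrate the flavor of the automated parameter-extraction step, the sketch below fits a toy drain-current-like curve with SciPy's differential evolution, an evolutionary optimizer standing in for the paper's genetic algorithm; the model form and data are invented, not the authors' HEMT model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a drain-current model: I(V) = a * tanh(b * V).
def model(v, a, b):
    return a * np.tanh(b * v)

v = np.linspace(0, 10, 50)
measured = model(v, 0.8, 0.6) + np.random.normal(0, 0.01, v.size)

# Evolutionary search for the fitting parameters, standing in for the
# paper's genetic-algorithm procedure.
def cost(params):
    return np.mean((measured - model(v, *params)) ** 2)

result = differential_evolution(cost, bounds=[(0.1, 2.0), (0.1, 2.0)])
print("fitted a, b:", result.x)
```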
ERIC Educational Resources Information Center
Blagov, Pavel S.; Bi, Wu; Shedler, Jonathan; Westen, Drew
2012-01-01
The Shedler-Westen Assessment Procedure (SWAP) is a personality assessment instrument designed for use by expert clinical assessors. Critics have raised questions about its psychometrics, most notably its validity across observers and situations, the impact of its fixed score distribution on research findings, and its test-retest reliability. We…
What Makes AS Marking Reliable? An Experiment with Some Stages from the Standardisation Process
ERIC Educational Resources Information Center
Greatorex, Jackie; Bell, John F.
2008-01-01
It is particularly important that GCSE and A-level marking is valid and reliable as it affects the life chances of many young people in England. Current developments in marking technology are coinciding with potential changes in procedures to ensure valid and reliable marking. In this research the effectiveness of procedures to facilitate the…
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the Border study. Keywords: Computers; Software; QA/QC.
The U.S.-Mexico Border Program is sponsored ...
ERIC Educational Resources Information Center
Longford, Nicholas T.
Operational procedures for the Graduate Record Examinations Validity Study Service are reviewed, with emphasis on the problem of frequent occurrence of negative coefficients in the fitted within-department regressions obtained by the empirical Bayes method of H. I. Braun and D. Jones (1985). Several alterations of the operational procedures are…
Yamaki, Regina Terumi; Nunes, Luana Sena; de Oliveira, Hygor Rodrigues; Araújo, André S; Bezerra, Marcos Almeida; Lemos, Valfredo Azevedo
2011-01-01
The synthesis and characterization of the reagent 2-(5-bromothiazolylazo)-4-chlorophenol and its application in the development of a preconcentration procedure for cobalt determination by flame atomic absorption spectrometry after cloud point extraction are presented. This procedure is based on the complexation of cobalt and entrapment of the metal chelates in micelles of a surfactant-rich phase of Triton X-114. The preconcentration procedure was optimized using response surface methodology through the application of a Box-Behnken matrix. Under optimum conditions, the procedure determined cobalt with an LOD of 2.8 microg/L and an LOQ of 9.3 microg/L. The enrichment factor obtained was 25. The precision was evaluated as the RSD, which was 5.5% for 10 microg/L cobalt and 6.9% for 30 microg/L. The accuracy of the procedure was assessed by comparing the results with those found using inductively coupled plasma-optical emission spectrometry. After validation, the procedure was applied to the determination of cobalt in pharmaceutical preparation samples containing cobalamin (vitamin B12).
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
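The core point-cloud-to-voxel step can be sketched as an occupancy-grid computation; a real pipeline along the lines described above would additionally fill interior voxels between inner and outer surfaces and export the occupied voxels as hexahedral finite elements. Function names and sizes below are illustrative only.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map an (N, 3) point cloud to an occupancy grid of cubic voxels.
    Returns the boolean grid and its origin, so voxel (i, j, k) spans
    origin + [i, j, k] * voxel_size. Occupied voxels could then be
    converted one-to-one into hexahedral finite elements."""
    pts = np.asarray(points, float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

cloud = np.random.rand(10000, 3) * [10.0, 6.0, 8.0]   # toy survey, metres
grid, origin = voxelize(cloud, voxel_size=0.5)
print(grid.shape, "voxels,", int(grid.sum()), "occupied")
```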
Numerical Procedures for Inlet/Diffuser/Nozzle Flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.
1998-01-01
Two primitive-variable, pressure-based, flux-split RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. A considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as demonstrated herein.
NASA Astrophysics Data System (ADS)
Jandt, Simon; Laagemaa, Priidik; Janssen, Frank
2014-05-01
The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, from early steps in model development to the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time series at stations, vertical profiles, surface fields or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process, ranging from read-in procedures for datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. the read-in procedures, forms a module in which all available functions of that particular part are collected. The interface between the functions, the modules and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks, and new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned for each validation task in user-specific settings, which are stored externally in so-called namelists and gather all information on the datasets used, as well as paths and metadata. In the framework of the MyOcean-2 project, the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre, whereby the performance of any new product version is compared with the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated, along with several results of the validation process.
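A Python paraphrase (the toolbox itself is MATLAB) of the kind of statistics module such a toolbox might contain: co-located model and observation values in, a small dictionary of comparison metrics out. The function name and metric selection are assumptions, not MyOcean-2 specifics.

```python
import numpy as np

def validation_metrics(model, obs):
    """Basic model-vs-observation statistics of the kind a validation
    toolbox reports: bias, RMSE, and linear correlation. Inputs are
    co-located arrays (e.g. a station time series); NaNs are skipped."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    ok = ~(np.isnan(model) | np.isnan(obs))
    m, o = model[ok], obs[ok]
    return {
        "bias": float(np.mean(m - o)),
        "rmse": float(np.sqrt(np.mean((m - o) ** 2))),
        "corr": float(np.corrcoef(m, o)[0, 1]),
        "n": int(m.size),
    }

obs = np.array([10.2, 10.8, 11.5, np.nan, 12.0])   # toy observations
mod = np.array([10.0, 11.0, 11.2, 11.9, 12.4])     # toy model output
print(validation_metrics(mod, obs))
```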
Ahn, Woojin; Dargar, Saurabh; Halic, Tansel; Lee, Jason; Li, Baichun; Pan, Junjun; Sankaranarayanan, Ganesh; Roberts, Kurt; De, Suvranu
2014-01-01
The first virtual-reality-based simulator for Natural Orifice Translumenal Endoscopic Surgery (NOTES), called the Virtual Translumenal Endoscopic Surgery Trainer (VTEST™), has been developed. VTEST™ aims to simulate a hybrid NOTES cholecystectomy procedure using a rigid scope inserted through the vaginal port. The hardware interface is designed for accurate motion tracking of the scope and laparoscopic instruments to reproduce the unique hand-eye coordination. The haptic-enabled multimodal interactive simulation includes exposing the Calot's triangle and detaching the gall bladder while performing electrosurgery. The developed VTEST™ was demonstrated and validated at NOSCAR 2013.
CAG12 - A CSCM based procedure for flow of an equilibrium chemically reacting gas
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.; Lombard, C. K.
1985-01-01
The Conservative Supra Characteristic Method (CSCM), an implicit upwind Navier-Stokes algorithm, is extended to the numerical simulation of flows in chemical equilibrium. The resulting computer code, known as Chemistry and Gasdynamics Implicit - Version 2 (CAG12), is described. First-order accurate results are presented for inviscid and viscous Mach 20 flows of air past a hemisphere-cylinder. The solution procedure captures the bow shock in a chemically reacting gas, a technique that is needed for simulating high-altitude, rarefied flows. In an initial effort to validate the code, the inviscid results are compared with published gasdynamic and chemistry solutions, and satisfactory agreement is obtained.
Vermeulen, Ph; Fernández Pierna, J A; van Egmond, H P; Zegers, J; Dardenne, P; Baeten, V
2013-09-01
In recent years, near-infrared (NIR) hyperspectral imaging has proved its suitability for quality and safety control in the cereal sector by allowing spectroscopic images to be collected at single-kernel level, which is of great interest to cereal control laboratories. Contaminants in cereals include, inter alia, impurities such as straw, grains from other crops, and insects, as well as undesirable substances such as ergot (sclerotium of Claviceps purpurea). For the cereal sector, the presence of ergot creates a high toxicity risk for animals and humans because of its alkaloid content. A study was undertaken in which a complete procedure for detecting ergot bodies in cereals was developed, based on their NIR spectral characteristics. These were used to build relevant decision rules based on chemometric tools and on the morphological information obtained from the NIR images. The study sought to transfer this procedure from a pilot online NIR hyperspectral imaging system at laboratory level to a NIR hyperspectral imaging system at industrial level and to validate the latter. All the analyses performed showed that the results obtained using both NIR hyperspectral imaging cameras were quite stable and repeatable. In addition, a correlation higher than 0.94 was obtained between the predicted values obtained by NIR hyperspectral imaging and those supplied by the stereo-microscopic method, which is the reference method. The validation of the transferred protocol on blind samples showed that the method could identify and quantify ergot contamination, demonstrating the transferability of the method. These results were obtained on samples with an ergot concentration of 0.02%, which is less than the EC limit for cereals (intervention grains) destined for human consumption, set at 0.05%.
National Variation in Costs and Mortality for Leukodystrophy Patients in U.S. Children’s Hospitals
Brimley, Cameron J; Lopez, Jonathan; van Haren, Keith; Wilkes, Jacob; Sheng, Xiaoming; Nelson, Clint; Korgenski, E. Kent; Srivastava, Rajendu; Bonkowsky, Joshua L.
2013-01-01
Background Inherited leukodystrophies are progressive, debilitating neurological disorders with few treatment options and high mortality rates. Our objective was to determine national variation in the costs for leukodystrophy patients, and to evaluate differences in their care. Methods We developed an algorithm to identify inherited leukodystrophy patients in de-identified data sets using a recursive tree model based on ICD-9-CM diagnosis and procedure charge codes. Validation of the algorithm was performed independently at two institutions, and with data from the Pediatric Health Information System (PHIS) of 43 U.S. children's hospitals, for a seven-year period, 2004–2010. Results A recursive algorithm was developed and validated, based on six ICD-9 codes and one procedure code, that had a sensitivity up to 90% (range 61–90%) and a specificity up to 99% (range 53–99%) for identifying inherited leukodystrophy patients. Inherited leukodystrophy patients comprise 0.4% of admissions to children's hospitals and 0.7% of costs. Over seven years these patients required $411 million of hospital care, or $131,000/patient. Hospital costs for leukodystrophy patients varied at different institutions, ranging from 2 to 15 times the costs for the average pediatric patient. There was a statistically significant correlation between higher volume and increased cost efficiency. Mortality rates showed an inverse relationship with patient volume that was not statistically significant. Conclusions We developed and validated a code-based algorithm for identifying leukodystrophy patients in de-identified national datasets. Leukodystrophy patients account for $59 million of costs yearly at children's hospitals. Our data highlight the potential to reduce unwarranted variability and improve patient care. PMID:23953952
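The abstract does not list the six diagnosis codes or the procedure code, so the sketch below only shows the shape of such a code-based, tree-like case-finding rule; every code set in it is a placeholder, not the study's algorithm.

```python
# Sketch of a code-based case-finding rule of the kind described: a
# short decision tree over ICD-9-CM diagnosis and procedure codes.
# The code sets below are placeholders, NOT the study's actual codes.
LEUKODYSTROPHY_DX = {"330.0", "330.1", "341.1"}   # hypothetical diagnosis codes
CONFIRMATORY_PROC = {"88.91"}                      # hypothetical procedure code

def is_case(diagnosis_codes, procedure_codes):
    """Flag an admission as a probable inherited-leukodystrophy case."""
    dx_hits = LEUKODYSTROPHY_DX & set(diagnosis_codes)
    if not dx_hits:
        return False
    # Recursive-tree flavour: multiple diagnosis hits are accepted outright,
    # while a single hit needs procedure-code support.
    if len(dx_hits) >= 2:
        return True
    return bool(CONFIRMATORY_PROC & set(procedure_codes))

print(is_case(["330.0", "780.39"], ["88.91"]))   # True
print(is_case(["780.39"], ["88.91"]))            # False
```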
POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.
Peña, Edsel A; Habiger, Joshua D; Wu, Wensong
2011-02-01
Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p -values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p -value based procedures whose theoretical validity is contingent on each of these p -value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M , small n " data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
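For reference, the two baseline procedures the paper improves upon can be stated compactly; the sketch below implements the Šidák FWER cutoff and the Benjamini-Hochberg step-up FDR rule on a toy p-value list.

```python
import numpy as np

def sidak(pvals, alpha=0.05):
    """Sidak FWER control: reject p_i if p_i <= 1 - (1 - alpha)^(1/M)."""
    p = np.asarray(pvals, float)
    return p <= 1 - (1 - alpha) ** (1.0 / p.size)

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up FDR control: reject the k smallest p-values, where k is
    the largest i with p_(i) <= (i/M) * alpha."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, p.size + 1) / p.size)
    below = p[order] <= thresh
    reject = np.zeros(p.size, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(sidak(pvals).sum(), "rejections (Sidak FWER)")
print(benjamini_hochberg(pvals).sum(), "rejections (BH FDR)")
```

The step-up logic is what distinguishes BH from a simple per-test cutoff: all p-values up to and including the largest one under its rank-scaled bound are rejected, even if an intermediate one exceeds its own bound.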
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Retrospective validation of renewal-based, medium-term earthquake forecasts
NASA Astrophysics Data System (ADS)
Rotondi, R.
2013-10-01
In this paper, some methods for scoring the performance of an earthquake forecasting probability model are applied retrospectively for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy in each decade of the past century. One aim was to check the capability of the model to reproduce the data by which it was calibrated. The scoring procedures used can be distinguished on the basis of the requirement (or absence) of a reference model and of probability thresholds. Overall, a rank-based score, information gain, gambling scores, indices used in binary predictions and their loss functions are considered. Defining various probability thresholds as percentages of the hazard function makes it possible to propose the values associated with the best forecasting performance as alarm levels in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints on which factor is the most influential and on the suitability of adopting the consequent changes to the data sets. This is achieved by repeating the estimation procedure for the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, completeness appears to be the most influential factor, while there are no clear indications of the usefulness of the decomposition of some composite sources, although in some cases it has led to improvements in the forecasts.
The Aristotle score: a complexity-adjusted method to evaluate surgical results.
Lacour-Gayet, F; Clarke, D; Jacobs, J; Comas, J; Daebritz, S; Daenen, W; Gaynor, W; Hamilton, L; Jacobs, M; Maruszsewski, B; Pozzi, M; Spray, T; Stellin, G; Tchervenkov, C; Mavroudis And, C
2004-06-01
Quality control is difficult to achieve in Congenital Heart Surgery (CHS) because of the diversity of the procedures. It is particularly needed, considering the potential adverse outcomes associated with complex cases. The aim of this project was to develop a new method based on the complexity of the procedures. The Aristotle project, involving a panel of expert surgeons, started in 1999 and included 50 pediatric surgeons from 23 countries, representing the EACTS, STS, ECHSA and CHSS. The complexity adjustment was based on the procedures as defined by the STS/EACTS International Nomenclature and was undertaken in two steps. The first step was establishing the Basic Score, which adjusts only for the complexity of the procedures and is based on three factors: the potential for mortality, the potential for morbidity and the anticipated technical difficulty. A questionnaire was completed by the 50 centers. The second step was the development of the Comprehensive Aristotle Score, which further adjusts the complexity according to the specific patient characteristics and includes two categories of complexity factors, the procedure-dependent and procedure-independent factors. After considering the relationship between complexity and performance, the Aristotle Committee proposes that: Performance = Complexity × Outcome. The Aristotle score allows precise scoring of the complexity of 145 CHS procedures. One interesting notion coming out of this study is that complexity is a constant value for a given patient, regardless of the center where he or she is operated on. The Aristotle complexity score was further applied to 26 centers reporting to the EACTS congenital database. A new display of centers is presented, based on the comparison of hospital survival to complexity and to our proposed definition of performance. In summary, a complexity-adjusted method named the Aristotle Score, based on the complexity of the surgical procedures, has been developed by an international group of experts. The Aristotle score, electronically available, was introduced in the EACTS and STS databases. A validation process evaluating its predictive value is being developed.
Torres, Heloísa de Carvalho; Chaves, Fernanda Figueredo; da Silva, Daniel Dutra Romualdo; Bosco, Adriana Aparecida; Gabriel, Beatriz Diniz; Reis, Ilka Afonso; Rodrigues, Júlia Santos Nunes; Pagano, Adriana Silvina
2016-01-01
ABSTRACT Objective: to translate, adapt and validate the contents of the Diabetes Medical Management Plan for the Brazilian context. This protocol was developed by the American Diabetes Association and guides the procedures of educators in the care of children and adolescents with diabetes in schools. Method: this methodological study was conducted in four stages: initial translation, synthesis of the initial translation, back translation, and content validation by an expert committee composed of 94 specialists (29 applied linguists and 65 health professionals), who evaluated the translated version through an online questionnaire. The level of agreement among the judges was calculated based on the Content Validity Index. Data were exported into the R program for statistical analysis. Results: the evaluation of the instrument showed good agreement between the judges of the Health and Applied Linguistics areas, with mean content validity indices of 0.9 and 0.89, respectively, and slight variability of the index between groups (difference of less than 0.01). The items in the translated version that the judges evaluated as unsatisfactory were reformulated based on the considerations of the professionals of each group. Conclusion: a Brazilian version of the Diabetes Medical Management Plan was constructed, called the Plano de Manejo do Diabetes na Escola. PMID:27508911
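A worked example of the agreement statistic used above: the item-level Content Validity Index is the fraction of judges rating an item as relevant (3 or 4 on the usual 4-point scale). The panel data below are invented.

```python
import numpy as np

def content_validity_index(ratings):
    """Item-level CVI: fraction of judges rating an item 3 or 4 on the
    usual 4-point relevance scale. ratings: (n_judges, n_items) matrix.
    Returns per-item I-CVI and their mean (scale-average CVI)."""
    r = np.asarray(ratings)
    i_cvi = (r >= 3).mean(axis=0)
    return i_cvi, float(i_cvi.mean())

# Toy panel: 5 judges rating 4 translated items.
ratings = np.array([[4, 3, 2, 4],
                    [4, 4, 3, 4],
                    [3, 4, 2, 3],
                    [4, 3, 4, 4],
                    [4, 4, 3, 4]])
i_cvi, s_cvi = content_validity_index(ratings)
print("I-CVI per item:", i_cvi, "| scale mean:", round(s_cvi, 2))
```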
Larsen, C R; Grantcharov, T; Aggarwal, R; Tully, A; Sørensen, J L; Dalsgaard, T; Ottesen, B
2006-09-01
Safe realistic training and unbiased quantitative assessment of technical skills are required for laparoscopy. Virtual reality (VR) simulators may be useful tools for training and assessing basic and advanced surgical skills and procedures. This study aimed to investigate the construct validity of the LapSimGyn VR simulator, and to determine the learning curves of gynecologists with different levels of experience. For this study, 32 gynecologic trainees and consultants (juniors or seniors) were allocated into three groups: novices (0 advanced laparoscopic procedures), intermediate level (>20 and <60 procedures), and experts (>100 procedures). All performed 10 sets of simulations consisting of three basic skill tasks and an ectopic pregnancy program. The simulations were carried out on 3 days within a maximum period of 2 weeks. Assessment of skills was based on time, economy of movement, and error parameters measured by the simulator. The data showed that expert gynecologists performed significantly and consistently better than intermediate and novice gynecologists. The learning curves differed significantly between the groups, showing that experts start at a higher level and more rapidly reach the plateau of their learning curve than do intermediate and novice groups of surgeons. The LapSimGyn VR simulator package demonstrates construct validity on both the basic skills module and the procedural gynecologic module for ectopic pregnancy. Learning curves can be obtained, but to reach the maximum performance for the more complex tasks, 10 repetitions do not seem sufficient at the given task level and settings. LapSimGyn also seems to be flexible and widely accepted by the users.
ERIC Educational Resources Information Center
Herrmann-Abell, Cari F.; DeBoer, George E.
2011-01-01
Distractor-driven multiple-choice assessment items and Rasch modeling were used as diagnostic tools to investigate students' understanding of middle school chemistry ideas. Ninety-one items were developed according to a procedure that ensured content alignment to the targeted standards and construct validity. The items were administered to 13360…
Development and Validation of a Shear Punch Test Fixture
2013-08-01
…composites (MMC) manufactured by friction stir processing (FSP) that are being developed as part of a Technology Investment Fund (TIF) project… leading a team of government departments and academics to develop a friction stir processing (FSP) based procedure to create metal matrix composites… friction stir processing to fabricate surface metal matrix composites in aluminum alloys for potential application in light armoured vehicles.
Modeling Wind Wave Evolution from Deep to Shallow Water
2011-09-30
…validation and calibration of new model developments. WORK COMPLETED: Development of a Lumped Quadruplet Approximation (LQA). To make evaluation of the… interactions based on the WRT method. This Lumped Quadruplet Approximation (LQA) clusters (lumps) contributions to the integrations over the… total transfer rate. A procedure has been developed to test the implementation (of LQA and other reduced versions of the WRT) where 1) the non…
Standard Procedure for Calibrating an Areal Calorimetry Based Dosimeter
2015-05-01
…detector target surface. In this case, the source was on for approximately 2.5 s, shortly after which the data acquisition ends. For this shot, the…
NASA Technical Reports Server (NTRS)
Bingham, Gail; Bates, Scott; Bugbee, Bruce; Garland, Jay; Podolski, Igor; Levinskikh, Rita; Sychev, Vladimir; Gushin, Vadim
2009-01-01
Validating Vegetable Production Unit (VPU) Plants, Protocols, Procedures and Requirements (P3R) Using Currently Existing Flight Resources (Lada-VPU-P3R) is a study to advance the technology required for plant growth in microgravity and to research related food safety issues. Lada-VPU-P3R also investigates the non-nutritional value to the flight crew of developing plants on-orbit. The Lada-VPU-P3R uses the Lada hardware on the ISS and falls under a cooperative agreement between National Aeronautics and Space Administration (NASA) and the Russian Federal Space Association (FSA). Research Summary: Validating Vegetable Production Unit (VPU) Plants, Protocols, Procedures and Requirements (P3R) Using Currently Existing Flight Resources (Lada-VPU-P3R) will optimize hardware and
Evaluation of animal models of neurobehavioral disorders
van der Staay, F Josef; Arndt, Saskia S; Nordquist, Rebecca E
2009-01-01
Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considering whether action is indicated to reduce the discomfort must accompany the scientific evaluation at every stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. Just as animal models may be improved by following the procedure expounded in this paper, the development and evaluation procedure itself may be improved by careful definition of the purpose(s) of a model and by defining better evaluation criteria, based on the proposed use of the model. PMID:19243583
Competency-based medical education for plastic surgery: where do we begin?
Knox, Aaron D C; Gilardino, Mirko S; Kasten, Steve J; Warren, Richard J; Anastakis, Dimitri J
2014-05-01
North American surgical education is beginning to shift toward competency-based medical education, in which trainees complete their training only when competence has been demonstrated through objective milestones. Pressure is mounting to embrace competency-based medical education because of the perception that it provides more transparent standards and increased public accountability. In response to calls for reform from leading bodies in medical education, competency-based medical education is rapidly becoming the standard in training of physicians. The authors summarize the rationale behind the recent shift toward competency-based medical education and creation of the milestones framework. With respect to procedural skills, initial efforts will require the field of plastic surgery to overcome three challenges: identifying competencies (principles and procedures), modeling teaching strategies, and developing assessment tools. The authors provide proposals for how these challenges may be addressed and the educational rationale behind each proposal. A framework for identification of competencies and a stepwise approach toward creation of a principles oriented competency-based medical education curriculum for plastic surgery are presented. An assessment matrix designed to sample resident exposure to core principles and key procedures is proposed, along with suggestions for generating validity evidence for assessment tools. The ideal curriculum should provide exposure to core principles of plastic surgery while demonstrating competence through performance of index procedures that are most likely to benefit graduating residents when entering independent practice and span all domains of plastic surgery. The authors advocate that exploring the role and potential benefits of competency-based medical education in plastic surgery residency training is timely.
Airport Landside - Volume III : ALSIM Calibration and Validation.
DOT National Transportation Integrated Search
1982-06-01
This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...
Hierarchical Clustering on the Basis of Inter-Job Similarity as a Tool in Validity Generalization
ERIC Educational Resources Information Center
Mobley, William H.; Ramsay, Robert S.
1973-01-01
The present research was stimulated by three related problems frequently faced in validation research: viable procedures for combining similar jobs in order to assess the validity of various predictors, for assessing groups of jobs represented in previous validity studies, and for assessing the applicability of validity findings between units.…
Vasak, Christoph; Strbac, Georg D; Huber, Christian D; Lettner, Stefan; Gahleitner, André; Zechner, Werner
2015-02-01
The study aims to evaluate the accuracy of the NobelGuide™ (Medicim/Nobel Biocare, Göteborg, Sweden) concept while minimizing the influence of clinical and surgical parameters. Moreover, the study compared and validated two validation procedures against a reference method. Overall, 60 implants were placed in 10 artificial edentulous mandibles according to the NobelGuide™ protocol. For merging the pre- and postoperative DICOM data sets, three different fusion methods (Triple Scan Technique, NobelGuide™ Validation software, and AMIRA® software [VSG - Visualization Sciences Group, Burlington, MA, USA] as reference) were applied. Discrepancies between the virtual and the actual implant positions were measured. The mean deviations measured with AMIRA® were 0.49 mm (implant shoulder), 0.69 mm (implant apex), and 1.98° (implant axis). The Triple Scan Technique as well as the NobelGuide™ Validation software revealed similar deviations compared with the reference method. A significant correlation between angular and apical deviations was seen (r = 0.53; p < .001). A greater implant diameter was associated with greater deviations (p = .03). The Triple Scan Technique, as a system-independent validation procedure, as well as the NobelGuide™ Validation software are in accordance with the AMIRA® software. The NobelGuide™ system showed similar or smaller spatial and angular deviations compared with others. © 2013 Wiley Periodicals, Inc.
The PDB_REDO server for macromolecular structure model optimization
Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis
2014-01-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342
Akhtar, Kashif; Sugand, Kapil; Sperrin, Matthew; Cobb, Justin; Standfield, Nigel; Gupte, Chinmay
2015-01-01
Virtual-reality (VR) simulation in orthopedic training is still in its infancy, and much of the work has been focused on arthroscopy. We evaluated the construct validity of a new VR trauma simulator for performing dynamic hip screw (DHS) fixation of a trochanteric femoral fracture. 30 volunteers were divided into 3 groups according to the number of postgraduate (PG) years and the amount of clinical experience: novice (1-4 PG years; less than 10 DHS procedures); intermediate (5-12 PG years; 10-100 procedures); expert (> 12 PG years; > 100 procedures). Each participant performed a DHS procedure and objective performance metrics were recorded. These data were analyzed with each performance metric taken as the dependent variable in 3 regression models. There were statistically significant differences in performance between groups for (1) number of attempts at guide-wire insertion, (2) total fluoroscopy time, (3) tip-apex distance, (4) probability of screw cutout, and (5) overall simulator score. The intermediate group performed the procedure most quickly, with the lowest fluoroscopy time, the lowest tip-apex distance, the lowest probability of cutout, and the highest simulator score, which correlated with their frequency of exposure to running the trauma lists for hip fracture surgery. This study demonstrates the construct validity of a haptic VR trauma simulator with surgeons undertaking the procedure most frequently performing best on the simulator. VR simulation may be a means of addressing restrictions on working hours and allows trainees to practice technical tasks without putting patients at risk. The VR DHS simulator evaluated in this study may provide valid assessment of technical skill.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative approximation by threshold value that includes the influence of both the lesion size and the background present during the acquisition. Optimal threshold values representing a correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes and different known radiation environments. These optimal values were normalized to the background and adjusted by regression techniques to a two-variable function of lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method, and based on it an automatic delineation procedure was proposed. This procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function depended linearly on the SBR and showed a negative, inverse dependence on the volume. During the validation of the proposed method, the volume deviations with respect to the real value and to the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
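A minimal sketch of the iterative scheme described above: segment with a threshold, estimate the lesion volume and SBR, re-derive the threshold from the calibration function, and repeat until the volume stabilizes. The calibration coefficients below are invented stand-ins, not the paper's regression fit.

```python
import numpy as np

def calibrated_threshold(volume_ml, sbr):
    """Stand-in for the phantom-derived two-variable adjustment function
    (a term in SBR plus a volume-dependent term). Coefficients are
    invented for illustration only."""
    return 0.25 + 0.03 * sbr + 0.2 / max(volume_ml, 0.1)

def iterative_segmentation(img, background, ml_per_voxel, tol=0.02, max_iter=50):
    """Fixed-point iteration: segment, estimate volume and SBR, re-derive
    the threshold from the calibration function, repeat until the
    segmented volume stabilizes."""
    thr = 0.5 * img.max()                      # initial guess: 50% of max
    prev_vol = np.inf
    for _ in range(max_iter):
        mask = img >= thr
        vol = np.count_nonzero(mask) * ml_per_voxel
        if vol == 0 or abs(vol - prev_vol) <= tol * vol:
            break
        sbr = img[mask].mean() / background
        thr = calibrated_threshold(vol, sbr) * img.max()
        prev_vol = vol
    return mask, vol

# Toy 3D "image": a bright sphere on a uniform background.
z, y, x = np.mgrid[:40, :40, :40]
img = 1.0 + 4.0 * ((x - 20)**2 + (y - 20)**2 + (z - 20)**2 < 8**2)
mask, vol = iterative_segmentation(img, background=1.0, ml_per_voxel=0.01)
print(f"segmented volume: {vol:.1f} ml")
```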
41 CFR 60-3.7 - Use of other validity studies.
Code of Federal Regulations, 2010 CFR
2010-07-01
... studies. 60-3.7 Section 60-3.7 Public Contracts and Property Management Other Provisions Relating to... of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...
41 CFR 60-3.7 - Use of other validity studies.
Code of Federal Regulations, 2011 CFR
2011-07-01
... studies. 60-3.7 Section 60-3.7 Public Contracts and Property Management Other Provisions Relating to... of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...
29 CFR 1607.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2010 CFR
2010-07-01
... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...
40 CFR 86.1341-90 - Test cycle validation criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...
29 CFR 1607.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2014 CFR
2014-07-01
... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...
29 CFR 1607.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2011 CFR
2011-07-01
... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...
29 CFR 1607.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2013 CFR
2013-07-01
... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...
40 CFR 86.1341-90 - Test cycle validation criteria.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...
29 CFR 1607.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2012 CFR
2012-07-01
... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...
40 CFR 86.1341-90 - Test cycle validation criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...
Validity Issues in Clinical Assessment.
ERIC Educational Resources Information Center
Foster, Sharon L.; Cone, John D.
1995-01-01
Validation issues that arise with measures of constructs and behavior are addressed with reference to general reasons for using assessment procedures in clinical psychology. A distinction is made between the representational phase of validity assessment and the elaborative validity phase in which the meaning and utility of scores are examined.…
The Need and Requirements for Validating Damage Detection Capability
2011-09-01
Testing of Airborne Equipment [11], 2) Materials / Structure Certification, 3) NDE (POD) Validation Procedures, 4) Failure Mode Effects and Criticality Analysis (FMECA), and 5) Cost Benefits Analysis [12]. Existing procedures for environmental testing of airborne equipment ensure flight...e.g. ultrasound or eddy current), damage type or failure conditions to detect, criticality of the damage state (e.g. safety of flight), likelihood of
A framework for assessing the adequacy and effectiveness of software development methodologies
NASA Technical Reports Server (NTRS)
Arthur, James D.; Nance, Richard E.
1990-01-01
Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.
Haeckel or Hennig? The Gordian Knot of Characters, Development, and Procedures in Phylogeny.
ERIC Educational Resources Information Center
Dupuis, Claude
1984-01-01
Discusses the conditions for validating customary phylogenetic procedures. Concludes that the requisites of homogeneity and completeness for proved short lineages seem satisfied by the Hennigian but not the Haeckelian procedure. The epistemological antinomy of the two procedures is emphasized for the first time. (Author/RH)
Sauerland, Melanie; Wolfs, Andrea C F; Crans, Samantha; Verschuere, Bruno
2017-11-27
Direct eyewitness identification is widely used, but prone to error. We tested the validity of indirect eyewitness identification decisions using the reaction time-based concealed information test (CIT) for assessing cooperative eyewitnesses' face memory as an alternative to traditional lineup procedures. In a series of five experiments, a total of 401 mock eyewitnesses watched one of 11 different stimulus events that depicted a breach of law. Eyewitness identifications in the CIT were derived from longer reaction times as compared to well-matched foil faces not encountered before. Across the five experiments, the weighted mean effect size d was 0.14 (95% CI 0.08-0.19). The reaction time-based CIT seems unsuited for testing cooperative eyewitnesses' memory for faces. The careful matching of the faces required for a fair lineup or the lack of intent to deceive may have hampered the diagnosticity of the reaction time-based CIT.
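For readers unfamiliar with how a weighted mean effect size such as the d = 0.14 above is obtained, here is a generic inverse-variance (fixed-effect) computation in Python; the numbers in the usage lines are invented for illustration, not the study's data.

import numpy as np

def weighted_mean_effect_size(d, var, z=1.96):
    # Inverse-variance weights: more precise experiments count more.
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var
    d_bar = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return d_bar, (d_bar - z * se, d_bar + z * se)

# Invented example values for five experiments:
d_bar, ci = weighted_mean_effect_size([0.10, 0.20, 0.05, 0.18, 0.12],
                                      [0.004, 0.006, 0.005, 0.008, 0.004])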
Formalizing procedures for operations automation, operator training and spacecraft autonomy
NASA Technical Reports Server (NTRS)
Lecouat, Francois; Desaintvincent, Arnaud
1994-01-01
The generation and validation of operations procedures is a key task of mission preparation that is quite complex and costly. This has motivated the development of software applications supporting procedure preparation. Several such applications have been developed at MATRA MARCONI SPACE (MMS) over the last five years; they are presented in the first section of this paper. The main idea is that if procedures are represented in a formal language, they can be managed more easily with a computer tool and some automatic verifications can be performed. One difficulty is to define a formal language that is easy to use for operators and operations engineers. From the experience of the various procedure management tools developed over the last five years (including the POM, EOA, and CSS projects), MMS has derived OPSMAKER, a generic tool for procedure elaboration and validation. It has been applied to quite different types of missions, ranging from crew procedures (the PREVISE system) and ground control centre management procedures (the PROCSU system) to - most relevant to the present paper - satellite operation procedures (PROCSAT, developed for CNES to support the preparation and verification of SPOT 4 operation procedures, and OPSAT for MMS telecom satellite operation procedures).
Jackson, Lauren S; Al-Taher, Fadwa M; Moorman, Mark; DeVries, Jonathan W; Tippett, Roger; Swanson, Katherine M J; Fu, Tong-Jen; Salter, Robert; Dunaif, George; Estes, Susan; Albillos, Silvia; Gendel, Steven M
2008-02-01
Food allergies affect an estimated 10 to 12 million people in the United States. Some of these individuals can develop life-threatening allergic reactions when exposed to allergenic proteins. At present, the only successful method to manage food allergies is to avoid foods containing allergens. Consumers with food allergies rely on food labels to disclose the presence of allergenic ingredients. However, undeclared allergens can be inadvertently introduced into a food via cross-contact during manufacturing. Although allergen removal through cleaning of shared equipment or processing lines has been identified as one of the critical points for effective allergen control, there is little published information on the effectiveness of cleaning procedures for removing allergenic materials from processing equipment. There also is no consensus on how to validate or verify the efficacy of cleaning procedures. The objectives of this review were (i) to study the incidence and cause of allergen cross-contact, (ii) to assess the science upon which the cleaning of food contact surfaces is based, (iii) to identify best practices for cleaning allergenic foods from food contact surfaces in wet and dry manufacturing environments, and (iv) to present best practices for validating and verifying the efficacy of allergen cleaning protocols.
Corte, Rosa María Muñoz; Estepa, Raúl García; Ramos, Bernardo Santos; Paloma, Francisco Javier Bautista
2009-01-01
To evaluate the quality of the pharmacotherapeutic recommendations included in the Integrated Care Procedures (PAIs, from their initials in Spanish) of the Andalusian Ministry of Health, published up to March 2008, through the design and validation of an assessment tool. The tool was designed on the basis of similar instruments, specifically AGREE; other criteria were taken from various literature sources or devised by the authors. The tool was validated prior to use. After applying it to all the PAIs, we examined the degree of compliance with these pharmacotherapeutic criteria, both as a whole and by PAI subgroups. The developed tool is a questionnaire of 20 items divided into 4 sections. The first section consists of the essential criteria; the rest cover more specific, non-essential criteria: definition of the level of evidence, thoroughness of information, and definition of indicators. Four of the 60 PAIs were found to contain no therapeutic recommendations of any kind. No PAI fulfils all the items listed in the tool; however, 70% of them fulfil the established essential quality criteria. There is great variability in the content of the pharmacotherapeutic recommendations of each PAI. Now that the validity of the tool has been demonstrated, it could be used to assess the quality of the therapeutic recommendations in clinical practice guidelines.
NASA Astrophysics Data System (ADS)
Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix
2017-12-01
Ecohydrological modeling studies in developing regions, such as sub-Saharan Africa, often face the problem of extensive parameter requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information so that it can be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions (EOFs), for distributed daily ecohydrological models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
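The empirical-orthogonal-function step at the core of the calibration can be sketched as an SVD of the space-time anomaly matrix; this is a generic EOF decomposition under the assumption that NDVI maps are flattened to one row per time step, not the TETIS-VEG code itself.

import numpy as np

def eof_decomposition(field, n_modes=3):
    # field: (time, space) array, e.g. one flattened NDVI map per row.
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    eofs = vt[:n_modes]                   # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]    # temporal amplitudes
    return eofs, pcs, explained[:n_modes]

A calibration objective can then penalize the distance between the leading EOFs (and their amplitudes) of observed and simulated NDVI, which is one way the spatio-temporal pattern, rather than only pixel-wise error, can enter the parameter search.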
Moncayo, S; Rosales, J D; Izquierdo-Hornillos, R; Anzano, J; Caceres, J O
2016-09-01
This work reports a simple and fast classification procedure for the quality control of red wines with protected designation of origin (PDO), by means of laser-induced breakdown spectroscopy (LIBS) combined with neural networks (NN), in order to strengthen quality assurance and address authenticity issues. A total of thirty-eight red wine samples from different PDOs were analyzed to detect fake wines and to prevent unfair competition in the market. LIBS is well known for requiring no sample preparation; however, to increase its analytical performance, a new sample preparation treatment was developed based on prior liquid-to-solid transformation of the wine using a dry collagen gel. The use of collagen pellets enabled successful classification while avoiding the limitations and difficulties of working with aqueous samples. The performance of the NN model was assessed by three validation procedures taking into account its sensitivity (internal validation), generalization ability and robustness (independent external validation). The results of coupling a spectroscopic technique with a chemometric analysis (LIBS-NN) are discussed in terms of its potential use in the food industry, providing a methodology able to perform the quality control of alcoholic beverages. Copyright © 2016 Elsevier B.V. All rights reserved.
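As a rough illustration of the NN classification and its internal/external validation, here is a scikit-learn sketch; the network size, preprocessing, and cross-validation scheme are assumptions for illustration, since the abstract does not fix them.

from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def classify_wine_spectra(X, y, X_ext, y_ext):
    # X: LIBS spectra (one row per pellet), y: PDO labels.
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(50,),
                                        max_iter=2000, random_state=0))
    internal = cross_val_score(model, X, y, cv=5).mean()  # sensitivity
    model.fit(X, y)
    external = model.score(X_ext, y_ext)  # generalization / robustness
    return internal, external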
NASA Astrophysics Data System (ADS)
Bourgine, Bernard; Lasseur, Éric; Leynet, Aurélien; Badinier, Guillaume; Ortega, Carole; Issautier, Benoit; Bouchet, Valentin
2015-04-01
In 2012 BRGM launched an extensive program to build the new French Geological Reference platform (RGF). Among the objectives of this program is to provide the public with validated, reliable and 3D-consistent geological data, with estimation of uncertainty. Approximately 100,000 boreholes over the whole French national territory come with a preliminary interpretation in terms of depths of the main geological interfaces, but with an unchecked, unknown and often low reliability. The aim of this paper is to present the procedure that has been tested on two areas in France in order to validate (or not) these boreholes, with the aim of being generalized as much as possible to the nearly 100,000 boreholes awaiting validation. The approach is based on the following steps, and includes the management of uncertainty at each of them: (a) Selection of a loose network of boreholes with logging or coring information that enables a reliable interpretation. This first interpretation is based on the correlation of well log data and allows defining a 3D sequence-stratigraphic framework identifying isochronous surfaces. A litho-stratigraphic interpretation is also performed. Let "A" be the set of all boreholes used for this step (typically 3% of the total number of holes to be validated) and "B" the remaining boreholes to be validated. (b) Geostatistical analysis of characteristic geological interfaces. The analysis is carried out first on the "A" data (to validate the variogram model), then on the "B" data, and finally on "B" given "A". It is based on cross-validation tests and evaluation of the uncertainty associated with each geological interface. In this step, we take into account inequality constraints provided by boreholes that do not intersect all interfaces, as well as the "litho-stratigraphic pile" defining the formations and their relationships (depositional surfaces or erosion). The goal is to identify potential errors among the data quickly and semi-automatically, leaving it to the geologist to check and correct the anomalies. (c) Consistency tests are also used to verify the agreement of interpretations with other constraints (geological map, maximal formation extension limits, digital terrain model ...). (d) Construction of a 3D geological model from the "A" and "B" boreholes: a continuous-surface representation makes it possible to assess the overall consistency and to validate or invalidate interpretations. Standard-deviation maps allow visualizing areas where data from available but not yet validated boreholes could be added to reduce uncertainty. Maps of absolute or relative errors help to quantify and visualize model uncertainty. This procedure helps to quickly identify the main errors in the data. It guarantees rationalization, reproducibility and traceability of the various stages of validation. The automation aspect is obviously important when it comes to dealing with datasets that can contain tens of thousands of surveys. For this, specific tools have been developed by BRGM (GDM/MultiLayer software, R scripts, GIS tools).
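Step (b), the geostatistical cross-validation used to flag suspect boreholes, can be approximated in a few lines; inverse-distance weighting below is a deliberately simplified stand-in for the kriging estimate with a fitted variogram, and the robust three-sigma rule is an assumption.

import numpy as np

def flag_suspect_boreholes(xy, depth, k=8, z_thresh=3.0):
    # xy: (n, 2) array of borehole coordinates; depth: (n,) interface depths.
    # Leave-one-out estimate of each depth from its k nearest neighbours;
    # large standardized residuals mark likely interpretation errors.
    n = len(depth)
    est = np.empty(n)
    for i in range(n):
        d = np.hypot(*(xy - xy[i]).T)
        d[i] = np.inf                      # leave the target borehole out
        nn = np.argsort(d)[:k]
        w = 1.0 / np.maximum(d[nn], 1e-6) ** 2
        est[i] = np.sum(w * depth[nn]) / np.sum(w)
    resid = depth - est
    mad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid - np.median(resid)) > z_thresh * mad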
Malaei, Reyhane; Ramezani, Amir M; Absalan, Ghodratollah
2018-05-04
A sensitive and reliable ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) procedure was developed and validated for the extraction and analysis of malondialdehyde (MDA), an important lipid-peroxidation biomarker, in human plasma. To achieve an applicable extraction procedure, the entire optimization process was performed in human plasma. To convert MDA into a readily extractable species, it was derivatized to a hydrazone by 2,4-dinitrophenylhydrazine (DNPH) at 40 °C within 60 min. The influence of experimental variables on the extraction process, including the type and volume of extraction and disperser solvents, amount of derivatization agent, temperature, pH, ionic strength, and sonication and centrifugation times, was evaluated. Under the optimal experimental conditions, the enhancement factor and extraction recovery were 79.8 and 95.8%, respectively. The analytical signal responded linearly (R² = 0.9988) over a concentration range of 5.00-4000 ng mL⁻¹, with a limit of detection of 0.75 ng mL⁻¹ (S/N = 3) in the plasma sample. To validate the developed procedure, the recommended Food and Drug Administration guidelines for bioanalytical method validation were employed. Copyright © 2018. Published by Elsevier B.V.
Gaipa, Giuseppe; Tilenni, Manuela; Straino, Stefania; Burba, Ilaria; Zaccagnini, Germana; Belotti, Daniela; Biagi, Ettore; Valentini, Marco; Perseghin, Paolo; Parma, Matteo; Campli, Cristiana Di; Biondi, Andrea; Capogrossi, Maurizio C; Pompilio, Giulio; Pesce, Maurizio
2010-01-01
The aim of the present study was to develop and validate a good manufacturing practice (GMP)-compliant procedure for the preparation of bone marrow (BM)-derived CD133+ cells for cardiovascular repair. Starting from available laboratory protocols to purify CD133+ cells from human cord blood, we implemented these procedures in a GMP facility and applied quality control conditions defining purity, microbiological safety and vitality of CD133+ cells. Validation of the CD133+ cell isolation and release process was performed according to a two-step experimental program comprising release quality checking (step 1) as well as 'proofs of principle' of their phenotypic integrity and biological function (step 2). This testing program was accomplished using in vitro culture assays and in vivo testing in an immunosuppressed mouse model of hindlimb ischemia. These criteria and procedures were successfully applied to GMP production of CD133+ cells from the BM for an ongoing clinical trial of autologous stem cell administration into patients with ischemic cardiomyopathy. Our results show that GMP implementation of currently available protocols for CD133+ cell selection is feasible and reproducible, and enables the production of cells having a full biological potential according to the most recent quality requirements by European Regulatory Agencies. PMID:19627397
Efficient automatic OCR word validation using word partial format derivation and language model
NASA Astrophysics Data System (ADS)
Chen, Siyuan; Misra, Dharitri; Thoma, George R.
2010-01-01
In this paper we present an OCR validation module, implemented for the System for Preservation of Electronic Resources (SPER) developed at the U.S. National Library of Medicine. The module detects and corrects suspicious words in the OCR output of scanned textual documents through a procedure of deriving partial formats for each suspicious word, retrieving candidate words by partial-match search from lexicons, and comparing the joint probabilities of N-gram and OCR edit transformation corresponding to the candidates. The partial format derivation, based on OCR error analysis, efficiently and accurately generates candidate words from lexicons represented by ternary search trees. In our test case comprising a historic medico-legal document collection, this OCR validation module yielded the correct words with 87% accuracy and reduced the overall OCR word errors by around 60%.
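The candidate-ranking step, combining an N-gram language model with an OCR edit-transformation model, amounts to maximizing a joint log-probability; a schematic version follows, in which bigram_logprob and edit_logprob are hypothetical stand-ins for SPER's trained models.

def best_correction(candidates, ocr_word, prev_word,
                    bigram_logprob, edit_logprob):
    # Joint score = language-model probability of the candidate in context
    # + probability that OCR turned the candidate into the observed word.
    def score(cand):
        return (bigram_logprob(prev_word, cand)
                + edit_logprob(ocr_word, cand))
    return max(candidates, key=score)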
[Interest of positioning control in onboard imaging and its delegation to the therapists].
de Crevoisier, R; Duvergé, L; Hulot, C; Chauvet, B; Henry, O; Bouvet, C; Castelli, J
2016-10-01
Delegation of onboard-imaging position control from the radiation oncologist to the therapist is justified by the spread of image-guided radiotherapy (IGRT) techniques, which are particularly time-consuming. This delegation is, however, partial: position validation can clearly be performed by the therapist when registration is based on bony landmarks or fiducials. The radiation oncologist must perform the validation in cases of large target displacement, more complex soft-tissue-based registration, or stereotactic body radiation therapy. Moreover, this delegation implies at least three conditions: training of the staff, formalization of the procedures, responsibilities and delegations, and evaluation of IGRT practices. Copyright © 2016. Published by Elsevier SAS.
Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?
Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J
2014-02-01
Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
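The group comparison described above is a standard two-sample Mann-Whitney U-test applied per metric; a minimal SciPy sketch (the variable names are illustrative):

from scipy.stats import mannwhitneyu

def compare_metric(general_surgeons, colorectal_experts, alpha=0.05):
    # Nonparametric two-sided comparison of one simulator metric
    # between experience groups; construct validity requires p < alpha.
    stat, p = mannwhitneyu(general_surgeons, colorectal_experts,
                           alternative="two-sided")
    return stat, p, p < alpha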
Brown, Jeremiah R; MacKenzie, Todd A; Maddox, Thomas M; Fly, James; Tsai, Thomas T; Plomondon, Mary E; Nielson, Christopher D; Siew, Edward D; Resnic, Frederic S; Baker, Clifton R; Rumsfeld, John S; Matheny, Michael E
2015-12-11
Acute kidney injury (AKI) occurs frequently after cardiac catheterization and percutaneous coronary intervention. Although a clinical risk model exists for percutaneous coronary intervention, no models exist for both procedures, nor do existing models account for risk factors prior to the index admission. We aimed to develop such a model for use in prospective automated surveillance programs in the Veterans Health Administration. We collected data on all patients undergoing cardiac catheterization or percutaneous coronary intervention in the Veterans Health Administration from January 01, 2009 to September 30, 2013, excluding patients with chronic dialysis, end-stage renal disease, renal transplant, and missing pre- and postprocedural creatinine measurement. We used 4 AKI definitions in model development and included risk factors from up to 1 year prior to the procedure and at presentation. We developed our prediction models for postprocedural AKI using the least absolute shrinkage and selection operator (LASSO) and internally validated using bootstrapping. We developed models using 115 633 angiogram procedures and externally validated using 27 905 procedures from a New England cohort. Models had cross-validated C-statistics of 0.74 (95% CI: 0.74-0.75) for AKI, 0.83 (95% CI: 0.82-0.84) for AKIN2, 0.74 (95% CI: 0.74-0.75) for contrast-induced nephropathy, and 0.89 (95% CI: 0.87-0.90) for dialysis. We developed a robust, externally validated clinical prediction model for AKI following cardiac catheterization or percutaneous coronary intervention to automatically identify high-risk patients before and immediately after a procedure in the Veterans Health Administration. Work is ongoing to incorporate these models into routine clinical practice. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
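A generic version of the LASSO-plus-bootstrap development step can be sketched with scikit-learn; the penalty strength and the optimism-correction scheme below are assumptions for illustration, not the published specification.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def bootstrap_validated_cstat(X, y, n_boot=200, C=0.1):
    # Fit an L1-penalized (LASSO-type) logistic model and correct the
    # apparent C-statistic (AUC) for optimism estimated by bootstrapping.
    def fit(X_, y_):
        return LogisticRegression(penalty="l1", solver="liblinear",
                                  C=C).fit(X_, y_)
    model = fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for b in range(n_boot):
        Xb, yb = resample(X, y, random_state=b)
        mb = fit(Xb, yb)
        optimism.append(roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])
                        - roc_auc_score(y, mb.predict_proba(X)[:, 1]))
    return apparent - float(np.mean(optimism))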
An ontology-based framework for bioinformatics workflows.
Digiampietri, Luciano A; Perez-Alcazar, Jose de J; Medeiros, Claudia Bauzer
2007-01-01
The proliferation of bioinformatics activities brings new challenges - how to understand and organise these resources, how to exchange and reuse successful experimental procedures, and to provide interoperability among data and tools. This paper describes an effort toward these directions. It is based on combining research on ontology management, AI and scientific workflows to design, reuse and annotate bioinformatics experiments. The resulting framework supports automatic or interactive composition of tasks based on AI planning techniques and takes advantage of ontologies to support the specification and annotation of bioinformatics workflows. We validate our proposal with a prototype running on real data.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
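For orientation, the density power divergence underlying these estimators has the standard form due to Basu et al.; the expressions below are quoted from the general literature rather than from the article itself:

d_\alpha(g, f_\theta) = \int f_\theta^{1+\alpha} \, dx
  - \Big(1 + \tfrac{1}{\alpha}\Big) \int f_\theta^{\alpha}\, g \, dx
  + \tfrac{1}{\alpha} \int g^{1+\alpha} \, dx, \qquad \alpha > 0,

so that for observations X_1, \dots, X_n the minimum density power divergence estimator minimizes the empirical objective

H_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \int f_\theta^{1+\alpha} \, dx
  - \Big(1 + \tfrac{1}{\alpha}\Big) f_\theta(X_i)^{\alpha} \Big],

recovering the maximum likelihood estimator as \alpha \to 0 and trading efficiency for robustness as \alpha grows.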
[Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].
Suhánszki, Norbert; Haidegger, Tamás
2014-12-01
Robotic assistance has become a leading trend in minimally invasive surgery (MIS), building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities that are acquired through a tedious learning procedure, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is the ability to assess surgeons' skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new forms of skill parameterization. This may be revolutionary in MIS training, enabling new and objective surgical curricula and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedures and utility. In the coming years, these methods will become the mainstream of Western surgical education.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.
Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, J; Vu, A; Koester, C
The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
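The kernel of such a simulation procedure, transferring empirical source functions through simplified Green's functions, is a slip-weighted sum of convolutions; the following schematic sketch assumes one empirical source function and one Green's function per subfault and is not the authors' code.

import numpy as np

def simulate_ground_motion(source_functions, green_functions, slip_weights, dt):
    # Convolve each subfault's empirical source function with the
    # Green's function for its propagation path, scale by slip, and sum.
    n = max(len(s) + len(g) - 1
            for s, g in zip(source_functions, green_functions))
    motion = np.zeros(n)
    for s, g, w in zip(source_functions, green_functions, slip_weights):
        contrib = w * np.convolve(s, g)
        motion[:len(contrib)] += contrib
    t = np.arange(n) * dt
    return t, motion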
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-11-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols, and greenhouse gases, and is based on the Integrated Forecasting System of the European Centre for Medium-Range Weather Forecasts (ECMWF). The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past 3 years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high-pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P
2017-12-01
The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.
Siu, B W M; Au-Yeung, C C Y; Chan, A W L; Chan, L S Y; Yuen, K K; Leung, H W; Yan, C K; Ng, K K; Lai, A C H; Davies, S; Collins, M
Mapping forensic psychiatric services with the security needs of patients is a salient step in service planning, audit and review. A valid and reliable instrument for measuring the security needs of Chinese forensic psychiatric inpatients was not yet available. This study aimed to develop and validate the Chinese version of the Security Needs Assessment Profile for measuring the profiles of security needs of Chinese forensic psychiatric inpatients. The Security Needs Assessment Profile by Davis was translated into Chinese. Its face validity, content validity, construct validity and internal consistency reliability were assessed by measuring the security needs of 98 Chinese forensic psychiatric inpatients. Principal factor analysis for construct validity provided a six-factor security needs model explaining 68.7% of the variance. Based on the Cronbach's alpha coefficient, the internal consistency reliability was rated as acceptable for procedural security (0.73), and fair for both physical security (0.62) and relational security (0.58). A significant sex difference (p=0.002) in total security score was found. The Chinese version of the Security Needs Assessment Profile is a valid and reliable instrument for assessing the security needs of Chinese forensic psychiatric inpatients. Copyright © 2017 Elsevier Ltd. All rights reserved.
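The internal consistency figures quoted above are Cronbach's alpha values; for reference, a minimal computation over an (n_respondents, n_items) score matrix:

import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)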
Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.
Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan
2015-08-01
Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure, which could be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. The risk score model was developed based on multiple regression coefficients. Sensitivity and specificity of the new risk score system were validated in the validation cohort, and comparisons between the new risk score model and previously reported models were applied. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of chronic kidney disease (CKD), intra-aortic balloon pump (IABP) and peripheral vascular disease (PVD) were identified as independent risk factors of post-PCI CIN. A novel risk score model was established using multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917, 95% CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
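A common way to turn multivariate regression coefficients into a bedside risk score, offered here as a hedged illustration of the general construction rather than the authors' exact scheme (the coefficients in the example are invented):

def points_from_coefficients(coefs):
    # Divide each coefficient by the smallest absolute coefficient and
    # round, so the weakest independent risk factor scores 1 point.
    ref = min(abs(b) for b in coefs.values())
    return {name: int(round(b / ref)) for name, b in coefs.items()}

# Hypothetical coefficients for illustration only:
points = points_from_coefficients({
    "age > 75": 0.9, "BMI > 25": 0.5, "CKD history": 1.6,
    "IABP": 1.1, "peripheral vascular disease": 0.7,
})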
Baczyńska, Anna K.; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed. PMID:27014111
Reliability Prediction Approaches For Domestic Intelligent Electric Energy Meter Based on IEC62380
NASA Astrophysics Data System (ADS)
Li, Ning; Tong, Guanghua; Yang, Jincheng; Sun, Guodong; Han, Dongjun; Wang, Guixian
2018-01-01
The reliability of the intelligent electric energy meter is a crucial issue considering its large-scale deployment and the safety of the national intelligent grid. This paper develops a reliability prediction procedure for domestic intelligent electric energy meters according to IEC 62380, with particular attention to determining model parameters under domestic working conditions. A case study is provided to show the procedure's effectiveness and validity.
ERIC Educational Resources Information Center
Mazza, M.; Di Rienzo, A.; Costagliola, C.; Roncone, R.; Casacchia, M.; Ricci, A.; Galzio, R.J.
2004-01-01
Based on the observation of the course of callosal fibres and of their arteriovenous support as appearing in a microanatomic study, the Authors propose a variant of the standard callosotomy procedure by introducing transverse section of the callosal fibres. This technique would allow the surgeon to spare a larger number of callosal fibres by…
A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Edmonston, Leon P.; Randall, Robert S.
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?
Code of Federal Regulations, 2013 CFR
2013-10-01
... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2013-10-01 2013-10-01 false What is validity testing, and are laboratories...
49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?
Code of Federal Regulations, 2011 CFR
2011-10-01
... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2011-10-01 2011-10-01 false What is validity testing, and are laboratories...
49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?
Code of Federal Regulations, 2010 CFR
2010-10-01
... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2010-10-01 2010-10-01 false What is validity testing, and are laboratories...
49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?
Code of Federal Regulations, 2012 CFR
2012-10-01
... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2012-10-01 2012-10-01 false What is validity testing, and are laboratories...
49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?
Code of Federal Regulations, 2014 CFR
2014-10-01
... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2014-10-01 2014-10-01 false What is validity testing, and are laboratories...
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 4 2011-07-01 2011-07-01 false Use of other validity studies. 1607.7 Section 1607.7 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection...
40 CFR 86.1341-98 - Test cycle validation criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-98 Test cycle validation criteria. Section 86.1341-98 includes text that specifies...-90 (d)(4), shall be excluded from both cycle validation and the integrated work used for emissions...