Sample records for validation procedures

  1. Assessing Procedural Competence: Validity Considerations.

    PubMed

    Pugh, Debra M; Wood, Timothy J; Boulet, John R

    2015-10-01

    Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.

  2. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates, and thus fail to better inform data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-satellite Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and by the false precipitation over large areas in the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model
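
    As a rough illustration of the two-step procedure described in this abstract, the sketch below decomposes paired satellite/gauge daily rain fields into hit, missed, and false components and then fits a multiplicative error model to the hit events by log-linear regression. The function names, the 0.1 mm/day rain threshold, and the model form est = A * ref**B * eps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def decompose_errors(est, ref, thresh=0.1):
    """Split total retrieval error into hit, missed, and false components.

    est, ref : 1-D arrays of satellite estimates and gauge reference (mm/day)
    thresh   : assumed rain/no-rain threshold (mm/day)
    """
    hit   = (est >= thresh) & (ref >= thresh)
    false = (est >= thresh) & (ref <  thresh)
    miss  = (est <  thresh) & (ref >= thresh)
    hit_error     = np.sum(est[hit] - ref[hit])   # over/under-estimation on hits
    false_precip  = np.sum(est[false])            # rain reported where none fell
    missed_precip = -np.sum(ref[miss])            # rain missed entirely (negative)
    return hit_error, false_precip, missed_precip

def fit_multiplicative(est, ref, thresh=0.1):
    """Fit est = A * ref**B * eps over hit events by log-linear regression;
    the spread of the log residuals plays the role of the random error."""
    hit = (est >= thresh) & (ref >= thresh)
    x, y = np.log(ref[hit]), np.log(est[hit])
    B, logA = np.polyfit(x, y, 1)                 # slope, intercept
    sigma = np.std(y - (logA + B * x))            # random error component
    return np.exp(logA), B, sigma
```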

  3. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    PubMed

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based selection procedure. Further development of the instruments and research on predictive validity should be pursued.

  4. Is Earth-based scaling a valid procedure for calculating heat flows for Mars?

    NASA Astrophysics Data System (ADS)

    Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle

    2013-09-01

    Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows by scaling the present-day terrestrial heat flow to Mars, but the so-obtained values are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and cast doubt on the validity of the results obtained by these works. Here, however, we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.

  5. In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.

    2012-01-01

    In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace, in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.

  6. Reliability and validity of procedure-based assessments in otolaryngology training.

    PubMed

    Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S

    2015-06-01

    To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London, from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. Individual domain scores, an overall calculated score (cS), and the number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. There were 3,152 otolaryngology PBAs from 46 otolaryngology trainees analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs: +0.681 and +0.324, respectively). STs had higher cS and pS than CTs (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; P < .001). cS and pS increased from CT1 to ST8, showing construct validity (rs: +0.348 and +0.354, respectively; P < .001). The technical skill domain had the highest utilization (98% of PBAs) and was the best predictor of cS and pS (rs: +0.96 and +0.66, respectively). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention.
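
    For context, Cronbach's alpha, the reliability statistic quoted above, can be computed from an assessments-by-items score matrix as in the sketch below, assuming items are coded numerically (e.g. 'satisfactory' = 1, 'development required' = 0); this is the generic formula, not the study's pipeline.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (assessments x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# e.g. three PBAs scored on four items (1 = satisfactory, 0 = development required)
print(cronbach_alpha([[1, 1, 1, 0], [1, 0, 1, 0], [1, 1, 1, 1]]))
```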

  7. In Defense of an Instrument-Based Approach to Validity

    ERIC Educational Resources Information Center

    Hood, S. Brian

    2012-01-01

    Paul E. Newton argues in favor of a conception of validity, viz., "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…

  8. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have a role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.

  9. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, and available propulsive models and an existing wind tunnel facility were used to reduce program costs and increase the likelihood of success. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  10. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    PubMed Central

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  11. Validation of biological activity testing procedure of recombinant human interleukin-7.

    PubMed

    Lutsenko, T N; Kovalenko, M V; Galkin, O Yu

    2017-01-01

    A validation procedure for a method of monitoring the biological activity of recombinant human interleukin-7 has been developed and conducted according to the requirements of national and international recommendations. The method is based on the ability of recombinant human interleukin-7 to induce proliferation of T lymphocytes. It has been shown that peripheral blood mononuclear cells (PBMCs) derived from blood, or cell lines, can be used to control the biological activity of recombinant human interleukin-7. The validation characteristics that should be determined depend on the method, the type of product or object of test/measurement, and the biological test systems used in research. The validation procedure for the method of controlling the biological activity of recombinant human interleukin-7 in peripheral blood mononuclear cells showed satisfactory results for all parameters tested: specificity, accuracy, precision and linearity.

  12. Harmonization of strategies for the validation of quantitative analytical procedures. A SFSTP proposal--Part I.

    PubMed

    Hubert, Ph; Nguyen-Huu, J-J; Boulanger, B; Chapuzet, E; Chiap, P; Cohen, N; Compagnon, P-A; Dewé, W; Feinberg, M; Lallier, M; Laurentie, M; Mercier, N; Muzard, G; Nivet, C; Valat, L

    2004-11-15

    This paper is the first part of a summary report of a new commission of the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP). The main objective of this commission was the harmonization of approaches for the validation of quantitative analytical procedures. Indeed, the principle of validating these procedures is today widespread in all domains of activity where measurements are made. Nevertheless, the simple question of whether or not an analytical procedure is acceptable for a given application remains incompletely resolved in several cases, despite the various regulations relating to good practices (GLP, GMP, ...) and other documents of normative character (ISO, ICH, FDA, ...). There are many official documents describing the criteria of validation to be tested, but they do not propose any experimental protocol and most often limit themselves to general concepts. For those reasons, two previous SFSTP commissions elaborated validation guides to concretely help industrial scientists in charge of drug development to apply those regulatory recommendations. While these first two guides contributed widely to the use and progress of analytical validations, they nevertheless present weaknesses regarding the conclusions of the performed statistical tests and the decisions to be made with respect to the acceptance limits defined by the use of an analytical procedure. The present paper proposes to revisit the very bases of analytical validation to develop a harmonized approach, notably by distinguishing diagnosis rules from decision rules. The latter rule is based on the use of the accuracy profile, uses the notion of total error, and simplifies the validation of an analytical procedure while controlling the risk associated with its usage. Thanks to this novel validation approach, it is possible to unambiguously demonstrate the fitness for purpose of a new method as stated in all regulatory

  13. Validity evidence for procedural competency in virtual reality robotic simulation, establishing a credible pass/fail standard for the vaginal cuff closure procedure.

    PubMed

    Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg

    2018-03-30

    The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test for procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. The internal consistency was assessed using Cronbach's alpha, and a composite score was calculated based on metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups' method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score for all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study has gathered validity evidence for a simulation-based test for procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future
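
    The contrasting groups' method mentioned above places the pass mark where the novice and expert score distributions intersect. A minimal sketch under an assumed normal model (the paper's exact computation is not given):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def contrasting_groups_cutoff(novice, expert):
    """Pass/fail score where the two fitted normal densities intersect.
    Assumes expert scores exceed novice scores on average and that the
    densities cross somewhere between the two group means."""
    m_n, s_n = np.mean(novice), np.std(novice, ddof=1)
    m_e, s_e = np.mean(expert), np.std(expert, ddof=1)
    f = lambda s: norm.pdf(s, m_n, s_n) - norm.pdf(s, m_e, s_e)
    return brentq(f, m_n, m_e)  # root between the group means
```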

  14. Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Mantay, Wayne R.

    1989-01-01

    Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.

  15. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
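
    A sketch of how a bias detection curve could be simulated along the lines the abstract describes: a fixed bias is added to historical results, the stream is truncated, and the number of results until the moving average exceeds its control limits is recorded; the median over many random starting points gives one point on the curve. All parameter names and defaults are illustrative assumptions.

```python
import numpy as np

def n_to_detect(stream, bias, window, limits, trunc):
    """Number of results until the moving average of bias-shifted,
    truncated results falls outside the MA control limits (None if never)."""
    s = np.clip(np.asarray(stream, float) + bias, *trunc)
    for i in range(window, len(s) + 1):
        ma = s[i - window:i].mean()
        if not (limits[0] <= ma <= limits[1]):
            return i
    return None

def bias_detection_curve(history, biases, n_sim=500, run=200, **ma_cfg):
    """Median number of results needed for detection, per simulated bias."""
    rng = np.random.default_rng(42)
    curve = []
    for b in biases:
        starts = rng.integers(0, len(history) - run, size=n_sim)
        counts = [n_to_detect(history[s:s + run], b, **ma_cfg) for s in starts]
        detected = [c for c in counts if c is not None]
        curve.append(np.median(detected) if detected else None)
    return curve
```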

  16. Validity, reliability and support for implementation of independence-scaled procedural assessment in laparoscopic surgery.

    PubMed

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-06-01

    There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate and an almost competent trainee were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon tests and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps with an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.
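
    For reference, the intra-class correlation used here for inter-rater reliability can be computed from a subjects-by-raters matrix. Below is a generic sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the paper does not state which ICC form was used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an (n subjects x k raters) matrix (Shrout & Fleiss convention)."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    col_m, row_m, gm = Y.mean(axis=0), Y.mean(axis=1), Y.mean()
    ms_r = n * ((col_m - gm) ** 2).sum() / (k - 1)   # between-raters MS
    ms_s = k * ((row_m - gm) ** 2).sum() / (n - 1)   # between-subjects MS
    ms_e = ((Y - row_m[:, None] - col_m[None, :] + gm) ** 2).sum() \
           / ((n - 1) * (k - 1))                     # residual MS
    return (ms_s - ms_e) / (ms_s + (k - 1) * ms_e + k * (ms_r - ms_e) / n)
```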

  17. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real-time data from plant status databases. Without the ability to perform logical operations, the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as
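
    To make the data-architecture idea concrete, here is a hypothetical fragment of one procedure step encoded in XML and read back with Python's standard library. The element and attribute names are invented for illustration and are not the Idaho National Laboratory schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical step markup: the 'type' attribute tells the CBPS whether to
# show reference information, request a decision, or accept user input.
STEP_XML = """
<step id="4.2" type="decision">
  <instruction>Verify pump discharge pressure is within the required band.</instruction>
  <input source="plant" tag="PI-402" unit="psig"/>
  <branch when="in-band" goto="4.3"/>
  <branch when="out-of-band" goto="5.1"/>
</step>
"""

step = ET.fromstring(STEP_XML)
print(step.get("id"), step.get("type"))
for branch in step.findall("branch"):
    print(branch.get("when"), "->", branch.get("goto"))
```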

  18. Development and validation of a screening procedure to identify speech-language delay in toddlers with cleft palate.

    PubMed

    Jørgensen, Line Dahl; Willadsen, Elisabeth

    2017-01-01

    The purpose of this study was to develop and validate a clinically useful speech-language screening procedure for young children with cleft palate ± cleft lip (CP) to identify those in need of speech-language intervention. Twenty-two children with CP were assigned to +/- need-for-intervention conditions based on an assessment of consonant inventory using a real-time listening procedure in combination with parent-reported expressive vocabulary. These measures allowed evaluation of early speech-language skills found to correlate significantly with later speech-language performance in longitudinal studies of children with CP. The external validity of this screening procedure was evaluated by comparing the +/- need-for-intervention assignment determined by the screening procedure to experienced speech-language pathologists' (SLPs') clinical judgement of whether or not a child needed early intervention. The results of the real-time listening assessment showed good to excellent inter-rater agreement on different consonant inventory measures. Furthermore, the almost perfect agreement between the children selected for intervention with the screening procedure and the clinical judgement of experienced SLPs indicates that the screening procedure is a valid way of identifying children with CP who need early intervention.

  19. Validity of diagnoses, procedures, and laboratory data in Japanese administrative data.

    PubMed

    Yamana, Hayato; Moriwaki, Mutsuko; Horiguchi, Hiromasa; Kodan, Mariko; Fushimi, Kiyohide; Yasunaga, Hideo

    2017-10-01

    Validation of recorded data is a prerequisite for studies that utilize administrative databases. The present study evaluated the validity of diagnosis and procedure records in the Japanese Diagnosis Procedure Combination (DPC) data, along with laboratory test results in the newly introduced Standardized Structured Medical Record Information Exchange (SS-MIX) data. Between November 2015 and February 2016, we conducted chart reviews of 315 patients hospitalized between April 2014 and March 2015 in four middle-sized acute-care hospitals in Shizuoka, Kochi, Fukuoka, and Saga Prefectures and used them as reference standards. The sensitivity and specificity of the DPC data in identifying 16 diseases and 10 common procedures were evaluated. The accuracy of the SS-MIX data for 13 laboratory test results was also examined. The specificity of diagnoses in the DPC data exceeded 96%, while the sensitivity was below 50% for seven diseases and variable across diseases. When limited to primary diagnoses, the sensitivity and specificity were 78.9% and 93.2%, respectively. The sensitivity of procedure records exceeded 90% for six procedures, and the specificity exceeded 90% for nine procedures. Agreement between the SS-MIX data and the chart reviews was above 95% for all 13 items. The validity of the diagnosis and procedure records in the DPC data and the laboratory results in the SS-MIX data was high in general, supporting their use in future studies.
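
    The validation arithmetic behind these figures reduces to comparing each database flag against chart review as the reference standard; a minimal sketch with assumed inputs:

```python
def sens_spec(db_flag, chart_flag):
    """Sensitivity and specificity of a database code against chart review
    (the reference standard); inputs are parallel sequences of booleans."""
    pairs = list(zip(db_flag, chart_flag))
    tp = sum(d and c for d, c in pairs)
    fn = sum(not d and c for d, c in pairs)
    tn = sum(not d and not c for d, c in pairs)
    fp = sum(d and not c for d, c in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# e.g. a code recorded for 3 of 4 chart-confirmed cases and 1 of 6 non-cases
print(sens_spec([1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
                [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]))
```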

  20. Validation of a Video-based Game-Understanding Test Procedure in Badminton.

    ERIC Educational Resources Information Center

    Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko

    2000-01-01

    Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…

  1. Towards objective hand hygiene technique assessment: validation of the ultraviolet-dye-based hand-rubbing quality assessment procedure.

    PubMed

    Lehotsky, Á; Szilágyi, L; Bánsághi, S; Szerémy, P; Wéber, G; Haidegger, T

    2017-09-01

    Ultraviolet spectrum markers are widely used for hand hygiene quality assessment, although their microbiological validation has not been established. A microbiology-based assessment of the procedure was conducted. Twenty-five artificial hand models underwent initial full contamination, then disinfection with a UV-dyed hand-rub solution; digital imaging under UV light, microbiological sampling and cultivation, and digital imaging of the cultivated flora were then performed. Paired images of each hand model were registered by a software tool, and the UV-marked regions were compared with the pathogen-free sites pixel by pixel. Statistical evaluation revealed that the method indicates correctly disinfected areas with 95.05% sensitivity and 98.01% specificity.
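
    The pixel-by-pixel comparison could be expressed with registered boolean masks (UV-dyed pixels versus pathogen-free pixels), from which the reported sensitivity and specificity follow; a sketch, not the authors' software:

```python
import numpy as np

def pixel_sens_spec(uv_marked, pathogen_free):
    """Pixel-wise agreement between a UV-dye coverage mask and a
    pathogen-free mask (boolean arrays registered to the same hand model).
    Sensitivity: share of disinfected pixels the dye marks;
    specificity: share of still-colonized pixels the dye leaves unmarked."""
    tp = np.sum(uv_marked & pathogen_free)
    fn = np.sum(~uv_marked & pathogen_free)
    tn = np.sum(~uv_marked & ~pathogen_free)
    fp = np.sum(uv_marked & ~pathogen_free)
    return tp / (tp + fn), tn / (tn + fp)
```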

  2. Validating abortion procedure coding in Canadian administrative databases.

    PubMed

    Samiedaluie, Saied; Peterson, Sandra; Brant, Rollin; Kaczorowski, Janusz; Norman, Wendy V

    2016-07-12

    The British Columbia (BC) Ministry of Health collects abortion procedure data in the Medical Services Plan (MSP) physician billings database and in the hospital information Discharge Abstracts Database (DAD). Our study seeks to validate abortion procedure coding in these databases. Two randomized controlled trials enrolled a cohort of 1031 women undergoing abortion. The researcher-collected database includes both enrollment and follow-up chart-review data. The study cohort was linked to MSP and DAD data to identify all abortion events captured in the administrative databases. We compared clinical chart data on abortion procedures with health administrative data. We considered a match to occur if an abortion-related code was found in the administrative data within 30 days of the date of the same event documented in a clinical chart. Among 1158 abortion events performed during the enrollment and follow-up periods, 99.1% were found in at least one of the administrative data sources. The sensitivities of the two databases, evaluated against a gold standard, were 97.7% (95% confidence interval (CI): 96.6-98.5) for the MSP database and 91.9% (95% CI: 90.0-93.4) for the DAD. Abortion events coded in the BC health administrative databases are highly accurate. Single-payer health administrative databases at the provincial level in Canada have the potential to offer valid data reflecting abortion events. ClinicalTrials.gov Identifier NCT01174225, Current Controlled Trials ISRCTN19506752.
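
    The matching rule described above (an abortion-related code within 30 days of the charted event) reduces to a simple window comparison; a sketch with assumed data shapes and a placeholder code:

```python
from datetime import date, timedelta

def matched(chart_date, admin_events, abortion_codes, window_days=30):
    """True if any administrative record within the window of the charted
    event carries an abortion-related code.
    admin_events: iterable of (service_date, code) tuples."""
    window = timedelta(days=window_days)
    return any(code in abortion_codes and abs(service_date - chart_date) <= window
               for service_date, code in admin_events)

# sensitivity of a database = matched chart events / all chart events
print(matched(date(2011, 5, 10), [(date(2011, 5, 20), "X999")], {"X999"}))  # placeholder code
```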

  3. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) General Principles § 60-3.6 Use of selection procedures which have not been validated. A. Use of alternate... 41 Public Contracts and Property Management 1 2010-07-01 Use of selection procedures which have not been validated. 60-3.6 Section 60-3.6 Public Contracts and Property Management...

  4. Pilot In-Trail Procedure Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Bussink, Frank J. L.; Murdoch, Jennifer L.; Chamberlain, James P.; Chartrand, Ryan; Jones, Kenneth M.

    2008-01-01

    A Human-In-The-Loop experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) to investigate the viability of the In-Trail Procedure (ITP) concept from a flight crew perspective, by placing participating airline pilots in a simulated oceanic flight environment. The test subject pilots used new onboard avionics equipment that provided improved information about nearby traffic and enabled them, when specific criteria were met, to request an ITP flight level change referencing one or two nearby aircraft that might otherwise block the flight level change. The subject pilots' subjective assessments of ITP validity and acceptability were measured via questionnaires and discussions, and their objective performance in appropriately selecting, requesting, and performing ITP flight level changes was evaluated for each simulated flight scenario. Objective performance and subjective workload assessment data from the experiment's test conditions were analyzed for statistical and operational significance and are reported in the paper. Based on these results, suggestions are made to further improve the ITP.

  5. More than 95% completeness of reported procedures in the population-based Dutch Arthroplasty Register

    PubMed Central

    van Steenbergen, Liza N; Spooren, Anneke; van Rooden, Stephanie M; van Oosterhout, Frank J; Morrenhof, Jan W; Nelissen, Rob G H H

    2015-01-01

    Background and purpose A complete and correct national arthroplasty register is indispensable for the quality of arthroplasty outcome studies. We evaluated the coverage, completeness, and validity of the Dutch Arthroplasty Register (LROI) for hip and knee arthroplasty. Patients and methods The LROI is a nationwide population-based registry with information on joint arthroplasties in the Netherlands. Completeness of entered procedures was validated in 2 ways: (1) by comparison with the number of reimbursements for arthroplasty surgeries (Vektis database), and (2) by comparison with data from hospital information systems (HISs). The validity was examined by conducting checks on missing or incorrectly coded values in the LROI. Results The LROI contains over 300,000 hip and knee arthroplasties performed since 2007. Coverage of all Dutch hospitals (n = 100) was reached in 2012. Completeness of registered procedures was 98% for hip arthroplasty and 96% for knee arthroplasty in 2012, based on Vektis data. Based on comparison with data from the HIS, completeness of registered procedures was 97% for primary total hip arthroplasty and 96% for primary knee arthroplasty in 2013. Completeness of revision arthroplasty was 88% for hips and 90% for knees in 2013. The proportion of missing or incorrectly coded values of variables was generally less than 0.5%, except for encrypted personal identity numbers (17% of which were missing) and ASA scores (10% of which were missing). Interpretation The LROI now contains over 300,000 hip and knee arthroplasty procedures, with coverage of all hospitals. It has a good level of completeness (i.e. more than 95% for primary hip and knee arthroplasty procedures in 2012 and 2013) and the database has high validity. PMID:25758646

  6. Development and Validation of a Novel Robotic Procedure Specific Simulation Platform: Partial Nephrectomy.

    PubMed

    Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S

    2015-08-01

    We developed a novel procedure-specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of the simulation (face validity) and its usefulness for training (content validity). Concurrent validity was evaluated as the correlation between the virtual reality renorrhaphy task and live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure-specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure specific

  7. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  8. Validation of Short-Term Noise Assessment Procedures: FY16 Summary of Procedures, Progress, and Preliminary Results

    DTIC Science & Technology

    Validation project. This report describes the procedure used to generate the noise models' output dataset, and then it compares that dataset to the...benchmark, the Engineer Research and Development Center's Long-Range Sound Propagation dataset. It was found that the models consistently underpredict the

  9. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in 2 channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. The system is also useful for validating visual flight procedure designs, and it aids flight procedure design.

  10. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test.

  11. Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language

    ERIC Educational Resources Information Center

    Romig, John Elwood; Therrien, William J.; Lloyd, John W.

    2017-01-01

    We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…

  12. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...

  13. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...

  14. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...

  15. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...

  16. URANS simulations of the tip-leakage cavitating flow with verification and validation procedures

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin

    2018-04-01

    In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
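
    Richardson-extrapolation-based error estimation, the basis of the seven estimators mentioned, takes solutions on three systematically refined grids and returns an observed order of accuracy, an extrapolated value, and a grid convergence index. A generic sketch with illustrative values (the paper's exact estimators may differ):

```python
import math

def richardson(f1, f2, f3, r):
    """Observed order and extrapolated solution from fine (f1), medium (f2)
    and coarse (f3) grid solutions with a constant refinement ratio r."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)               # Richardson extrapolation
    gci = 1.25 * abs((f1 - f2) / f1) / (r**p - 1)       # fine-grid convergence index
    return p, f_exact, gci

print(richardson(0.9713, 0.9630, 0.9300, 2.0))  # illustrative values only
```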

  17. Validation of Procedures for Monitoring Crewmember Immune Function - Short Duration Biological Investigation

    NASA Technical Reports Server (NTRS)

    Sams, Clarence; Crucian, Brian; Stowe, Raymond; Pierson, Duane; Mehta, Satish; Morukov, Boris; Uchakin, Peter; Nehlsen-Cannarella, Sandra

    2008-01-01

    Validation of Procedures for Monitoring Crew Member Immune Function - Short Duration Biological Investigation (Integrated Immune-SDBI) will assess the clinical risks resulting from the adverse effects of space flight on the human immune system and will validate a flight-compatible immune monitoring strategy. Immune system changes will be monitored by collecting and analyzing blood, urine and saliva samples from crewmembers before, during and after space flight.

  18. Comparative Validity of the Shedler and Westen Assessment Procedure-200

    ERIC Educational Resources Information Center

    Mullins-Sweatt, Stephanie N.; Widiger, Thomas A.

    2008-01-01

    A predominant dimensional model of general personality structure is the five-factor model (FFM). Quite a number of alternative instruments have been developed to assess the domains of the FFM. The current study compares the validity of 2 alternative versions of the Shedler and Westen Assessment Procedure (SWAP-200) FFM scales, 1 that was developed…

  19. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.

    In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
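
    The method of manufactured solutions named above verifies a code by choosing an analytic solution, deriving the source term that makes it satisfy the model equations, and checking that the discretization error converges at the scheme's formal order. A toy example for a 1-D diffusion equation, not the GBS model:

```python
import sympy as sp

# Manufacture a smooth solution for the 1-D diffusion equation
# u_t - nu * u_xx = S(x, t), then derive the source S that forces it.
x, t, nu = sp.symbols("x t nu", positive=True)
u_m = sp.sin(sp.pi * x) * sp.exp(-t)            # chosen manufactured solution
S = sp.diff(u_m, t) - nu * sp.diff(u_m, x, 2)   # required source term
print(sp.simplify(S))
# Adding S to the solver must reproduce u_m; the error norm against u_m
# should then shrink at the scheme's formal order under grid refinement.
```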

  20. Characterization of the olfactory impact around a wastewater treatment plant: optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll

    2012-08-01

    A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling with Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while the recovery efficiency of the procedure, 93.0 +/- 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.

  1. Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.

    PubMed

    Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J

    2017-10-01

    Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a post-module survey. This was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid.

  2. Validation of standard operating procedures in a multicenter retrospective study to identify -omics biomarkers for chronic low back pain.

    PubMed

    Dagostino, Concetta; De Gregori, Manuela; Gieger, Christian; Manz, Judith; Gudelj, Ivan; Lauc, Gordan; Divizia, Laura; Wang, Wei; Sim, Moira; Pemberton, Iain K; MacDougall, Jane; Williams, Frances; Van Zundert, Jan; Primorac, Dragan; Aulchenko, Yurii; Kapural, Leonardo; Allegri, Massimo

    2017-01-01

    Chronic low back pain (CLBP) is one of the most common medical conditions, ranking as the greatest contributor to global disability and accounting for huge societal costs based on the Global Burden of Disease 2010 study. Large genetic and -omics studies provide a promising avenue for the screening, development and validation of biomarkers useful for personalized diagnosis and treatment (precision medicine). Multicentre studies are needed for such an effort, and a standardized and homogeneous approach is vital for recruitment of large numbers of participants among different centres (clinical and laboratory) to obtain robust and reproducible results. To date, no validated standard operating procedures (SOPs) for genetic/-omics studies in chronic pain have been developed. In this study, we validated an SOP model that will be used in the multicentre (5 centres) retrospective "PainOmics" study, funded by the European Community in the 7th Framework Programme, which aims to develop new biomarkers for CLBP through three different -omics approaches: genomics, glycomics and activomics. The SOPs, which all the centres involved in the study have to follow, describe the specific procedures for (1) blood collection, (2) sample processing and storage, (3) shipping details and (4) cross-check testing and validation before assays. Multivariate analysis revealed the absolute specificity and homogeneity of the samples collected by the five centres for all genetics, glycomics and activomics analyses. The SOPs used in our multicentre study have been validated. Hence, they could represent an innovative tool for the correct management and collection of reliable samples in other large -omics-based multicentre studies.

  3. Validation of internet-based self-reported anthropometric, demographic data and participant identity in the Food4Me study

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity and validated anth...

  4. Reverberation Chamber Uniformity Validation and Radiated Susceptibility Test Procedures for the NASA High Intensity Radiated Fields Laboratory

    NASA Technical Reports Server (NTRS)

    Koppen, Sandra V.; Nguyen, Truong X.; Mielnik, John J.

    2010-01-01

    The NASA Langley Research Center's High Intensity Radiated Fields Laboratory has developed a capability based on the RTCA/DO-160F Section 20 guidelines for radiated electromagnetic susceptibility testing in reverberation chambers. Phase 1 of the test procedure utilizes mode-tuned stirrer techniques and E-field probe measurements to validate chamber uniformity, determine chamber loading effects, and define a radiated susceptibility test process. The test procedure is segmented into numbered operations that are largely software controlled. This document is intended as a laboratory test reference and includes diagrams of test setups and equipment lists, as well as test results and analysis. Phase 2 of development is discussed.
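
    For illustration, chamber uniformity in mode-tuned testing is typically judged by the spread, in dB, of the per-location maximum E-field normalized by input power; the sketch below computes that spread. All values, and the 3 dB figure in the comment, are assumptions for illustration rather than numbers from this report.

        # Chamber-uniformity sketch: at each probe location/axis, take the max
        # |E| over all tuner positions, normalize by sqrt(net input power), and
        # compute the spread (in dB) across locations. Illustrative numbers
        # only; see DO-160/IEC 61000-4-21 for the actual limits (~3 dB).
        import numpy as np

        rng = np.random.default_rng(0)
        e_max = rng.uniform(90, 110, size=(8, 3))   # 8 locations x 3 axes, V/m (hypothetical)
        p_input = 10.0                              # net input power, W (hypothetical)

        e_norm = e_max / np.sqrt(p_input)
        sigma_db = 20 * np.log10(1 + e_norm.std(ddof=1) / e_norm.mean())
        print(f"field-uniformity spread: {sigma_db:.2f} dB (compare to the 3 dB guideline)")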

  5. Positive predictive value of cardiac examination, procedure and surgery codes in the Danish National Patient Registry: a population-based validation study

    PubMed Central

    Adelborg, Kasper; Sundbøll, Jens; Munch, Troels; Frøslev, Trine; Sørensen, Henrik Toft; Bøtker, Hans Erik; Schmidt, Morten

    2016-01-01

    Objective: Danish medical registries are widely used for cardiovascular research, but little is known about the data quality of cardiac interventions. We computed positive predictive values (PPVs) of codes for cardiac examinations, procedures and surgeries registered in the Danish National Patient Registry during 2010–2012. Design: Population-based validation study. Setting: We randomly sampled patients from 1 university hospital and 2 regional hospitals in the Central Denmark Region. Participants: 1239 patients undergoing different cardiac interventions. Main outcome measure: PPVs with medical record review as reference standard. Results: A total of 1233 medical records (99% of the total sample) were available for review. PPVs ranged from 83% to 100%. For examinations, the PPV was overall 98%, reflecting PPVs of 97% for echocardiography, 97% for right heart catheterisation and 100% for coronary angiography. For procedures, the PPV was 98% overall, with PPVs of 98% for thrombolysis, 92% for cardioversion, 100% for radiofrequency ablation, 98% for percutaneous coronary intervention, and 100% for both cardiac pacemakers and implantable cardiac defibrillators. For cardiac surgery, the overall PPV was 99%, encompassing PPVs of 100% for mitral valve surgery, 99% for aortic valve surgery, 98% for coronary artery bypass graft surgery, and 100% for heart transplantation. The accuracy of coding was consistent within age, sex, and calendar year categories, and the agreement between independent reviewers was high (99%). Conclusions: Cardiac examinations, procedures and surgeries have high PPVs in the Danish National Patient Registry. PMID:27940630
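
    The core computation in a validation study of this kind is the positive predictive value with a confidence interval. A minimal sketch, using a Wilson score interval and hypothetical counts (not the registry's data):

        # PPV of a registry code against medical-record review: PPV = TP/(TP+FP),
        # with a Wilson score interval. Counts below are illustrative only.
        from math import sqrt

        def ppv_wilson(tp, fp, z=1.96):
            n = tp + fp
            p = tp / n
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return p, centre - half, centre + half

        p, lo, hi = ppv_wilson(tp=118, fp=2)   # hypothetical echocardiography counts
        print(f"PPV = {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")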

  6. Are we really measuring what we say we're measuring? Using video techniques to supplement traditional construct validation procedures.

    PubMed

    Podsakoff, Nathan P; Podsakoff, Philip M; Mackenzie, Scott B; Klinger, Ryan L

    2013-01-01

    Several researchers have persuasively argued that the most important evidence to consider when assessing construct validity is whether variations in the construct of interest cause corresponding variations in the measures of the focal construct. Unfortunately, the literature provides little practical guidance on how researchers can go about testing this. Therefore, the purpose of this article is to describe how researchers can use video techniques to test whether their scales measure what they purport to measure. First, we discuss how researchers can develop valid manipulations of the focal construct that they hope to measure. Next, we explain how to design a study to use this manipulation to test the validity of the scale. Finally, comparing and contrasting traditional and contemporary perspectives on validation, we discuss the advantages and limitations of video-based validation procedures. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  7. Characterization of the olfactory impact around a wastewater treatment plant: Optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Coll Lozano, Caterina

    2012-08-01

    A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling using Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.
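
    Palmes-type diffusion tubes convert a collected analyte mass into an average ambient concentration through Fick's first law, C = mL/(DAt). The sketch below applies this relation with hypothetical tube geometry, diffusion coefficient, and collected mass; none of the values are from the study.

        # Palmes-tube concentration from Fick's first law: the sampled mass m
        # over exposure time t relates to ambient concentration C by
        # C = m * L / (D * A * t). All values are illustrative assumptions.
        L = 7.1e-2         # diffusion path length, m (hypothetical tube)
        A = 9.5e-5         # tube cross-section, m^2 (hypothetical)
        D = 1.76e-5        # diffusion coefficient of H2S in air, m^2/s (assumed)
        t = 5 * 24 * 3600  # 5-day exposure, s
        m = 1.0e-11        # collected mass, kg (hypothetical fluorometric result)

        C = m * L / (D * A * t)   # kg/m^3
        print(f"mean H2S concentration over exposure: {C * 1e9:.2f} ug/m^3")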

  8. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  9. Validation of a New Procedure for Impedance Eduction in Flow

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Jones, M. G.

    2010-01-01

    A new impedance eduction procedure is validated by comparing the educed impedance spectrum to that of an older but well-tested eduction procedure. The older procedure requires the installation of a microphone array in the liner test section, but the new procedure removes this requirement. A 12.7-mm stainless steel plate and a conventional liner consisting of a perforated plate bonded to a honeycomb core are tested. Test data are acquired from a grazing-flow, impedance tube facility for a range of source frequencies and mean flow Mach numbers for which only plane waves are cut on. For the stainless steel plate, the educed admittance spectrum using the new procedure shows an improvement over that of the old procedure. This improvement shows up primarily in the educed conductance spectrum. Both eduction procedures show discrepancies in educed admittance in the mid-frequency range. Indications are that this discrepancy is triggered by an inconsistency between the measured eduction data (which contain boundary layer effects) and the two eduction models (for which the boundary layer is neglected). For the conventional liner, both eduction procedures are in very good agreement with each other. Small discrepancies occur for one or two frequencies in the mid-frequency range and for frequencies beyond the cut-on frequency of higher-order duct modes. The discrepancy in the mid-frequency range occurs because an automated optimizer is used to educe the impedance and the objective function used by the optimizer is extremely flat and therefore sensitive to initial starting values. The discrepancies at frequencies beyond the cut-on frequency of higher-order duct modes are due to the assumption of only plane waves in the impedance eduction model, although higher-order modes are propagating in the impedance tube facility.
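
    At its core, impedance eduction is an inverse problem: choose the impedance that makes a forward duct-propagation model reproduce the measured acoustic data. The sketch below shows only that optimization structure; the forward model is a toy stand-in, not the duct physics used in the paper.

        # Impedance eduction in miniature: pick the complex impedance Z that
        # best reproduces "measured" pressures under a forward model. The toy
        # model below stands in for a real duct-propagation code.
        import numpy as np
        from scipy.optimize import minimize

        freqs = np.linspace(500, 3000, 6)

        def forward_model(z, f):
            # Toy stand-in: a smooth complex response depending on Z and frequency.
            return np.exp(-1j * f / 1000.0) / (z + 1j * f / 1500.0)

        z_true = 1.2 - 0.8j
        p_meas = forward_model(z_true, freqs)        # synthetic "measurement"

        def objective(x):
            z = x[0] + 1j * x[1]
            return np.sum(np.abs(forward_model(z, freqs) - p_meas) ** 2)

        res = minimize(objective, x0=[1.0, 0.0], method="Nelder-Mead")
        print("educed impedance:", complex(res.x[0], res.x[1]))

    As the abstract notes, a flat objective function makes the result sensitive to the starting guess x0, which is exactly the failure mode such a least-squares eduction can exhibit.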

  10. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  11. Validation protocol of analytical procedures for quantification of drugs in polymeric systems for parenteral administration: dexamethasone phosphate disodium microparticles.

    PubMed

    Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel

    2013-12-15

    In this work, a protocol is developed to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, covering both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test). This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated, in the range from 10 to 50 μg mL(-1) for the assay:content test procedure and from 0.25 to 10 μg mL(-1) for the assay:dissolution test procedure. The robustness of the analytical method to extract drug from microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, but the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled-release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
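
    Linearity and accuracy checks of the kind described are typically reduced to a regression of detector response on nominal concentration plus back-calculated recovery. A minimal sketch with hypothetical peak areas for the 10-50 μg/mL content-test range:

        # Linearity and accuracy sketch for assay validation: regress peak area
        # on nominal concentration, then report recovery of a spiked sample.
        # All data are illustrative, not the study's.
        import numpy as np

        conc = np.array([10, 20, 30, 40, 50], dtype=float)       # ug/mL nominal
        area = np.array([152, 305, 449, 607, 753], dtype=float)  # hypothetical areas

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

        spiked_area = 381.0                                      # hypothetical spike
        back_calc = (spiked_area - intercept) / slope
        recovery = 100 * back_calc / 25.0                        # nominal 25 ug/mL
        print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.4f}")
        print(f"recovery at 25 ug/mL: {recovery:.1f}%")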

  12. The Validity of Selection and Classification Procedures for Predicting Job Performance.

    DTIC Science & Technology

    1987-04-01

    [The scanned source text for this record is largely illegible OCR output. Recoverable fragments include contents entries ("Alternative Selection Procedures"; "Meta-Analyses of Validities"; "Meta-Analytic Comparisons of ...") and an abbreviation list (e.g., General Aptitude Test Battery; GM: General Maintenance; GS: General Science; GVN: Cognitive Ability; HS&T: Health, Social and Technology; K: Motor Coordination; KFM).]

  13. A closer look at diagnosis in clinical dental practice: part 1. Reliability, validity, specificity and sensitivity of diagnostic procedures.

    PubMed

    Pretty, Iain A; Maupomé, Gerardo

    2004-04-01

    Dentists are involved in diagnosing disease in every aspect of their clinical practice. A range of tests, systems, guides and equipment--which can be generally referred to as diagnostic procedures--are available to aid in diagnostic decision making. In this era of evidence-based dentistry, and given the increasing demand for diagnostic accuracy and properly targeted health care, it is important to assess the value of such diagnostic procedures. Doing so allows dentists to weight appropriately the information these procedures supply, to purchase new equipment if it proves more reliable than existing equipment or even to discard a commonly used procedure if it is shown to be unreliable. This article, the first in a 6-part series, defines several concepts used to express the usefulness of diagnostic procedures, including reliability and validity, and describes some of their operating characteristics (statistical measures of performance), in particular, specificity and sensitivity. Subsequent articles in the series will discuss the value of diagnostic procedures used in daily dental practice and will compare today's most innovative procedures with established methods.
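
    The operating characteristics defined in this article all fall out of a single 2x2 table against a gold standard. A minimal sketch with illustrative counts:

        # Operating characteristics of a diagnostic procedure from a 2x2 table
        # against a gold standard. Counts are illustrative only.
        def diagnostics(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),   # P(test+ | disease)
                "specificity": tn / (tn + fp),   # P(test- | no disease)
                "ppv": tp / (tp + fp),           # P(disease | test+)
                "npv": tn / (tn + fn),           # P(no disease | test-)
            }

        for name, value in diagnostics(tp=45, fp=5, fn=10, tn=140).items():
            print(f"{name}: {value:.2f}")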

  14. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. Had the CBCT system models been calibrated by other means, the current procedure could have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation
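
    One simple way to pose the source-model calibration described here is as a linear inverse problem: recover non-negative energy-bin weights of the spectrum from dose readings behind known filters. The sketch below uses non-negative least squares on a coarse, entirely hypothetical three-bin spectrum; it illustrates the structure of the problem, not the authors' implementation.

        # Source-model calibration in miniature: recover energy-bin weights of
        # a spectrum from transmission (dose) measurements behind known filters
        # via non-negative least squares. Attenuation values are illustrative.
        import numpy as np
        from scipy.optimize import nnls

        energies = np.array([40.0, 60.0, 80.0])            # keV bins (coarse, hypothetical)
        mu_al = np.array([0.15, 0.08, 0.05])               # Al attenuation per mm (assumed)
        thicknesses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])  # mm of added Al filtration

        # Row i: relative dose behind thickness[i] = sum_j exp(-mu_j * t_i) * w_j
        A = np.exp(-np.outer(thicknesses, mu_al))
        w_true = np.array([0.2, 0.5, 0.3])
        doses = A @ w_true                                  # synthetic "measurements"

        w_fit, residual = nnls(A, doses)
        print("recovered bin weights:", np.round(w_fit, 3), "residual:", residual)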

  15. Validity of a semantically cued recall procedure for the mini-mental state examination.

    PubMed

    Yuspeh, R L; Vanderploeg, R D; Kershaw, D A

    1998-10-01

    The validity of supplementing the three-item recall portion of the Mini-Mental State Examination (MMSE) with a cued recall procedure to help specify the nature of patients' memory problems was examined. Subjects were 247 individuals representing three diagnostic groups: Alzheimer's disease (AD), subcortical vascular ischemic dementia (SVaD), and normal controls. Individuals were administered a battery of neuropsychological tests, including the MMSE, as part of a comprehensive evaluation for the presence of dementia or other neurologic disorder. MMSE performance differed among groups. Three-item free recall performance also differed among groups, with post hoc analyses revealing that the AD and SVaD groups were more impaired than controls but did not differ significantly from each other. After a cued recall procedure for the three MMSE items, groups again differed, with post hoc analyses showing that AD patients failed to benefit from cues, whereas SVaD patients performed significantly better and comparably to control subjects. Significant correlations between the MMSE three-item cued recall performance and other memory measures demonstrated concurrent validity. Consistent with previous research indicating that SVaD is associated with memory encoding and retrieval deficits, whereas AD is associated with consolidation and storage problems, the present study supported the validity of the cued recall procedure for the three MMSE items in helping to distinguish between patients with AD and those with a vascular dementia with primarily subcortical pathology. Despite these findings, a more extensive battery of neuropsychological measures is still recommended to consistently assess subtle diagnostic differences in these memory processes.

  16. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  17. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  18. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  19. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  20. 40 CFR Appendix D to Part 63 - Alternative Validation Procedure for EPA Waste and Wastewater Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Alternative Validation Procedure for EPA Waste and Wastewater Methods D Appendix D to Part 63 Protection of Environment ENVIRONMENTAL... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. D Appendix D to Part 63—Alternative Validation...

  1. A Fourier-based textural feature extraction procedure

    NASA Technical Reports Server (NTRS)

    Stromberg, W. D.; Farr, T. G.

    1986-01-01

    A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
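
    The features described reduce to the fraction of spectral energy in concentric annuli of the image's 2-D Fourier transform. A minimal sketch, using a random array as a stand-in for an image patch:

        # Annular Fourier features: share of spectral energy in concentric
        # frequency bands of the image's 2-D FFT. Illustrative sketch.
        import numpy as np

        def annular_features(img, n_rings=4):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            h, w = img.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - h / 2, xx - w / 2)
            edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
            total = spec.sum()
            return [spec[(r >= lo) & (r < hi)].sum() / total
                    for lo, hi in zip(edges[:-1], edges[1:])]

        texture = np.random.default_rng(1).random((64, 64))  # stand-in image patch
        print([round(f, 3) for f in annular_features(texture)])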

  2. Prediction of Protein-Protein Interaction Sites with Machine-Learning-Based Data-Cleaning and Post-Filtering Procedures.

    PubMed

    Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun

    2016-04-01

    Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: first, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPI predictors and should supplement existing PPI prediction methods.
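
    A minimal sketch of the data-cleaning idea, under the assumption (ours, not necessarily the paper's exact scheme) that "marginal" majority samples are those a preliminary classifier scores close to the positive class; the 0.4 threshold and the synthetic data are illustrative.

        # Data-cleaning sketch for class imbalance: train a preliminary
        # classifier, drop majority-class (negative) samples it scores as
        # likely positive (marginal, boundary-blurring cases), then retrain.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=0)

        prelim = LogisticRegression(max_iter=1000).fit(X, y)
        p_pos = prelim.predict_proba(X)[:, 1]
        keep = (y == 1) | ((y == 0) & (p_pos < 0.4))   # clean marginal negatives

        final = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
        print(f"kept {keep.sum()} of {len(y)} training samples")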

  3. Using formal methods for content validation of medical procedure documents.

    PubMed

    Cota, Érika; Ribeiro, Leila; Bezerra, Jonas Santos; Costa, Andrei; da Silva, Rosiana Estefane; Cota, Gláucia

    2017-08-01

    We propose the use of a formal approach to support content validation of a standard operating procedure (SOP) for a therapeutic intervention. Such an approach provides a useful tool to identify ambiguities, omissions and inconsistencies, and improves the applicability and efficacy of documents in health settings. We apply and evaluate a methodology originally proposed for the verification of software specification documents to a specific SOP. The verification methodology uses the graph formalism to model the document. Semi-automatic analysis identifies possible problems in the model and in the original document. The verification is an iterative process that identifies possible faults in the original text that should be revised by its authors and/or specialists. The proposed method was able to identify 23 possible issues in the original document (ambiguities, omissions, redundant information, and inaccuracies, among others). The formal verification process aided the specialists to consider a wider range of usage scenarios and to identify which instructions form the kernel of the proposed SOP and which ones represent additional or required knowledge that is mandatory for the correct application of the medical document. By using the proposed verification process, a simpler and yet more complete SOP could be produced. As a consequence, during the validation process the experts received a more mature document and could focus on the technical aspects of the procedure itself. Copyright © 2017 Elsevier B.V. All rights reserved.
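
    A toy illustration of the document-as-graph idea: model SOP steps as nodes with "next step" edges and mechanically flag omissions such as unreachable steps or dead ends. The step names are hypothetical, and the checks are far simpler than the paper's methodology.

        # Document-as-graph sketch: SOP steps as nodes, "next step" edges, and
        # simple structural checks run before the content-validation review.
        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([
            ("start", "verify patient id"),
            ("verify patient id", "prepare dose"),
            ("prepare dose", "administer"),
            ("administer", "record outcome"),
        ])
        g.add_node("check allergies")   # written in the SOP but never referenced

        unreachable = [n for n in g if n != "start" and not nx.has_path(g, "start", n)]
        dead_ends = [n for n in g if g.out_degree(n) == 0 and n != "record outcome"]
        print("unreachable steps:", unreachable)
        print("dead ends:", dead_ends)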

  4. Clinical-scale validation of a new efficient procedure for cryopreservation of ex vivo expanded cord blood hematopoietic stem and progenitor cells.

    PubMed

    Duchez, Pascale; Rodriguez, Laura; Chevaleyre, Jean; De La Grange, Philippe Brunet; Ivanovic, Zoran

    2016-12-01

    Survival of ex vivo expanded hematopoietic stem cells (HSC) and progenitor cells is low with the standard cryopreservation procedure. We recently showed that the efficiency of cryopreservation of these cells may be greatly enhanced by adding a serum-free xeno-free culture medium (HP01 Macopharma), which improves the antioxidant and biochemical properties of the cryopreservation solution. Here we present the clinical-scale validation of this cryopreservation procedure. The hematopoietic cells expanded in clinical-scale cultures were cryopreserved applying the new HP01-based procedure. The viability, apoptosis rate and number of functional committed progenitors (methyl-cellulose colony forming cell test), short-term repopulating HSCs (primary recipient NSG mice) and long-term HSCs (secondary recipient NSG mice) were tested before and after thawing. The efficiency of clinical-scale procedure reproduced the efficiency of cryopreservation obtained earlier in miniature sample experiments. Furthermore, the full preservation of short- and long-term HSCs was obtained in clinical scale conditions. Because the results obtained in clinical-scale volume are comparable to our earlier results in miniature-scale cultures, the clinical-scale procedure should be considered validated. It allows cryopreservation of the whole ex vivo expanded culture content, conserving full short- and long-term HSC activity. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  5. Spatial distribution of cosmetic-procedure businesses in two U.S. cities: a pilot mapping and validation study.

    PubMed

    Austin, S Bryn; Gordon, Allegra R; Kennedy, Grace A; Sonneville, Kendrin R; Blossom, Jeffrey; Blood, Emily A

    2013-12-06

    Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on U.V. indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. Geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry.

  6. Spatial Distribution of Cosmetic-Procedure Businesses in Two U.S. Cities: A Pilot Mapping and Validation Study

    PubMed Central

    Austin, S. Bryn; Gordon, Allegra R.; Kennedy, Grace A.; Sonneville, Kendrin R.; Blossom, Jeffrey; Blood, Emily A.

    2013-01-01

    Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on U.V. indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. Geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry. PMID:24322394

  7. Consistency of FMEA used in the validation of analytical procedures.

    PubMed

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define its own ranking scales for the probability of severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified failure modes above the 90th percentile of RPN values as needing urgent corrective action, and failure modes falling between the 75th and 90th percentiles of RPN values as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action, respectively, with two being commonly identified. Of the failure modes needing necessary corrective action, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA always be carried out under the supervision of an experienced FMEA facilitator and that the FMEA team have at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
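
    A minimal sketch of the RPN bookkeeping described: RPN = S x O x D per failure mode, with the 90th and 75th percentile cut-offs used to flag urgent and necessary corrective actions. Scores are randomly generated for illustration.

        # RPN sketch: RPN = S * O * D per failure mode; flag modes above the
        # 90th percentile (urgent) and between the 75th and 90th (necessary).
        import numpy as np

        rng = np.random.default_rng(2)
        s, o, d = (rng.integers(1, 11, size=30) for _ in range(3))  # 1-10 scales
        rpn = s * o * d

        p75, p90 = np.percentile(rpn, [75, 90])
        urgent = np.flatnonzero(rpn > p90)
        necessary = np.flatnonzero((rpn > p75) & (rpn <= p90))
        print(f"urgent modes: {urgent}, necessary modes: {necessary}")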

  8. Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.

    2008-01-01

    The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.

  9. Assessing Women's Responses to Sexual Threat: Validity of a Virtual Role-Play Procedure

    ERIC Educational Resources Information Center

    Jouriles, Ernest N.; Rowe, Lorelei Simpson; McDonald, Renee; Platt, Cora G.; Gomez, Gabriella S.

    2011-01-01

    This study evaluated the validity of a role-play procedure that uses virtual reality technology to assess women's responses to sexual threat. Forty-eight female undergraduate students were randomly assigned to either a standard, face-to-face role-play (RP) or a virtual role-play (VRP) of a sexually coercive situation. A multimethod assessment…

  10. Feasibility and validity of International Classification of Diseases based case mix indices.

    PubMed

    Yang, Che-Ming; Reinke, William

    2006-10-06

    Severity of illness is an omnipresent confounder in health services research. Resource consumption can be applied as a proxy of severity. The most commonly cited hospital resource consumption measure is the case mix index (CMI), and the best-known illustration of the CMI is the Diagnosis Related Group (DRG) CMI used by Medicare in the U.S. For countries that do not have DRG-type CMIs, the adjustment for severity has been troublesome for either reimbursement or research purposes. The research objective of this study is to ascertain the construct validity of CMIs derived from the International Classification of Diseases (ICD) in comparison with the DRG CMI. The study population included 551 acute care hospitals in Taiwan and 2,462,006 inpatient reimbursement claims. The 18th version of GROUPER, the Medicare DRG classification software, was applied to Taiwan's 1998 National Health Insurance (NHI) inpatient claim data to derive the Medicare DRG CMI. The same weighting principles were then applied to determine the ICD principal-diagnosis- and procedure-based costliness and length of stay (LOS) CMIs. Further analyses were conducted based on stratifications according to teaching status, accreditation levels, and ownership categories. The best ICD-based substitute for the DRG costliness CMI (DRGCMI) is the ICD principal diagnosis costliness CMI (ICDCMI-DC) in general and in most categories, with Spearman's correlation coefficients ranging from 0.462 to 0.938. The highest correlation appeared in the non-profit sector. The ICD procedure costliness CMI (ICDCMI-PC) outperformed the ICDCMI-DC only at the medical center level, which consists of tertiary care hospitals and is more procedure intensive. The results of our study indicate that an ICD-based CMI can quite fairly approximate the DRGCMI, especially the ICDCMI-DC. Therefore, substituting ICDs for DRGs in computing the CMI ought to be feasible and valid in countries that have not implemented DRGs.
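
    The costliness CMI construction can be summarized in a few lines: weight each ICD principal-diagnosis group by its mean cost relative to the overall mean, then average those weights over each hospital's claims. A sketch with a handful of invented claims:

        # Case-mix index sketch: per-diagnosis costliness weight = group mean
        # cost / overall mean cost; a hospital's CMI = mean weight over its
        # claims. Claims below are illustrative only.
        import pandas as pd

        claims = pd.DataFrame({
            "hospital": ["A", "A", "A", "B", "B", "B"],
            "icd_dx":   ["I21", "J18", "I21", "J18", "K35", "K35"],
            "cost":     [9000, 3000, 8000, 2500, 4000, 4500],
        })

        weights = claims.groupby("icd_dx")["cost"].mean() / claims["cost"].mean()
        claims["weight"] = claims["icd_dx"].map(weights)
        print(claims.groupby("hospital")["weight"].mean())  # costliness CMI per hospital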

  11. Effectiveness of internet-based affect induction procedures: A systematic review and meta-analysis.

    PubMed

    Ferrer, Rebecca A; Grenen, Emily G; Taber, Jennifer M

    2015-12-01

    Procedures used to induce affect in a laboratory are effective and well-validated. Given recent methodological and technological advances in Internet research, it is important to determine whether affect can be effectively induced using Internet methodology. We conducted a meta-analysis and systematic review of prior research that has used Internet-based affect induction procedures, and examined potential moderators of the effectiveness of affect induction procedures. Twenty-six studies were included in final analyses, with 89 independent effect sizes. Affect induction procedures effectively induced general positive affect, general negative affect, fear, disgust, anger, sadness, and guilt, but did not significantly induce happiness. Contamination of other nontarget affect did not appear to be a major concern. Video inductions resulted in greater effect sizes. Overall, results indicate that affect can be effectively induced in Internet studies, suggesting an important venue for the acceleration of affective science. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
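
    Reviews of this kind pool per-study effect sizes with a random-effects model. A minimal sketch of DerSimonian-Laird pooling with invented effect sizes and variances (not the review's data):

        # DerSimonian-Laird random-effects pooling from per-study effect sizes
        # and variances. All values are illustrative.
        import numpy as np

        d = np.array([0.45, 0.60, 0.30, 0.52, 0.41])    # study effect sizes
        v = np.array([0.02, 0.03, 0.025, 0.015, 0.02])  # their variances

        w = 1 / v
        fixed_mean = np.sum(w * d) / w.sum()
        q = np.sum(w * (d - fixed_mean) ** 2)
        tau2 = max(0.0, (q - (len(d) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

        w_re = 1 / (v + tau2)                            # random-effects weights
        pooled = np.sum(w_re * d) / w_re.sum()
        se = 1 / np.sqrt(w_re.sum())
        print(f"pooled d = {pooled:.2f} "
              f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")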

  12. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johanna H Oxstrand; Katya L Le Blanc

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less studied application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking

  13. Validation of virtual-reality-based simulations for endoscopic sinus surgery.

    PubMed

    Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S

    2015-12-01

    Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. The responses were then used to assess face, content and construct validity. ANOVA with post hoc comparisons was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results showed a statistically significant difference (P < 0.05) between the consultant group and the others, while there was no significant difference between medical students/interns and RMOs. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.

  14. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    PubMed

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
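
    Scale reliability of the kind reported (Cronbach's alpha per competency scale) is straightforward to compute from an items-by-respondents matrix. A minimal sketch on random stand-in ratings, which should yield a low alpha precisely because random items do not covary:

        # Cronbach's alpha for one competency scale from an items-by-respondents
        # matrix. Random stand-in data, not the study's; real scale items would
        # correlate and produce a higher alpha.
        import numpy as np

        rng = np.random.default_rng(3)
        items = rng.integers(1, 6, size=(200, 5)).astype(float)  # 200 respondents x 5 items

        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        alpha = k / (k - 1) * (1 - item_vars / total_var)
        print(f"Cronbach's alpha = {alpha:.2f}")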

  15. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification

    PubMed Central

    Baczyńska, Anna K.; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed. PMID:27014111

  16. Development of simulation-based learning programme for improving adherence to time-out protocol on high-risk invasive procedures outside of operating room.

    PubMed

    Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon

    2017-06-01

    The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to determine the effects of applying the new programme on nurses' acceptance. The study used a predesign-and-postdesign simulation-based learning format to evaluate the effects of implementing this programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were analyzed using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated through the content validity index, and the reliability of the evaluators was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the high-risk invasive procedure time-out protocol (P<.01). Further, improvement was consistent regardless of clinical experience, workplace, or prior experience with simulation-based learning. The face validity of the programme scored over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.

  17. Acceptability of Flight Deck-Based Interval Management Crew Procedures

    NASA Technical Reports Server (NTRS)

    Murdock, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.

    2013-01-01

    The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Next Generation Air Transportation System (NextGen) Airspace Systems Program's Air Traffic Management Technology Demonstration - 1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using NextGen surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IM-NOVA experiment was to assess whether procedures outlined by the ATD-1 Concept of Operations, when used with a minimum set of Flight deck-based Interval Management (FIM) equipment and a prototype crew interface, were acceptable to and feasible for use by flight crews in a voice communications environment. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft automatic dependent surveillance broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated in LaRC's Air Traffic Operations Laboratory. Data were collected from 10 crews of current, qualified 757/767 pilots asked to fly a high-fidelity, fixed-base simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into DFW, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final, and Tower

  18. Validation of Antibody-Based Strategies for Diagnosis of Pediatric Celiac Disease Without Biopsy.

    PubMed

    Wolf, Johannes; Petroff, David; Richter, Thomas; Auth, Marcus K H; Uhlig, Holm H; Laass, Martin W; Lauenstein, Peter; Krahl, Andreas; Händel, Norman; de Laffolie, Jan; Hauer, Almuthe C; Kehler, Thomas; Flemming, Gunter; Schmidt, Frank; Rodrigues, Astor; Hasenclever, Dirk; Mothes, Thomas

    2017-08-01

    A diagnosis of celiac disease is made based on clinical, genetic, serologic, and duodenal morphology features. Recent pediatric guidelines, based largely on retrospective data, propose omitting biopsy analysis for patients with concentrations of IgA against tissue transglutaminase (IgA-TTG) >10-fold the upper limit of normal (ULN), provided further criteria are met. A retrospective study concluded that measurements of IgA-TTG and total IgA, or IgA-TTG and IgG against deamidated gliadin (IgG-DGL) could identify patients with and without celiac disease. Patients were assigned to categories of no celiac disease, celiac disease, or biopsy required, based entirely on antibody assays. We aimed to validate the positive and negative predictive values (PPV and NPV) of these diagnostic procedures. We performed a prospective study of 898 children undergoing duodenal biopsy analysis to confirm or rule out celiac disease at 13 centers in Europe. We compared findings from serologic analysis with findings from biopsy analyses, follow-up data, and diagnoses made by the pediatric gastroenterologists (celiac disease, no celiac disease, or no final diagnosis). Assays to measure IgA-TTG, IgG-DGL, and endomysium antibodies were performed by blinded researchers, and tissue sections were analyzed by local and blinded reference pathologists. We validated 2 procedures for diagnosis: total-IgA and IgA-TTG (the TTG-IgA procedure), as well as IgG-DGL with IgA-TTG (the TTG-DGL procedure). Patients were assigned to categories of no celiac disease if all assays found antibody concentrations <1-fold the ULN, or celiac disease if at least 1 assay measured antibody concentrations >10-fold the ULN. All other cases were considered to require biopsy analysis. ULN values were calculated using the cutoff levels suggested by the test kit manufacturers. HLA typing was performed for 449 participants. We used models that considered how specificity values change with prevalence to extrapolate the PPV and NPV to
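
    The antibody-only triage rule described is easy to state in code. A minimal sketch of the TTG-DGL variant, expressing antibody levels as multiples of the ULN; the guideline's additional criteria (e.g., IgA sufficiency) are deliberately omitted:

        # Triage rule as described: no celiac disease if every assay is below
        # 1x ULN, celiac disease if any assay exceeds 10x ULN, biopsy otherwise.
        # (Other guideline criteria are omitted for brevity.)
        def ttg_dgl_triage(iga_ttg_xuln, igg_dgl_xuln):
            values = (iga_ttg_xuln, igg_dgl_xuln)  # levels as multiples of ULN
            if all(v < 1 for v in values):
                return "no celiac disease"
            if any(v > 10 for v in values):
                return "celiac disease"
            return "biopsy required"

        print(ttg_dgl_triage(0.4, 0.7))    # -> no celiac disease
        print(ttg_dgl_triage(14.0, 3.0))   # -> celiac disease
        print(ttg_dgl_triage(2.5, 0.8))    # -> biopsy required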

  19. Reimbursement for office-based gynecologic procedures.

    PubMed

    Pritzker, Jordan

    2013-12-01

    Reimbursement for office-based gynecologic procedures varies with the contractual obligations that the physician has with the payers involved with the care of the particular patient. The payers may be patients without health insurance coverage (self-pay) or patients with third-party health insurance coverage, such as an employer-based commercial insurance carrier or a government program (eg, Medicare [federal] or Medicaid [state based]). This article discusses the reimbursement for office-based gynecologic procedures by third-party payers. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Quality control of colonoscopy procedures: a prospective validated method for the evaluation of professional practices applicable to all endoscopic units.

    PubMed

    Coriat, R; Pommaret, E; Chryssostalis, A; Viennot, S; Gaudric, M; Brezault, C; Lamarque, D; Roche, H; Verdier, D; Parlier, D; Prat, F; Chaussade, S

    2009-02-01

    To produce valid information, an evaluation of professional practices has to assess the quality of all practices before, during and after the procedure under study. Several auditing techniques have been proposed for colonoscopy. The purpose of this work is to describe a straightforward original validated method for the prospective evaluation of professional practices in the field of colonoscopy applicable in all endoscopy units without increasing the staff work load. Pertinent quality-control criteria (14 items) were identified by the endoscopists at the Cochin Hospital and were compatible with: findings in the available literature; guidelines proposed by the Superior Health Authority; and application in any endoscopy unit. Prospective routine data were collected and the methodology validated by evaluating 50 colonoscopies every quarter for one year. The relevance of the criteria was assessed using data collected during four separate periods. The standard checklist was complete for 57% of the colonoscopy procedures. The colonoscopy procedure was appropriate according to national guidelines in 94% of cases. These observations were particularly noteworthy: the quality of the colonic preparation was insufficient for 9% of the procedures; complete colonoscopy was achieved for 93% of patients; and 0.38 adenomas and 0.045 carcinomas were identified per colonoscopy. This simple and reproducible method can be used for valid quality-control audits in all endoscopy units. In France, unit-wide application of this method enables endoscopists to validate 100 of the 250 points required for continuous medical training. This is a quality-control tool that can be applied annually, using a random month to evaluate any changes in routine practices.

  1. IFCC approved HPLC reference measurement procedure for the alcohol consumption biomarker carbohydrate-deficient transferrin (CDT): Its validation and use.

    PubMed

    Schellenberg, François; Wielders, Jos; Anton, Raymond; Bianchi, Vincenza; Deenmamode, Jean; Weykamp, Cas; Whitfield, John; Jeppsson, Jan-Olof; Helander, Anders

    2017-02-01

    Carbohydrate-deficient transferrin (CDT) is used as a biomarker of sustained high alcohol consumption. The currently available measurement procedures for CDT are based on various analytical techniques (HPLC, capillary electrophoresis, nephelometry), some differing in the definition of the analyte and using different reference intervals and cut-off values. The Working Group on Standardization of CDT (WG-CDT), initiated by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), has validated an HPLC candidate reference measurement procedure (cRMP) for CDT (% disialotransferrin to total transferrin, based on peak areas), demonstrating that it is suitable as a reference measurement procedure (RMP) for CDT. Presented is a detailed description of the cRMP and its calibration. Practical aspects of how to treat genetic variant and so-called di-tri bridge samples are described. Results of method performance characteristics, as demanded by ISO 15189 and ISO 15193, are given, as well as the reference interval and measurement uncertainty and how to deal with these in routine use. The correlation of the cRMP with commercial CDT procedures and the performance of the cRMP in a network of laboratories are also presented. The performance of the CDT cRMP in combination with previously developed commutable calibrators allows for standardization of the currently available commercial measurement procedures for CDT. The cRMP has recently been approved by the IFCC and will from now on be known as the IFCC-RMP for CDT, while CDT results standardized according to this RMP should be indicated as CDT(IFCC). Copyright © 2016 Elsevier B.V. All rights reserved.
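
    The analyte definition used by the RMP reduces to a ratio of chromatographic peak areas. A minimal sketch with hypothetical transferrin-glycoform peak areas:

        # CDT as defined for the RMP: disialotransferrin peak area as a
        # percentage of total transferrin peak area. Areas are hypothetical
        # HPLC integration results, not the paper's data.
        peak_areas = {
            "asialo": 0.0, "disialo": 2.1, "trisialo": 4.9,
            "tetrasialo": 110.0, "pentasialo": 14.0,
        }
        cdt_percent = 100 * peak_areas["disialo"] / sum(peak_areas.values())
        print(f"%CDT (disialotransferrin) = {cdt_percent:.2f}%")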

  2. Evaluation of three different validation procedures regarding the accuracy of template-guided implant placement: an in vitro study.

    PubMed

    Vasak, Christoph; Strbac, Georg D; Huber, Christian D; Lettner, Stefan; Gahleitner, André; Zechner, Werner

    2015-02-01

    The study aims to evaluate the accuracy of the NobelGuide™ (Medicim/Nobel Biocare, Göteborg, Sweden) concept while minimizing the influence of clinical and surgical parameters, and to compare and validate two validation procedures against a reference method. Overall, 60 implants were placed in 10 artificial edentulous mandibles according to the NobelGuide™ protocol. For merging the pre- and postoperative DICOM data sets, three different fusion methods (Triple Scan Technique, NobelGuide™ Validation software, and AMIRA® software [VSG - Visualization Sciences Group, Burlington, MA, USA] as reference) were applied. Discrepancies between the virtual and the actual implant positions were measured. The mean deviations measured with AMIRA® were 0.49 mm (implant shoulder), 0.69 mm (implant apex), and 1.98° (implant axis). The Triple Scan Technique as well as the NobelGuide™ Validation software revealed similar deviations compared with the reference method. A significant correlation between angular and apical deviations was seen (r = 0.53; p < .001). A greater implant diameter was associated with greater deviations (p = .03). The Triple Scan Technique, as a system-independent validation procedure, as well as the NobelGuide™ Validation software are in accordance with the AMIRA® software. The NobelGuide™ system showed similar or smaller spatial and angular deviations compared with other systems. © 2013 Wiley Periodicals, Inc.
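
    The deviation metrics reported above can be illustrated with a short sketch. Under the usual convention (shoulder and apex deviations as Euclidean distances, axis deviation as the angle between the planned and actual implant axes), and with entirely hypothetical coordinates:

```python
import numpy as np

# Sketch (under assumed conventions) of the deviation metrics reported for
# template-guided implant placement: Euclidean distance at shoulder and apex,
# and the angle between planned and actual implant axes.

def implant_deviations(plan_shoulder, plan_apex, act_shoulder, act_apex):
    plan_shoulder, plan_apex = np.asarray(plan_shoulder), np.asarray(plan_apex)
    act_shoulder, act_apex = np.asarray(act_shoulder), np.asarray(act_apex)
    d_shoulder = np.linalg.norm(act_shoulder - plan_shoulder)   # mm
    d_apex = np.linalg.norm(act_apex - plan_apex)               # mm
    axis_plan = plan_apex - plan_shoulder
    axis_act = act_apex - act_shoulder
    cos_a = np.dot(axis_plan, axis_act) / (
        np.linalg.norm(axis_plan) * np.linalg.norm(axis_act))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))    # degrees
    return d_shoulder, d_apex, angle

# Hypothetical coordinates (mm) from merged pre-/postoperative DICOM sets.
print(implant_deviations([0, 0, 0], [0, 0, -10],
                         [0.3, 0.2, 0.1], [0.5, 0.4, -9.8]))
```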

  3. Reliability and Validity of a Procedure to Measure Diagnostic Reasoning and Problem-Solving Skills Taught in Predoctoral Orthodontic Education.

    ERIC Educational Resources Information Center

    Albanese, Mark A.; Jacobs, Richard M.

    1990-01-01

    The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second-year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dental education…

  4. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    PubMed

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
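
    A minimal sketch of the modelling strategy described above, using synthetic data in place of the Central Cardiac Audit Database: fit a logistic regression for 30-day mortality on a 70% development split and report discrimination on the 30% validation split as the C-index, which for a binary outcome equals the area under the ROC curve. The feature content and effect sizes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))            # stand-ins for procedure/diagnosis
                                       # risk factors, age, weight, etc.
logit = -3.5 + X @ np.array([0.8, 0.5, 0.3, 0.2])
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # ~3% event rate

X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.30, random_state=0)        # 70% development / 30% validation

model = LogisticRegression().fit(X_dev, y_dev)
c_index = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"C-index on validation set: {c_index:.2f}")
```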

  5. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment (2010 edition). …AND USE PROHIBITIONS; Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4); § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to…

  6. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40, Protection of Environment (2011 edition). …AND USE PROHIBITIONS; Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4); § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to…

  7. Assessing the Reliability and Validity of Multi-Attribute Utility Procedures: An Application of the Theory of Generalizability

    DTIC Science & Technology

    1975-07-01

    AD-A016 282. Assessing the Reliability and Validity of Multi-Attribute Utility Procedures: An… more complicated and use data from actual experiments. Example 1: Analysis of raters making importance judgments about attributes. In MAU studies… generalizability of JUDGE as contrasted to ÜASC. To do this, we will reanalyze the data for each system separately. This is valid since the initial…

  8. Development and validation of procedures for assessment of competency of non-pharmacists in extemporaneous dispensing.

    PubMed

    Donnelly, Ryan F; McNally, Martin J; Barry, Johanne G

    2009-02-01

    To develop and validate procedures that may be suitable for assessment of the competency of two groups of non-pharmacist staff (pharmacy students and trainee support staff) in extemporaneous dispensing. This is important given the prospect of remote supervision of community pharmacies in the UK. Analytical methods were validated according to International Conference on Harmonisation specifications, and procedures were optimized to allow efficient drug extraction. This permitted straightforward determination of drug content in extemporaneously prepared lidocaine hydrochloride mouthwashes and norfloxacin creams and suspensions prepared by 10 participants recruited to represent the two groups of non-pharmacist staff. All 10 participants completed the extemporaneous dispensing of all three products within 90 min. Extraction and analysis took approximately 15 min for each lidocaine hydrochloride mouthwash and 30 min for each diluted norfloxacin cream and norfloxacin suspension. The mean drug concentrations in lidocaine hydrochloride mouthwashes and diluted norfloxacin creams were within what are generally accepted as pharmaceutically acceptable limits for drug content (100 ± 5%) for both groups of participants. There was no significant difference in the mean drug concentration of norfloxacin suspensions prepared by the participant groups. However, it was notable that only one participant prepared a suspension containing a norfloxacin concentration that was within pharmaceutically acceptable limits (101.51%). A laboratory possessing suitable equipment and appropriately trained staff could cope readily with the large number of products prepared, for example, by a cohort of pre-registration students. Consequently, the validated procedures developed here could usefully be incorporated into the pre-registration examination for pharmacy students and a final qualifying examination for dispensers and pharmacy technicians. We believe that this is essential if the public…

  9. ProMIS augmented reality training of laparoscopic procedures face validity.

    PubMed

    Botden, Sanne M B I; Buzink, Sonja N; Schijven, Marlies P; Jakimowicz, Jack J

    2008-01-01

    Conventional video trainers lack the ability to assess the trainee objectively, but offer modalities that are often missing in virtual reality simulation, such as realistic haptic feedback. The ProMIS augmented reality laparoscopic simulator retains the benefit of a traditional box trainer, by using original laparoscopic instruments and tactile tasks, but additionally generates objective measures of performance. Fifty-five participants performed a "basic skills" and "suturing and knot-tying" task on ProMIS, after which they filled out a questionnaire regarding realism, haptics, and didactic value of the simulator, on a 5-point-Likert scale. The participants were allotted to 2 experience groups: "experienced" (>50 procedures and >5 sutures; N = 27), and "moderately experienced" (<50 procedures and <5 sutures; N = 28). General consensus among all participants, particularly the experienced, was that ProMIS is a useful tool for training (mean: 4.67, SD: 0.48). It was considered very realistic (mean: 4.44, SD: 0.66), with good haptics (mean: 4.10, SD: 0.97) and didactic value (mean 4.10, SD: 0.65). This study established the face validity of the ProMIS augmented reality simulator for "basic skills" and "suturing and knot-tying" tasks. ProMIS was considered a good tool for training in laparoscopic skills for surgical residents and surgeons.

  10. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule need not be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
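
    For readers unfamiliar with graph-based tests, the sketch below implements the standard sequentially rejective weight-update rules (in the style of the graphical approach the abstract builds on) for a toy four-hypothesis trial; the weights, p-values and transition matrix are illustrative, and the adaptive extension described in the paper is not reproduced here.

```python
import numpy as np

# Sketch of a sequentially rejective graph-based test: hypotheses carry local
# alpha weights, and when a hypothesis is rejected its weight is redistributed
# along the graph's edges, with the edges themselves updated.

def graph_test(p, w, G, alpha=0.025):
    """p: raw p-values; w: initial weights (sum <= 1);
    G[i][j]: fraction of H_i's weight passed to H_j on rejection."""
    p, w, G = np.asarray(p, float), np.asarray(w, float), np.asarray(G, float)
    active, rejected = list(range(len(p))), []
    while True:
        # find an active hypothesis rejectable at its local level alpha * w_i
        cand = [i for i in active if w[i] > 0 and p[i] <= alpha * w[i]]
        if not cand:
            return sorted(rejected)
        i = cand[0]
        active.remove(i)
        rejected.append(i)
        for j in active:                       # redistribute H_i's weight
            w[j] += w[i] * G[i, j]
        for j in active:                       # standard edge-update rule
            for k in active:
                if j == k:
                    continue
                denom = 1.0 - G[j, i] * G[i, j]
                G[j, k] = ((G[j, k] + G[j, i] * G[i, k]) / denom
                           if denom > 0 else 0.0)
        w[i] = 0.0

# Two primary (H0, H1) and two secondary (H2, H3) hypotheses, hierarchical.
p = [0.010, 0.030, 0.004, 0.200]
w = [0.5, 0.5, 0.0, 0.0]
G = [[0, 0, 1, 0],   # H0 passes its weight to H2
     [0, 0, 0, 1],   # H1 passes its weight to H3
     [0, 1, 0, 0],   # H2 passes back to H1
     [1, 0, 0, 0]]   # H3 passes back to H0
print(graph_test(p, w, G))   # rejects H0 and H2 at these p-values
```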

  11. Network-Based Management Procedures.

    ERIC Educational Resources Information Center

    Buckner, Allen L.

    Network-based management procedures serve as valuable aids in organizational management, achievement of objectives, problem solving, and decisionmaking. Network techniques especially applicable to educational management systems are the program evaluation and review technique (PERT) and the critical path method (CPM). Other network charting…

  12. Validating Vegetable Production Unit (VPU) Plants, Protocols, Procedures and Requirements (P3R) using Currently Existing Flight Resources

    NASA Technical Reports Server (NTRS)

    Bingham, Gail; Bates, Scott; Bugbee, Bruce; Garland, Jay; Podolski, Igor; Levinskikh, Rita; Sychev, Vladimir; Gushin, Vadim

    2009-01-01

    Validating Vegetable Production Unit (VPU) Plants, Protocols, Procedures and Requirements (P3R) Using Currently Existing Flight Resources (Lada-VPU-P3R) is a study to advance the technology required for plant growth in microgravity and to research related food safety issues. Lada-VPU-P3R also investigates the non-nutritional value to the flight crew of developing plants on-orbit. Lada-VPU-P3R uses the Lada hardware on the ISS and falls under a cooperative agreement between the National Aeronautics and Space Administration (NASA) and the Russian Federal Space Agency (FSA). Research Summary: Validating Vegetable Production Unit (VPU) Plants, Protocols, Procedures and Requirements (P3R) Using Currently Existing Flight Resources (Lada-VPU-P3R) will optimize hardware and…

  13. A hypothesis-driven physical examination learning and assessment procedure for medical students: initial validity evidence.

    PubMed

    Yudkowsky, Rachel; Otaki, Junji; Lowenstein, Tali; Riddle, Janet; Nishigori, Hiroshi; Bordage, Georges

    2009-08-01

    Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis-free screening PE learned by many medical students. A hypothesis-driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges. This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure. Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination. On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only. The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.

  14. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
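
    A toy sketch of the Bayesian adaptive loop described above: a grid posterior is maintained over the parameters of an exponential decay model of p(correct | cue delay), and each trial presents the delay that minimizes expected posterior entropy (equivalently, maximizes expected information gain). The parameterization, grids and numbers are our assumptions, not the authors' exact model.

```python
import numpy as np

# Decay model assumed here: p(correct | delay) = a0 + a1 * exp(-delay / tau).
delays = np.linspace(0.0, 1.0, 9)                # candidate cue delays (s)
a0s = np.linspace(0.2, 0.5, 8)                   # asymptote grid
a1s = np.linspace(0.2, 0.6, 8)                   # amplitude grid
taus = np.linspace(0.05, 0.6, 8)                 # decay constant grid (s)
A0, A1, TAU = np.meshgrid(a0s, a1s, taus, indexing="ij")
posterior = np.ones(A0.shape)
posterior /= posterior.sum()                     # flat prior

def p_correct(delay):
    return np.clip(A0 + A1 * np.exp(-delay / TAU), 1e-6, 1 - 1e-6)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_delay():
    """Pick the delay minimizing expected posterior entropy."""
    best, best_h = None, np.inf
    for d in delays:
        pc = p_correct(d)
        p_corr = (posterior * pc).sum()          # predictive P(correct)
        post1 = posterior * pc / p_corr
        post0 = posterior * (1 - pc) / (1 - p_corr)
        h = p_corr * entropy(post1) + (1 - p_corr) * entropy(post0)
        if h < best_h:
            best, best_h = d, h
    return best

def update(delay, correct):
    global posterior
    like = p_correct(delay) if correct else 1 - p_correct(delay)
    posterior = posterior * like
    posterior /= posterior.sum()

# Simulate 100 trials against a "true" observer (a0=0.3, a1=0.5, tau=0.2).
rng = np.random.default_rng(1)
for _ in range(100):
    d = next_delay()
    p_true = 0.3 + 0.5 * np.exp(-d / 0.2)
    update(d, rng.random() < p_true)

est = [float((posterior * P).sum()) for P in (A0, A1, TAU)]
print("posterior means (a0, a1, tau):", np.round(est, 3))
```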

  15. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045

  16. Bootstrap-based procedures for inference in nonparametric receiver-operating characteristic curve regression analysis.

    PubMed

    Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G

    2018-03-01

    Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
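
    The paper's inferential machinery is provided in the npROCRegression R package; the Python sketch below is not that package's API and only illustrates the underlying bootstrap logic, here as a percentile-bootstrap comparison of empirical AUCs between two covariate strata on synthetic data.

```python
import numpy as np

def auc(neg, pos):
    """Empirical AUC: P(test value in diseased > value in non-diseased)."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_diff(neg_a, pos_a, neg_b, pos_b, B=2000, seed=0):
    """Observed AUC difference between strata plus a percentile interval."""
    rng = np.random.default_rng(seed)
    observed = auc(neg_a, pos_a) - auc(neg_b, pos_b)
    diffs = np.empty(B)
    for b in range(B):   # resample within each stratum, with replacement
        diffs[b] = (auc(rng.choice(neg_a, len(neg_a)), rng.choice(pos_a, len(pos_a)))
                    - auc(rng.choice(neg_b, len(neg_b)), rng.choice(pos_b, len(pos_b))))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return observed, (lo, hi)

rng = np.random.default_rng(42)
# Stratum A: better separation; stratum B: weaker separation (synthetic data).
obs, ci = bootstrap_auc_diff(rng.normal(0, 1, 80), rng.normal(1.5, 1, 80),
                             rng.normal(0, 1, 80), rng.normal(0.5, 1, 80))
print(f"AUC difference = {obs:.2f}, 95% bootstrap CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```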

  17. Validity evidence based on test content.

    PubMed

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  18. Validity of temporomandibular disorder examination procedures for assessment of temporomandibular joint status.

    PubMed

    Schmitter, Marc; Kress, Bodo; Leckel, Michael; Henschel, Volkmar; Ohlmann, Brigitte; Rammelsberg, Peter

    2008-06-01

    This hypothesis-generating study was performed to determine which items in the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) and additional diagnostic tests have the best predictive accuracy for joint-related diagnoses. One hundred forty-nine TMD patients and 43 symptom-free subjects were examined in clinical examinations and with magnetic resonance imaging (MRI). The importance of each variable of the clinical examination for correct joint-related diagnosis was assessed by using MRI diagnoses. For this purpose, "random forest" statistical software (based on classification trees) was used. Maximum unassisted jaw opening, maximum assisted jaw opening, history of locked jaw, joint sound with and without compression, joint pain, facial pain, pain on palpation of the lateral pterygoid area, and overjet proved suitable for distinguishing between subtypes of joint-related TMD. Measurement of excursion, protrusion, and midline deviation were less important. The validity of clinical TMD examination procedures can be enhanced by using the 16 variables of greatest importance identified in this study. In addition to other variables, maximum unassisted and assisted opening and a history of locked jaw were important when assessing the status of the TMJ.
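
    A compact sketch of the analysis strategy described (random forest variable importance for a joint-related diagnosis), using scikit-learn and synthetic data; the feature names and the data-generating process are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 192   # comparable to 149 patients + 43 symptom-free subjects
features = ["max_unassisted_opening", "max_assisted_opening",
            "history_locked_jaw", "joint_sound", "joint_pain",
            "midline_deviation"]
X = rng.normal(size=(n, len(features)))
# Synthetic MRI-based diagnosis driven mainly by the first three items.
y = (0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.8 * X[:, 2]
     + rng.normal(scale=0.5, size=n)) > 0

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, forest.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:25s} {imp:.3f}")   # importance ranking of exam items
```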

  19. Validation of an updated fractionation and indirect speciation procedure for inorganic arsenic in oxic and suboxic soils and sediments.

    PubMed

    Lock, Alan; Wallschläger, Dirk; McMurdo, Colin; Tyler, Laura; Belzile, Nelson; Spiers, Graeme

    2016-12-01

    A sequential extraction procedure (SEP) for the speciation analysis of As(III) and As(V) in oxic and suboxic soils and sediments was validated using a natural lake sediment and three certified reference materials, as well as spike recoveries of As(III) and As(V). Many of the extraction steps have been previously validated, making the procedure useful for comparisons with similar previous SEP studies. The novel aspect of this research is the validation of the SEP's ability to maintain As(III) and As(V) species. The proposed five-step extraction procedure uses the extraction agents (NH₄)₂SO₄, NH₄H₂PO₄, H₃PO₄ + NH₂OH·HCl, oxalate + ascorbic acid (heated), and HNO₃ + HCl + HF, targeting operationally defined easily exchangeable, strongly sorbed, amorphous Fe oxide bound, crystalline Fe oxide bound, and residual As fractions, respectively. The third extraction step, H₃PO₄ + NH₂OH·HCl, has not been previously validated for fraction selectivity. We present evidence for this extraction step to target As complexed with amorphous Fe oxides when used in the SEP proposed here. All solutions were analyzed by ICP-MS. The greatest concentrations of As were extracted from the amorphous Fe oxide fraction, and the dominant species was As(V). Lake sediment materials were found to have higher As(III) concentrations than the soil materials. Because different soils/sediments have different chemical characteristics, maintenance of As species during extractions must be validated for specific soil/sediment types using spiking experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Behavioral Assessment of Hearing in 2 to 4 Year-old Children: A Two-interval, Observer-based Procedure Using Conditioned Play-based Responses.

    PubMed

    Bonino, Angela Yarnell; Leibold, Lori J

    2017-01-23

    Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based, motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2 to 4 year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
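
    The 71%-correct target reported above is the convergence point of a two-down/one-up adaptive track, so a toy simulation of such a track against an assumed psychometric function gives a feel for the procedure; it is not the authors' protocol, and all numbers below are invented.

```python
import numpy as np

# 2-down/1-up rule: two consecutive correct interval judgments lower the
# signal level, one incorrect judgment raises it; the track converges near
# the 70.7%-correct point of the psychometric function.
rng = np.random.default_rng(3)

def p_correct(level, threshold=30.0, slope=0.2):
    """Assumed psychometric function with a 0.5 guessing floor (2AFC)."""
    return 0.5 + 0.5 / (1 + np.exp(-slope * (level - threshold)))

level, step = 50.0, 4.0            # starting signal level and step size (dB)
correct_run, prev_dir, reversals = 0, 0, []
while len(reversals) < 8:
    if rng.random() < p_correct(level):     # observer judged the right interval
        correct_run += 1
        if correct_run < 2:
            continue                        # need 2 in a row to step down
        correct_run, direction = 0, -1
    else:                                   # one wrong answer steps up
        correct_run, direction = 0, +1
    if prev_dir and direction != prev_dir:  # direction change = reversal
        reversals.append(level)
    level += direction * step
    prev_dir = direction

print(f"threshold estimate (mean of last 6 reversals): "
      f"{np.mean(reversals[-6:]):.1f} dB")
```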

  1. Evaluation of Operational Procedures for Using a Time-Based Airborne Inter-arrival Spacing Tool

    NASA Technical Reports Server (NTRS)

    Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Abbott, Terence S.; Eischeid, Todd M.

    2002-01-01

    An airborne tool has been developed based on the concept of an aircraft maintaining a time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) aircraft state data to compute a speed command for the ATAAS-equipped aircraft to obtain a required time interval behind another aircraft. The tool and candidate operational procedures were tested in a high-fidelity, full mission simulator with active airline subject pilots flying an arrival scenario using three different modes for speed control. The objectives of this study were to validate the results of a prior Monte Carlo analysis of the ATAAS algorithm and to evaluate the concept from the standpoint of pilot acceptability and workload. Results showed that the aircraft was able to consistently achieve the target spacing interval within one second (the equivalent of approximately 220 ft at a final approach speed of 130 kt) when the ATAAS speed guidance was autothrottle-coupled, and a slightly greater (4-5 seconds), but consistent interval with the pilot-controlled speed modes. The subject pilots generally rated the workload level with the ATAAS procedure as similar to that with standard procedures, and also rated most aspects of the procedure high in terms of acceptability. Although pilots indicated that the head-down time was higher with ATAAS, the acceptability of head-down time was rated high. Oculometer data indicated slight changes in instrument scan patterns, but no significant change in the amount of time spent looking out the window between the ATAAS procedure versus standard procedures.

  2. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  3. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  4. Office-based procedures for diagnosis and treatment of esophageal pathology.

    PubMed

    Wellenstein, David J; Schutte, Henrieke W; Marres, Henri A M; Honings, Jimmie; Belafsky, Peter C; Postma, Gregory N; Takes, Robert P; van den Broek, Guido B

    2017-09-01

    Diagnostic and therapeutic office-based procedures under topical anesthesia are emerging in the daily practice of laryngologists and head and neck surgeons. Since the introduction of the transnasal esophagoscope, office-based procedures for the esophagus are increasingly performed. We conducted a systematic review of literature on office-based procedures under topical anesthesia for the esophagus. Transnasal esophagoscopy is an extensively investigated office-based procedure. This procedure shows better patient tolerability and equivalent accuracy compared to conventional transoral esophagoscopy, as well as time and cost savings. Secondary tracheoesophageal puncture, esophageal dilatation, esophageal sphincter injection, and foreign body removal are less investigated, but show promising results. With the introduction of the transnasal esophagoscope, an increasing number of diagnostic and therapeutic office-based procedures for the esophagus are possible, with multiple advantages. Further investigation must prove the clinical feasibility and effectiveness of the therapeutic office-based procedures. © 2017 Wiley Periodicals, Inc.

  5. Retrospective validation of renewal-based, medium-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2013-10-01

    In this paper, some methods for scoring the performance of an earthquake forecasting probability model are applied retrospectively for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy in decades of the past century. One aim was to check the capability of the model to reproduce the data by which the model was calibrated. The scoring procedures used can be distinguished on the basis of the requirement (or absence) of a reference model and of probability thresholds. Overall, a rank-based score, information gain, gambling scores, indices used in binary predictions and their loss functions are considered. The definition of various probability thresholds as percentages of the hazard functions allows proposals of the values associated with the best forecasting performance as alarm levels in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints about which factor is most influential and about the suitability of adopting the consequent changes to the data sets. This is achieved by repeating the estimation procedure for the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, completeness appears to be the most influential factor, while there are no clear indications of the usefulness of the decomposition of some composite sources, although in some cases it has led to improvements in the forecasts.

  6. Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

    NASA Astrophysics Data System (ADS)

    Ivankovic, D.; Dadic, V.

    2009-04-01

    Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are imported from various files. All these parameters require visualization, validation and manipulation from the research vessel or scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. Both the mapping tool and the graph are georeferenced: a click on part of a graph automatically initiates a zoom or places a marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows the data definition to be separated from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming a new interface for this kind of data.

  7. A validity generalization procedure to test relations between intrinsic and extrinsic motivation and influence tactics.

    PubMed

    Barbuto, John E; Moss, Jennifer A

    2006-08-01

    The relations of intrinsic and extrinsic motivation to the use of consultative, legitimating, and pressure influence tactics were examined using validity generalization procedures. Five to seven field studies, with cumulative samples exceeding 800, were used to test each relationship. Significant relations were found between agents' intrinsic motivation and their use of consultative influence tactics, and between agents' extrinsic motivation and their use of legitimating influence tactics.

  8. Graphene Oxide Annealing Procedures for Graphene-Based Supercapacitors

    DTIC Science & Technology

    2015-09-01

    Graphene Oxide Annealing Procedures for Graphene-Based Supercapacitors, by Louis B Levine and Matthew H Ervin, Sensors and Electron Devices Directorate, ARL…

  9. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of this new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values that minimize the objective function. The objective function in this case combines two errors: the distance error between two markers placed on a wand, and the position and orientation errors of the retroreflective markers of a static calibration object. The real coordinates of the two objects are calibrated on a coordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
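
    The second calibration stage, simultaneous nonlinear refinement of all parameters against reference geometry, can be illustrated on a deliberately simplified model: a 2-D "camera" with rotation, translation and scale, refined by least squares from a stage-one style initial guess. The real procedure optimizes full intrinsic/extrinsic camera models, but the optimization pattern is the same.

```python
import numpy as np
from scipy.optimize import least_squares

# Control points whose true coordinates are known (CMM-style calibration).
true_points = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])

def project(params, pts):
    """Toy observation model: rotation theta, translation (tx, ty), scale s."""
    theta, tx, ty, s = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * pts @ R.T + np.array([tx, ty])

# Synthetic "measurements" generated from true parameters plus noise.
true_params = np.array([0.1, 0.5, -0.2, 1.05])
rng = np.random.default_rng(0)
measured = project(true_params, true_points) + rng.normal(0, 1e-3, (5, 2))

def residuals(params):
    # Residuals between modelled and measured marker positions; a real
    # objective would also include the wand inter-marker distance error.
    return (project(params, true_points) - measured).ravel()

initial = np.array([0.0, 0.0, 0.0, 1.0])   # stage-one style initial estimate
fit = least_squares(residuals, initial)
print("refined parameters:", np.round(fit.x, 4))
```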

  10. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface-based metrics, and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer, and the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and the corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
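
    The validation metric described, mean closest-point distance between iUS-derived feature contours and the preoperative model, is straightforward to compute with a k-d tree; the point sets below are synthetic stand-ins for real contour and surface data.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_closest_point_distance(contour_pts, model_pts):
    """Average distance from each contour point to its nearest model point."""
    tree = cKDTree(model_pts)
    d, _ = tree.query(contour_pts)
    return d.mean()

rng = np.random.default_rng(0)
model = rng.uniform(-50, 50, size=(5000, 3))        # mm, model surface points
# Contour points: model points perturbed to mimic registration error.
contour = model[rng.choice(5000, 200)] + rng.normal(0, 2, (200, 3))
print(f"mean closest point distance: "
      f"{mean_closest_point_distance(contour, model):.2f} mm")
```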

  11. Reliability and Validity of Curriculum-Based Informal Reading Inventories.

    ERIC Educational Resources Information Center

    Fuchs, Lynn; And Others

    A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…

  12. Guideline for obtaining valid consent for gastrointestinal endoscopy procedures.

    PubMed

    Everett, Simon M; Griffiths, Helen; Nandasoma, U; Ayres, Katie; Bell, Graham; Cohen, Mike; Thomas-Gibson, Siwan; Thomson, Mike; Naylor, Kevin M T

    2016-10-01

    Much has changed since the last guideline of 2008, both in endoscopy and in the practice of obtaining informed consent, and it is vital that all endoscopists who are responsible for performing invasive and increasingly risky procedures are aware of the requirements for obtaining valid consent. This guideline is restricted to GI endoscopy, but we cover elective and acute or emergency procedures. Few clinical trials have been carried out in relation to informed consent, but most areas are informed by guidance from the General Medical Council (GMC) and/or are enshrined in legislation. Following an iterative voting process, a series of recommendations has been drawn up that covers the majority of situations that will be encountered by endoscopists. This is not exhaustive, and where doubt exists we have described where legal advice is likely to be required. This document relates to the law and endoscopy practice in the UK; where there is variation between the four devolved countries this is pointed out, and endoscopists must be aware of the law where they practice. The recommendations are divided into consent for patients with and without capacity, and we provide sections on provision of information and the consent process for patients in a variety of situations. This guideline is intended for use by all practitioners who request or perform GI endoscopy, or are involved in the pathway of such patients. If followed, we hope this document will enhance the experience of patients attending for endoscopy in UK units. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model degrades quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
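
    A minimal sketch of the selection logic as we read it (not the authors' published RStudio script): an F-test of replicate variances at the quantification limits decides whether weighting is needed, and a partial F-test of quadratic against linear fits chooses the order; the weighted-variance-spread step for choosing between 1/x and 1/x² is omitted for brevity.

```python
import numpy as np
from scipy import stats

def needs_weighting(y_lloq, y_uloq, alpha=0.05):
    """F-test: is replicate variance at the ULOQ larger than at the LLOQ?"""
    s2_lo, s2_hi = np.var(y_lloq, ddof=1), np.var(y_uloq, ddof=1)
    F = s2_hi / s2_lo
    p = 1 - stats.f.cdf(F, len(y_uloq) - 1, len(y_lloq) - 1)
    return p < alpha

def quadratic_order(x, y, alpha=0.05):
    """Partial F-test: does a quadratic term significantly reduce the SSE?"""
    lin = np.polyval(np.polyfit(x, y, 1), x)
    quad = np.polyval(np.polyfit(x, y, 2), x)
    sse1, sse2 = np.sum((y - lin) ** 2), np.sum((y - quad) ** 2)
    F = (sse1 - sse2) / (sse2 / (len(x) - 3))
    p = 1 - stats.f.cdf(F, 1, len(x) - 3)
    return p < alpha          # True -> choose the quadratic model

# Synthetic calibration data with variance growing with concentration.
rng = np.random.default_rng(0)
x = np.repeat([1, 5, 10, 50, 100, 500], 5).astype(float)
y = 2.0 * x + rng.normal(0, 0.05 * x)
print("weighting needed:", needs_weighting(y[x == 1], y[x == 500]))
print("quadratic order:", quadratic_order(x, y))
```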

  14. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  15. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  16. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  17. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  18. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  19. Situating Standard Setting within Argument-Based Validity

    ERIC Educational Resources Information Center

    Papageorgiou, Spiros; Tannenbaum, Richard J.

    2016-01-01

    Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…

  20. Constant temperature hot wire anemometry data reduction procedure

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.

    1974-01-01

    The theory and data reduction procedure for constant temperature hot wire anemometry are presented. The procedure is valid for all Mach and Prandtl numbers, but limited to Reynolds numbers based on wire diameter between 0.1 and 300. The fluids are limited to gases which approximate ideal gas behavior. Losses due to radiation, free convection and conduction are included.
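
    Since the stated validity window is expressed as a wire-diameter Reynolds number, a quick check with Re = ρUd/μ shows which operating points qualify; the fluid properties and wire diameter below are illustrative values for air near room temperature, not figures from the report.

```python
# Wire Reynolds number: Re = rho * U * d / mu, valid range 0.1 to 300.

def wire_reynolds(rho, velocity, diameter, mu):
    return rho * velocity * diameter / mu

rho = 1.2        # kg/m^3, air density (illustrative)
mu = 1.8e-5      # Pa*s, dynamic viscosity of air (illustrative)
d = 5e-6         # m, 5-micron hot wire (illustrative)
for u in (0.1, 1.0, 10.0, 100.0):   # m/s
    re = wire_reynolds(rho, u, d, mu)
    status = "within" if 0.1 <= re <= 300 else "outside"
    print(f"U = {u:6.1f} m/s -> Re = {re:7.3f}, {status} validity range")
```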

  1. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high-quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high-quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional…

  2. Office-Based Procedures for the Diagnosis and Treatment of Laryngeal Pathology.

    PubMed

    Wellenstein, David J; Schutte, Henrieke W; Takes, Robert P; Honings, Jimmie; Marres, Henri A M; Burns, James A; van den Broek, Guido B

    2017-09-18

    Since the development of distal chip endoscopes with a working channel, diagnostic and therapeutic possibilities in the outpatient clinic in the management of laryngeal pathology have increased. Which of these office-based procedures are currently available, and their clinical indications and possible advantages, remains unclear. Review of literature on office-based procedures in laryngology and head and neck oncology. Flexible endoscopic biopsy (FEB), vocal cord injection, and laser surgery are well-established office-based procedures that can be performed under topical anesthesia. These procedures demonstrate good patient tolerability and multiple advantages. Office-based procedures under topical anesthesia are currently an established method in the management of laryngeal pathology. These procedures offer medical and economic advantages compared with operating room-performed procedures. Furthermore, office-based procedures enhance the speed and timing of the diagnostic and therapeutic process. Copyright © 2017 The Voice Foundation. All rights reserved.

  3. Reliable and valid assessment of Lichtenstein hernia repair skills.

    PubMed

    Carlsen, C G; Lindorff-Larsen, K; Funch-Jensen, P; Lund, L; Charles, P; Konge, L

    2014-08-01

    Lichtenstein hernia repair is a common surgical procedure and one of the first procedures performed by a surgical trainee. However, formal assessment tools developed for this procedure are few and sparsely validated. The aim of this study was to determine the reliability and validity of an assessment tool designed to measure surgical skills in Lichtenstein hernia repair. Key issues were identified through a focus group interview. On this basis, an assessment tool with eight items was designed. Ten surgeons and surgical trainees were video recorded while performing Lichtenstein hernia repair, (four experts, three intermediates, and three novices). The videos were blindly and individually assessed by three raters (surgical consultants) using the assessment tool. Based on these assessments, validity and reliability were explored. The internal consistency of the items was high (Cronbach's alpha = 0.97). The inter-rater reliability was very good with an intra-class correlation coefficient (ICC) = 0.93. Generalizability analysis showed a coefficient above 0.8 even with one rater. The coefficient improved to 0.92 if three raters were used. One-way analysis of variance found a significant difference between the three groups which indicates construct validity, p < 0.001. Lichtenstein hernia repair skills can be assessed blindly by a single rater in a reliable and valid fashion with the new procedure-specific assessment tool. We recommend this tool for future assessment of trainees performing Lichtenstein hernia repair to ensure that the objectives of competency-based surgical training are met.

  4. Validity of Basic Electronic 1 Module Integrated Character Value Based on Conceptual Change Teaching Model to Increase Students Physics Competency in STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Hidayati, A.; Rahmi, A.; Yohandri; Ratnawulan

    2018-04-01

    The importance of teaching materials that suit the characteristics of students was the main reason for developing the Basic Electronics I module, which integrates character values based on a conceptual change teaching model. Module development in this research followed Plomp's development procedure, which includes preliminary research, a prototyping phase and an assessment phase. In the first year of this research, the module was validated. Content validity concerns the conformity of the module with development theory and with the demands of the learning model's characteristics. Construct validity concerns the linkage and consistency of each module component with the characteristics of the learning model integrating character values, as obtained through validator assessment. The average validation score assessed by the validators belongs to the very valid category. Based on the validators' assessments, the Basic Electronics I module integrating character values based on the conceptual change teaching model was then revised.

  5. A function-based approach to cockpit procedure aids

    NASA Technical Reports Server (NTRS)

    Phatak, Anil V.; Jain, Parveen; Palmer, Everett

    1990-01-01

    The objective of this research is to develop and test a cockpit procedural aid that can compose and present procedures appropriate for the given flight situation, the status of the aircraft engineering systems, and the environmental conditions. Prescribed procedures already exist for normal as well as for a number of non-normal and emergency situations, and can be presented to the crew using an interactive cockpit display. However, no procedures are prescribed or recommended for a host of plausible flight situations involving multiple malfunctions compounded by adverse environmental conditions. Under these circumstances, the cockpit procedural aid must review the prescribed procedures for the individual malfunctions (when available), evaluate the alternatives or options, and present one or more composite procedures (prioritized or unprioritized) in response to the given situation. A top-down, function-based conceptual approach to composing and presenting cockpit procedures is being investigated. This approach is based on the thought process that an operating crew must go through while attempting to meet the flight objectives in the current flight situation. To accomplish the flight objectives, certain critical functions must be maintained during each phase of the flight, using the appropriate procedures or success paths. The viability of these procedures depends upon the availability of required resources. If the available resources are not sufficient to meet the requirements, alternative procedures (success paths) using the available resources must be constructed to maintain the critical functions and the corresponding objectives. If no success path exists that can satisfy the critical functions and objectives, then the next level of critical functions/objectives must be selected and the process repeated. Information is given in viewgraph form.

  6. Analytic Validation of Immunohistochemical Assays: A Comparison of Laboratory Practices Before and After Introduction of an Evidence-Based Guideline.

    PubMed

    Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. To assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline, a survey on current immunohistochemistry assay validation practices and on the awareness and adoption of the recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent, and 79.9% (550 of 688) of those had already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline validation practices had a positive impact on laboratory performance; some or all of the recommendations have been adopted by nearly 80% of respondents.

  7. The Effect of the Extinction Procedure in Function-Based Intervention

    ERIC Educational Resources Information Center

    Janney, Donna M.; Umbreit, John; Ferro, Jolenea B.; Liaupsin, Carl J.; Lane, Kathleen L.

    2013-01-01

    In this study, we examined the contribution of the extinction procedure in function-based interventions implemented in the general education classrooms of three at-risk elementary-aged students. Function-based interventions included antecedent adjustments, reinforcement procedures, and function-matched extinction procedures. Using a combined ABC…

  8. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different from those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  9. Validation of biomarkers of food intake-critical assessment of candidate biomarkers.

    PubMed

    Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G

    2018-01-01

    Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance to dietary guidelines or to a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed including eight criteria: plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.

  10. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    PubMed

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) and an externally validated instrument, the California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning, and between total ACT score and deductive reasoning, were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.
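
    A minimal sketch of the correlation analysis named above, using invented score vectors rather than the study's data:

      # Pearson correlation between a custom assessment sub-score and an
      # external instrument sub-score; the numbers are made up.
      from scipy.stats import pearsonr

      act_blooms2 = [12, 15, 9, 14, 11, 16, 10, 13]
      cctst_deductive = [18, 22, 14, 21, 17, 24, 15, 19]

      r, p = pearsonr(act_blooms2, cctst_deductive)
      print(f"r = {r:.2f}, p = {p:.3f}")  # judged against alpha = 0.05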

  11. Validation of Procedures for Monitoring Crewmember Immune Function

    NASA Technical Reports Server (NTRS)

    Crucian, Brian; Stowe, Raymond; Mehta, Satish; Uchakin, Peter; Quiriarte, Heather; Pierson, Duane; Sams, Clarence

    2008-01-01

    There is ample evidence to suggest that space flight leads to immune system dysregulation. This may be a result of microgravity, confinement, physiological stress, radiation, environment or other mission-associated factors. The clinical risk (if any) from prolonged immune dysregulation during exploration-class space flight has not yet been determined, but may include increased incidence of infection, allergy, hypersensitivity, hematological malignancy or altered wound healing. Each of the clinical events resulting from immune dysfunction has the potential to impact mission critical objectives during exploration-class missions. To date, precious little in-flight immune data has been generated to assess this phenomenon. The majority of recent flight immune studies have been post-flight assessments, which may not accurately reflect the in-flight status of immunity as it resolves over prolonged flight. There are no procedures currently in place to monitor immune function or its effect on crew health. The objective of this Supplemental Medical Objective (SMO) is to develop and validate an immune monitoring strategy consistent with operational flight requirements and constraints. This SMO will assess immunity, latent viral reactivation and physiological stress during both short and long duration flights. Upon completion, it is expected that any clinical risks resulting from the adverse effects of space flight on the human immune system will have been determined. In addition, a flight-compatible immune monitoring strategy will have been developed with which countermeasures validation could be performed. This study will determine, to the best level allowed by current technology, the in-flight status of crewmembers' immune systems. The in-flight samples will allow a distinction between legitimate in-flight alterations and the physiological stresses of landing and readaptation which are believed to alter R+0 assessments. The overall status of the immune system during flight

  12. The validation of procedures to assess prevocational task preferences in retarded adults.

    PubMed

    Mithaug, D E; Hanawalt, D A

    1978-01-01

    Three severely retarded young adults between the ages of 19 and 21 years participated in a prevocational training program, and worked regularly on six different tasks during the scheduled six-hour day. The study attempted to assess each subject's preferences for the six tasks: collating, stuffing, sorting, pulley assembly, flour-sifter assembly, and circuit-board stuffing. In Phase I, the procedure consisted of randomly pairing each task with all other tasks in a two-choice situation that required the subjects to select one task from each pair combination to work for a seven-minute period. The selection procedure consisted of presenting two representative task objects on a tray and requesting the subject to pick up one object and place it on the work table. The object selected represented the task worked for that period. The 15 possible pair combinations were presented randomly every two days for a period of 34 days to determine the preferences. During the validation phase (Phase II), each subject's least- and most-preferred tasks were paired separately with moderately-preferred tasks. As expected, these manipulations confirmed the baseline data, as choices for the moderately-preferred tasks decreased when consistently paired with the preferred tasks and increased when consistently paired with the least-preferred tasks.
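
    A sketch of the two-choice tallying that underlies such a preference assessment; the task names follow the abstract, but the choice function and output are illustrative:

      # Present every pair of tasks, tally which task is chosen, and rank
      # tasks by the proportion of pairs won; observed_choice is a stand-in
      # for the real observation of the subject's selection.
      from itertools import combinations
      import random

      TASKS = ["collating", "stuffing", "sorting", "pulley_assembly",
               "sifter_assembly", "circuit_board_stuffing"]

      def observed_choice(a, b):
          return random.choice([a, b])   # placeholder for a real trial

      tally = {t: 0 for t in TASKS}
      for a, b in combinations(TASKS, 2):    # the 15 possible pairs
          tally[observed_choice(a, b)] += 1

      for task in sorted(tally, key=tally.get, reverse=True):
          print(task, tally[task] / (len(TASKS) - 1))  # share of pairs won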

  13. Procedure-specific assessment tool for flexible pharyngo-laryngoscopy: gathering validity evidence and setting pass-fail standards.

    PubMed

    Melchiors, Jacob; Petersen, K; Todsen, T; Bohr, A; Konge, Lars; von Buchwald, Christian

    2018-06-01

    The attainment of specific identifiable competencies is the primary measure of progress in the modern medical education system. To be feasible, the system therefore requires a method for accurately assessing competence. Evidence of validity needs to be gathered before an assessment tool can be implemented in the training and assessment of physicians. According to contemporary validity theory, this evidence must be gathered from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is central to the otorhinolaryngologist. We aim to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) created in a previous study and to establish a pass-fail level for proficiency. Eighteen physicians with different levels of experience (novices, intermediates, and experienced) were recruited to the study. Each performed an FPL on two patients. These procedures were video recorded, blinded, and assessed by two specialists. The score was expressed as the percentage of the possible maximum score. Cronbach's α was used to analyze internal consistency of the data, and a generalizability analysis was performed. The scores of the three groups were explored, and a pass-fail level was determined using the contrasting-groups standard-setting method. Internal consistency was strong, with a Cronbach's α of 0.86. We found a generalizability coefficient of 0.72, sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson's r = 0.75). The pass/fail level was established at 72% of the maximum score. Applying this pass-fail level in the test population resulted in half of the intermediary group receiving a failing score. We gathered validity evidence for the FLEXPAT according to the contemporary framework as described by Messick. Our results support a claim of validity and are
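
    A sketch of the two analyses named above, Cronbach's alpha over an item-score matrix and a contrasting-groups cut score, on invented data:

      import numpy as np

      # Respondents x items matrix of checklist scores (made-up data).
      X = np.array([[3, 4, 3, 5],
                    [2, 2, 3, 3],
                    [4, 5, 4, 5],
                    [1, 2, 2, 2],
                    [3, 3, 4, 4]])
      k = X.shape[1]
      alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                             / X.sum(axis=1).var(ddof=1))

      # Contrasting groups: choose the cut (percent of max score) that
      # minimizes total misclassification of known novices and experts.
      novices = np.array([41, 48, 55, 52, 60])
      experts = np.array([70, 78, 82, 75, 88])
      cuts = np.arange(101)
      errors = [(novices >= c).sum() + (experts < c).sum() for c in cuts]
      print(f"alpha = {alpha:.2f}, cut = {cuts[int(np.argmin(errors))]}%")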

  14. Video-augmented feedback for procedural performance.

    PubMed

    Wittler, Mary; Hartman, Nicholas; Manthey, David; Hiestand, Brian; Askew, Kim

    2016-06-01

    Residency programs must assess residents' achievement of core competencies for clinical and procedural skills. Video-augmented feedback may facilitate procedural skill acquisition and promote more accurate self-assessment. We conducted a randomized controlled study to investigate whether video-augmented verbal feedback leads to increased procedural skill and improved accuracy of self-assessment compared with verbal-only feedback. Participants were evaluated during procedural training for ultrasound-guided internal jugular central venous catheter (US IJ CVC) placement. All participants received feedback based on a validated 30-point checklist for US IJ CVC placement and a validated 6-point procedural global rating scale. Scores in both groups improved by a mean of 9.6 points (95% CI: 7.8-11.4) on the 30-point checklist, with no difference between groups in mean score improvement on the global rating scale. With regard to self-assessment, participant self-rating diverged from faculty scoring, increasingly so after receiving feedback. Residents rated highly by faculty underestimated their skill, while those rated more poorly demonstrated increasing overestimation. Accuracy of self-assessment was not improved by the addition of video. While feedback advanced the skill of the resident, video-augmented feedback did not enhance skill acquisition or improve accuracy of resident self-assessment compared with standard feedback.

  15. Design Guidance for Computer-Based Procedures for Field Workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which has potential consequences of taking up valuable time when operators must be responding to the situation, and potentially leading operators down an incorrect response path. Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and

  16. Competency-Based Training and Simulation: Making a "Valid" Argument.

    PubMed

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  17. Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2016-01-01

    Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel, placed on the floor of a wind tunnel, which was subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correct for spatial non-uniformities and lamp brightness variation, and finally convert fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step only a selected number of pixels were chosen as "virtual sensors" and the correlation-length-based buffet calculation procedure was applied to determine "modeled" force fluctuations. By progressively decreasing the number of virtual sensors it was observed that the present calculation procedure was able to make a close estimate of the "true
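
    A minimal sketch of the spectral cross-check named above, comparing two unsteady-pressure histories via Welch spectra and coherence; the signals are synthetic, not wind-tunnel data:

      import numpy as np
      from scipy import signal

      fs = 5000.0                              # sampling rate in Hz (assumed)
      t = np.arange(0, 2.0, 1.0 / fs)
      tone = np.sin(2 * np.pi * 180.0 * t)     # a 180 Hz shedding tone
      psp = tone + 0.5 * np.random.randn(t.size)      # noisy PSP pixel
      sensor = tone + 0.3 * np.random.randn(t.size)   # nearby pressure sensor

      f, Pxx = signal.welch(psp, fs=fs, nperseg=1024)
      print(f"PSP spectral peak at {f[np.argmax(Pxx)]:.0f} Hz")
      f, Cxy = signal.coherence(psp, sensor, fs=fs, nperseg=1024)
      print(f"coherence near 180 Hz: {Cxy[np.argmin(np.abs(f - 180.0))]:.2f}")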

  18. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true No assumption of validity. 60-3.9 Section 60-3.9 Public Contracts and Property Management Other Provisions Relating to Public... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  19. Validity evidence for an OSCE to assess competency in systems-based practice and practice-based learning and improvement: a preliminary investigation.

    PubMed

    Varkey, Prathibha; Natt, Neena; Lesnick, Timothy; Downing, Steven; Yudkowsky, Rachel

    2008-08-01

    To determine the psychometric properties and validity of an OSCE to assess the competencies of Practice-Based Learning and Improvement (PBLI) and Systems-Based Practice (SBP) in graduate medical education. An eight-station OSCE was piloted at the end of a three-week Quality Improvement elective for nine preventive medicine and endocrinology fellows at Mayo Clinic. The stations assessed performance in quality measurement, root cause analysis, evidence-based medicine, insurance systems, team collaboration, prescription errors, Nolan's model, and negotiation. Fellows' performance in each of the stations was assessed by three faculty experts using checklists and a five-point global competency scale. A modified Angoff procedure was used to set standards. Evidence for the OSCE's validity, feasibility, and acceptability was gathered. Evidence for content and response process validity was judged as excellent by institutional content experts. Interrater reliability of scores ranged from 0.85 to 1 for most stations. Interstation correlation coefficients ranged from -0.62 to 0.99, reflecting case specificity. Implementation cost was approximately $255 per fellow. All faculty members agreed that the OSCE was realistic and capable of providing accurate assessments. The OSCE provides an opportunity to systematically sample the different subdomains of Quality Improvement. Furthermore, the OSCE provides an opportunity for the demonstration of skills rather than the testing of knowledge alone, thus making it a potentially powerful assessment tool for SBP and PBLI. The study OSCE was well suited to assess SBP and PBLI. The evidence gathered through this study lays the foundation for future validation work.
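
    A sketch of an Angoff-style computation like the standard setting named above; the judge ratings are invented:

      # Each judge rates, for every station item, the probability that a
      # borderline (minimally competent) fellow succeeds; the cut score is
      # the judge-averaged sum of those probabilities.
      import numpy as np

      ratings = np.array([[0.8, 0.6, 0.9, 0.7, 0.5],    # judge 1
                          [0.7, 0.5, 0.8, 0.6, 0.6],    # judge 2
                          [0.9, 0.6, 0.9, 0.8, 0.5]])   # judge 3

      cut_score = ratings.sum(axis=1).mean()
      print(f"cut score = {cut_score:.1f} of {ratings.shape[1]} points")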

  20. Procedural training and assessment of competency utilizing simulation.

    PubMed

    Sawyer, Taylor; Gray, Megan M

    2016-11-01

    This review examines the current environment of neonatal procedural learning, describes an updated model of skills training, defines the role of simulation in assessing competency, and discusses potential future directions for simulation-based competency assessment. In order to maximize impact, simulation-based procedural training programs should follow a standardized and evidence-based approach to designing and evaluating educational activities. Simulation can be used to facilitate the evaluation of competency, but must incorporate validated assessment tools to ensure quality and consistency. True competency evaluation cannot be accomplished with simulation alone: competency assessment must also include evaluations of procedural skill during actual clinical care. Future work in this area is needed to measure and track clinically meaningful patient outcomes resulting from simulation-based training, to examine the use of simulation to assist physicians undergoing re-entry to practice, and to examine the use of procedural skills simulation as part of maintenance of competency and life-long learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.
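
    A sketch in the spirit of premise (3): represent rules as a directed graph and detect a structural defect, here a circular rule chain, with a standard graph algorithm; the rule base is invented:

      # Premise -> conclusion edges for a toy rule base; a cycle indicates
      # circular reasoning, one kind of structural consistency violation.
      import networkx as nx

      rules = [("A", "B"), ("B", "C"), ("C", "A"),   # circular chain
               ("C", "D")]
      g = nx.DiGraph(rules)
      print("circular rule chains:", list(nx.simple_cycles(g)))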

  2. Delphi Method Validation of a Procedural Performance Checklist for Insertion of an Ultrasound-Guided Internal Jugular Central Line.

    PubMed

    Hartman, Nicholas; Wittler, Mary; Askew, Kim; Manthey, David

    2016-01-01

    Placement of ultrasound-guided central lines is a critical skill for physicians in several specialties. Improving the quality of care delivered surrounding this procedure demands rigorous measurement of competency, and validated tools to assess performance are essential. Using the iterative, modified Delphi technique and experts in multiple disciplines across the United States, the study team created a 30-item checklist designed to assess competency in the placement of ultrasound-guided internal jugular central lines. Cronbach α was .94, indicating an excellent degree of internal consistency. Further validation of this checklist will require its implementation in simulated and clinical environments. © The Author(s) 2014.

  3. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  4. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
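
    A minimal sketch of a one-up/one-down "yes-no" staircase of the kind evaluated; the simulated observer, step size, and stopping rule are illustrative assumptions:

      import random

      def observer_detects(contrast, threshold=0.2):
          return contrast >= threshold or random.random() < 0.05  # rare guesses

      contrast, step = 1.0, 0.05
      reversals, last_direction = [], None
      while len(reversals) < 8:
          direction = -1 if observer_detects(contrast) else +1  # down if seen
          if last_direction is not None and direction != last_direction:
              reversals.append(contrast)
          contrast = max(0.01, contrast + direction * step)
          last_direction = direction

      print(f"threshold estimate ~ {sum(reversals) / len(reversals):.3f}")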

  5. A Proposal on the Validation Model of Equivalence between PBLT and CBLT

    ERIC Educational Resources Information Center

    Chen, Huilin

    2014-01-01

    The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…

  6. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
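
    A sketch of the kind of group-discrimination test used to validate a single metric; the task-time data are invented:

      # Does a metric separate novice / intermediate / expert groups?
      from scipy.stats import kruskal

      task_time_s = {
          "novice":       [145, 160, 152, 171, 158],
          "intermediate": [118, 125, 110, 131, 122],
          "expert":       [92, 88, 99, 95, 90],
      }
      h, p = kruskal(*task_time_s.values())
      print(f"H = {h:.1f}, p = {p:.4f}")   # small p: metric discriminates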

  7. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
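
    A sketch of the prior-to-posterior hand-off between unit-problem tiers, using conjugate normal updating on invented data; this illustrates the chaining idea only, not the actual MFIX calibration:

      def update_normal(prior_mean, prior_var, data, noise_var):
          """Posterior of a normal mean: normal prior, iid normal data."""
          n = len(data)
          post_var = 1.0 / (1.0 / prior_var + n / noise_var)
          post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
          return post_mean, post_var

      mean, var = 1.0, 0.5 ** 2                     # tier-0 prior for a parameter
      for tier_data in ([1.15, 1.08, 1.22],         # simple unit problem
                        [1.18, 1.25, 1.20, 1.17]):  # next, more complex tier
          mean, var = update_normal(mean, var, tier_data, noise_var=0.1 ** 2)

      print(f"final posterior: mean = {mean:.3f}, sd = {var ** 0.5:.3f}")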

  8. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (UA-D-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.

    The National Human Exposure Assessment Sur...

  9. SATS HVO Concept Validation Experiment

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine

    2005-01-01

    A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine whether a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload, and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approach and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as when flying today's procedures.

  10. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  11. Modal-pushover-based ground-motion scaling procedure

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant contributions of higher modes by considering elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4, 6, and 13 stories), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
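
    The core scaling step can be sketched as a one-dimensional search for the factor that brings the SDF peak deformation to the target; peak_deformation stands in for an inelastic response-history analysis, and the toy record below is invented:

      def scale_factor(record, target, peak_deformation,
                       tol=0.01, lo=0.1, hi=10.0):
          """Bisection on the factor; assumes response grows with it."""
          for _ in range(100):
              mid = 0.5 * (lo + hi)
              d = peak_deformation([mid * a for a in record])
              if abs(d - target) <= tol * target:
                  return mid
              lo, hi = (mid, hi) if d < target else (lo, mid)
          raise RuntimeError("no admissible factor in [lo, hi]")

      # Toy stand-in: peak deformation proportional to peak acceleration.
      f = scale_factor([0.10, -0.30, 0.25], target=0.6,
                       peak_deformation=lambda r: 2.0 * max(abs(a) for a in r))
      print(f"scale factor ~ {f:.2f}")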

  12. Refining and validating a two-stage and web-based cancer risk assessment tool for village doctors in China.

    PubMed

    Shen, Xing-Rong; Chai, Jing; Feng, Rui; Liu, Tong-Zhu; Tong, Gui-Xian; Cheng, Jing; Li, Kai-Chun; Xie, Shao-Yu; Shi, Yong; Wang, De-Bin

    2014-01-01

    The big gap between the efficacy of population-level prevention and expectations, due to the heterogeneity and complexity of cancer etiologic factors, calls for selective yet personalized interventions based on effective risk assessment. This paper documents our research protocol aimed at refining and validating a two-stage, web-based cancer risk assessment tool, from a tentative one in use by an ongoing project, capable of identifying individuals at elevated risk for one or more types of the 80% leading cancers in rural China with adequate sensitivity and specificity, and featuring low cost, easy application, and cultural and technical sensitivity for farmers and village doctors. The protocol adopted a modified population-based case-control design using 72,000 non-patients as controls, 2,200 cancer patients as cases, and another 600 patients as cases for external validation. Factors taken into account comprised 8 domains, including diet and nutrition, risk behaviors, family history, precancerous diseases, related medical procedures, exposure to environmental hazards, mood and feelings, physical activities, and anthropologic and biologic factors. Modeling stresses explored various methodologies like empirical analysis, logistic regression, neural-network analysis, decision theory, and both internal and external validation using concordance statistics, predictive values, etc.

  13. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation, well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g., power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  14. Determining procedures for simulation-based training in radiology: a nationwide needs assessment.

    PubMed

    Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth; Bachmann Nielsen, Michael; Paltved, Charlotte; Lindorff-Larsen, Karen Gilboe; Nielsen, Bjørn Ulrik; Konge, Lars

    2018-06-01

    New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored frequency of procedure, number of radiologists performing the procedure, risk and/or discomfort for patients, and feasibility for simulation. Round 3 was elimination and prioritization of procedures. Response rates were 67%, 70%, and 66%, respectively. In Round 1, 22 technical procedures were included. Round 2 resulted in pre-prioritization of procedures. In Round 3, 13 procedures were included in the final prioritized list. The three most highly prioritized procedures were ultrasound-guided (US) histological biopsy and fine-needle aspiration, US-guided needle puncture and catheter drainage, and basic abdominal ultrasound. A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as a guide for development of training programs. • Simulation-based training can supplement training on patients in radiology. • Development of simulation-based training should follow a structured approach. • The CAMES Needs Assessment Formula explores needs for simulation training. • A national Delphi study identified and prioritized procedures suitable for simulation training. • The prioritized list serves as a guide for development of courses in radiology.

  15. Office-based endovascular suite is safe for most procedures.

    PubMed

    Jain, Krishna; Munn, John; Rummel, Mark C; Johnston, Dan; Longton, Chris

    2014-01-01

    This study was conducted to assess the safety of endovascular procedures performed in the office endovascular suite and to assess patient satisfaction in this setting. Between May 22, 2007, and December 31, 2012, 2822 patients underwent 6458 percutaneous procedures in an office-based endovascular suite. Demographics of the patients, complications, hospital transfers, and 30-day mortality were documented in a prospective manner. Follow-up calls were made, and a satisfaction survey was conducted. Almost all dialysis procedures were done under local anesthesia and peripheral arterial procedures under conscious sedation. All patients, except those undergoing catheter removals, received hydrocodone and acetaminophen (5/325 mg), diazepam (5-10 mg), and one dose of an oral antibiotic preprocedure and three doses postprocedure. Patients who required conscious sedation received fentanyl and midazolam. Conscious sedation was used almost exclusively in patients having an arterial procedure. Measurements of blood urea nitrogen, creatinine, international normalized ratio, and partial thromboplastin time were performed before peripheral arteriograms. All other patients had no preoperative laboratory tests. Patients considered high risk (American Society of Anesthesiologists Physical Status Classification 4), those who could not tolerate the procedure with mild to moderate conscious sedation, patients with a previous bad experience, or patients who weighed >400 pounds were not candidates for office-based procedures. There were 54 total complications (0.8%): venous, 2.2%; aortogram without interventions, 1%; aortogram with interventions, 2.7%; fistulogram, 0.5%; catheters, 0.3%; and venous filter-related, 2%. Twenty-six patients required hospital transfer from the office. Ten patients needed an operative intervention because of a complication. No procedure-related deaths occurred. There were 18 deaths in a 30-day period. Of patients surveyed, 99% indicated that they would come back to the

  16. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Safety validation test equipment operation

    NASA Astrophysics Data System (ADS)

    Kurosaki, Tadaaki; Watanabe, Takashi

    1992-08-01

    An overview of the activities conducted on safety validation test equipment operation for materials used for NASA manned missions is presented. Safety validation tests, such as flammability, odor, offgassing, and so forth were conducted in accordance with NASA-NHB-8060.1C using test subjects common with those used by NASA, and the equipment used were qualified for their functions and performances in accordance with NASDA-CR-99124 'Safety Validation Test Qualification Procedures.' Test procedure systems were established by preparing 'Common Procedures for Safety Validation Test' as well as test procedures for flammability, offgassing, and odor tests. The test operation organization chaired by the General Manager of the Parts and Material Laboratory of NASDA (National Space Development Agency of Japan) was established, and the test leaders and operators in the organization were qualified in accordance with the specified procedures. One-hundred-one tests had been conducted so far by the Parts and Material Laboratory according to the request submitted by the manufacturers through the Space Station Group and the Safety and Product Assurance for Manned Systems Office.

  18. Technical skills assessment toolbox: a review using the unitary framework of validity.

    PubMed

    Ghaderi, Iman; Manji, Farouq; Park, Yoon Soo; Juul, Dorthea; Ott, Michael; Harris, Ilene; Farrell, Timothy M

    2015-02-01

    The purpose of this study was to create a technical skills assessment toolbox for the 35 basic and advanced skills/procedures that comprise the American College of Surgeons (ACS)/Association of Program Directors in Surgery (APDS) surgical skills curriculum and to provide a critical appraisal of the included tools, using the contemporary framework of validity. Competency-based training has become the predominant model in surgical education, and assessment of performance is an essential component. Assessment methods must produce valid results to accurately determine the level of competency. A search was performed, using PubMed and Google Scholar, to identify tools that have been developed for assessment of the targeted technical skills. A total of 23 assessment tools for the 35 ACS/APDS skills modules were identified. Some tools, such as the Objective Structured Assessment of Technical Skill (OSATS) and the Operative Performance Rating System (OPRS), have been tested for more than 1 procedure. Therefore, 30 modules had at least 1 assessment tool, with some common surgical procedures being addressed by several tools. Five modules had none. Only 3 studies used Messick's framework to design their validity studies. The remaining studies used an outdated framework on the basis of "types of validity." When analyzed using the contemporary framework, few of these studies demonstrated validity for content, internal structure, and relationship to other variables. This study provides an assessment toolbox for common surgical skills/procedures. Our review shows that few authors have used the contemporary unitary concept of validity for development of their assessment tools. As we progress toward competency-based training, future studies should provide evidence for various sources of validity using the contemporary framework.

  19. The Shedler-Westen Assessment Procedure (SWAP): Evaluating Psychometric Questions about Its Reliability, Validity, and Impact of Its Fixed Score Distribution

    ERIC Educational Resources Information Center

    Blagov, Pavel S.; Bi, Wu; Shedler, Jonathan; Westen, Drew

    2012-01-01

    The Shedler-Westen Assessment Procedure (SWAP) is a personality assessment instrument designed for use by expert clinical assessors. Critics have raised questions about its psychometrics, most notably its validity across observers and situations, the impact of its fixed score distribution on research findings, and its test-retest reliability. We…

  20. A design procedure and handling quality criteria for lateral directional flight control systems

    NASA Technical Reports Server (NTRS)

    Stein, G.; Henke, A. H.

    1972-01-01

    A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
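
    The quadratic-optimal design step can be illustrated with a generic LQR gain computation; the state-space matrices below are invented, not the F4C lateral-directional model:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy lateral dynamics
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])   # handling-quality-oriented state weights
      R = np.array([[1.0]])      # control-effort weight

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)            # optimal feedback u = -K x
      print("feedback gains:", K)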

  1. Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob

    2016-09-01

    Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.

  2. A Partial Least Squares Based Procedure for Upstream Sequence Classification in Prokaryotes.

    PubMed

    Mehmood, Tahir; Bohlin, Jon; Snipen, Lars

    2015-01-01

    The upstream region of coding genes is important for several reasons, for instance for locating transcription factor binding sites and start-site initiation in genomic DNA. Motivated by a recently conducted study in which a multivariate approach was successfully applied to coding sequence modeling, we have introduced a partial least squares (PLS) based procedure for classifying true upstream prokaryotic sequences from background upstream sequences. The upstream sequences of conserved coding genes over genomes were considered in the analysis, where conserved coding genes were found by using the pan-genomics concept for each prokaryotic species considered. PLS uses a position-specific scoring matrix (PSSM) to study the characteristics of the upstream region. Results obtained by the PLS-based method were compared with the Gini importance of random forests (RF) and with support vector machines (SVM), a widely used method for sequence classification. The upstream sequence classification performance was evaluated using cross-validation, and the suggested approach identifies the prokaryotic upstream region significantly better than RF (p-value < 0.01) and SVM (p-value < 0.01). Further, the proposed method also produced results that concurred with known biological characteristics of the upstream region.
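
    A sketch of PLS-based two-class sequence discrimination on position-wise one-hot encodings (a stand-in for the PSSM representation); the sequences and labels are randomly generated:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      n, length, bases = 200, 40, "ACGT"

      def one_hot(seq):
          return np.eye(4)[[bases.index(b) for b in seq]].ravel()

      # Background is uniform; "true" upstream sequences are AT-enriched.
      probs = [[0.25] * 4] * (n // 2) + [[0.35, 0.15, 0.15, 0.35]] * (n // 2)
      X = np.array([one_hot(rng.choice(list(bases), size=length, p=p))
                    for p in probs])
      y = np.array([0] * (n // 2) + [1] * (n // 2))

      pred = cross_val_predict(PLSRegression(n_components=5), X, y, cv=5)
      print(f"cross-validated accuracy: {((pred.ravel() > 0.5) == y).mean():.2f}")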

  3. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  4. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  5. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  6. Procedures For Microbial-Ecology Laboratory

    NASA Technical Reports Server (NTRS)

    Huff, Timothy L.

    1993-01-01

    The Microbial Ecology Laboratory Procedures Manual provides concise and well-defined instructions on routine technical procedures to be followed in a microbiological laboratory to ensure safety, analytical control, and validity of results.

  7. Validation Process Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John E.; English, Christine M.; Gesick, Joshua C.

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  8. Validation of the Modified Surgeon Periorbital Rating of Edema and Ecchymosis (SPREE) Questionnaire: A Prospective Analysis of Facial Plastic and Reconstructive Surgery Procedures.

    PubMed

    Oliver, Jeremie D; Menapace, Deanna; Younes, Ahmed; Recker, Chelsey; Hamilton, Grant; Friedman, Oren

    2018-02-01

    Although periorbital edema and ecchymosis are commonly encountered after facial plastic and reconstructive surgery procedures, there is currently no validated grading scale to qualify these findings. In this study, the modified "Surgeon Periorbital Rating of Edema and Ecchymosis (SPREE)" questionnaire is used as a grading scale for patients undergoing facial plastic surgery procedures. This article aims to validate a uniform grading scale for periorbital edema and ecchymosis using the modified SPREE questionnaire in the postoperative period. This is a prospective study including 82 patients at two routine postoperative visits (second and seventh postoperative days), wherein the staff and resident physicians, physician assistants (PAs), patients, and any accompanying adults were asked to use the modified SPREE questionnaire to score edema and ecchymosis of each eye of the patient who had undergone a plastic surgery procedure. Interrater and intrarater agreements were then examined. Cohen's kappa coefficient was calculated to measure intrarater and interrater agreement between health care professionals (staff physicians and resident physicians); staff physicians and PAs; and staff physicians, patients, and accompanying adults. Good to excellent agreement was identified between staff physicians and resident physicians, as well as between staff physicians and PAs. There was, however, poor agreement between staff physicians, patients, and accompanying adults. In addition, excellent agreement was found for intraobserver reliability during same-day visits. The modified SPREE questionnaire is a validated grading system for use by health care professionals to reliably rate periorbital edema and ecchymosis in the postoperative period. Validation of the modified SPREE questionnaire may improve ubiquity in medical literature reporting and related outcomes reporting in the future.
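
    A minimal sketch of the agreement statistic named above, Cohen's kappa between two raters' edema grades; the ratings are invented:

      from sklearn.metrics import cohen_kappa_score

      staff_md = [0, 1, 2, 2, 3, 1, 0, 2, 1, 3]   # SPREE grades, rater 1
      resident = [0, 1, 2, 1, 3, 1, 0, 2, 2, 3]   # SPREE grades, rater 2

      kappa = cohen_kappa_score(staff_md, resident)
      print(f"Cohen's kappa = {kappa:.2f}")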

  9. Laboratory Analytical Procedures | Bioenergy | NREL

    Science.gov Websites

    NREL develops laboratory analytical procedures (LAPs) to provide validated methods for biofuels and pyrolysis bio-oils research. Biomass Compositional Analysis: these lab procedures provide tested and accepted methods for performing biomass compositional analysis.

  10. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one-fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts to improve archetype design processes.

  11. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

    A stochastic model has been developed to describe the probability of the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the desired probability of a crack of a certain length at a given location after a certain number of cycles or a certain time. Quantitative estimation with the developed model was also discussed. Application of the model to develop a procedure for reliability-based, cost-effective, fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.

  12. Resampling procedures to identify important SNPs using a consensus approach.

    PubMed

    Pardy, Christopher; Motyer, Allan; Wilson, Susan

    2011-11-29

    Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
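    A minimal sketch of this two-stage consensus idea, assuming scikit-learn and synthetic genotype data (the dimensions, thresholds, and variable names are illustrative, not the authors' settings):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 200, 1000                                   # samples, SNPs (toy sizes)
        X = rng.integers(0, 3, size=(n, p)).astype(float)  # genotypes coded 0/1/2
        y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # Q1-like phenotype

        # Stage 1: random-forest importances discard unimportant predictors.
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        keep = np.argsort(rf.feature_importances_)[-100:]  # ~100 surviving SNPs

        # Stage 2: cross-validated LASSO chooses the shrinkage parameter and
        # performs the final selection on the reduced set.
        lasso = LassoCV(cv=5).fit(X[:, keep], y)
        selected = keep[np.nonzero(lasso.coef_)[0]]
        print("consensus SNP indices:", selected)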

  13. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Throneburg, E. B.; Jones, J. M.

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  14. Combining tabular, rule-based, and procedural knowledge in computer-based guidelines for childhood immunization.

    PubMed

    Miller, P L; Frawley, S J; Sayward, F G; Yasnoff, W A; Duncan, L; Fleming, D W

    1997-06-01

    IMM/Serve is a computer program which implements the clinical guidelines for childhood immunization. IMM/Serve accepts as input a child's immunization history. It then indicates which vaccinations are due and which vaccinations should be scheduled next. The clinical guidelines for immunization are quite complex and are modified quite frequently. As a result, it is important that IMM/Serve's knowledge be represented in a format that facilitates the maintenance of that knowledge as the field evolves over time. To achieve this goal, IMM/Serve uses four representations for different parts of its knowledge base: (1) Immunization forecasting parameters that specify the minimum ages and wait-intervals for each dose are stored in tabular form. (2) The clinical logic that determines which set of forecasting parameters applies for a particular patient in each vaccine series is represented using if-then rules. (3) The temporal logic that combines dates, ages, and intervals to calculate recommended dates, is expressed procedurally. (4) The screening logic that checks each previous dose for validity is performed using a decision table that combines minimum ages and wait intervals with a small amount of clinical logic. A knowledge maintenance tool, IMM/Def, has been developed to help maintain the rule-based logic. The paper describes the design of IMM/Serve and the rationale and role of the different forms of knowledge used.
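    A toy sketch of how the four knowledge representations listed above can coexist in code, using an invented vaccine series rather than the real immunization guideline (all parameter values and names here are hypothetical):

        from datetime import date, timedelta

        # (1) Tabular forecasting parameters: minimum age and wait interval, in days.
        TABLE = {"standard":   {"min_age": 42, "wait": 28},
                 "late_start": {"min_age": 90, "wait": 56}}

        def pick_parameters(age_at_first_dose_days):
            # (2) Rule-based clinical logic selecting a parameter set.
            return TABLE["late_start"] if age_at_first_dose_days > 60 else TABLE["standard"]

        def next_due(birth, last_dose, params):
            # (3) Procedural temporal logic combining dates, ages, and intervals.
            return max(birth + timedelta(days=params["min_age"]),
                       last_dose + timedelta(days=params["wait"]))

        def dose_valid(birth, dose_date, params):
            # (4) Screening logic: a previous dose given too early is invalid.
            return dose_date >= birth + timedelta(days=params["min_age"])

        birth, first = date(2024, 1, 1), date(2024, 3, 1)
        p = pick_parameters((first - birth).days)
        print(dose_valid(birth, first, p), next_due(birth, first, p))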

  15. Development of a base mix design procedure.

    DOT National Transportation Integrated Search

    1974-01-01

    This paper reports an investigation into the development of a compaction procedure for base mixes containing aggregates of 1 1/2" (38 mm) maximum size. The specimens were made in a manner similar to that given in ASTM Designation 1561-71, except that...

  16. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V& V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V& V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned; which are intended to benefit future model-based and simulation-based development efforts.

  17. Using the Multiple-Choice Procedure to Measure the Relative Reinforcing Efficacy of Gambling: Initial Validity Evidence Among College Students.

    PubMed

    Butler, Leon H; Irons, Jessica G; Bassett, Drew T; Correia, Christopher J

    2018-06-01

    The multiple choice procedure (MCP) is used to assess the relative reinforcing value of concurrently available stimuli. The MCP was originally developed to assess the reinforcing value of drugs; the current within-subjects study employed the MCP to assess the reinforcing value of gambling behavior. Participants (N = 323) completed six versions of the MCP that presented hypothetical choices between money to be used while gambling ($10 or $25) versus escalating amounts of guaranteed money available immediately or after delays of either 1 week or 1 month. Results suggest that choices on the MCP are correlated with other measures of gambling behavior, thus providing concurrent validity data for using the MCP to quantify the relative reinforcing value of gambling. The MCP for gambling also displayed sensitivity to reinforcer magnitude and delay effects, which provides evidence of criterion validity. The results are consistent with a behavioral economic model of addiction and suggest that the MCP could be a valid tool for future research on gambling behavior.

  18. Policy and Validity Prospects for Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  19. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, operate at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative trend based multiple-scale validation is therefore proposed. First, the SDG model is built and qualitative trends are added to the model. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with outputs of the simulation model at different scales. Finally, the effectiveness is demonstrated by validating a reactor model.
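    A minimal sketch of positive inference over a signed directed graph, with an invented process graph (node names and edge signs are illustrative only): the sign of a disturbance is propagated along signed edges to produce a qualitative testing scenario.

        # Edges map (cause, effect) to +1 (same direction) or -1 (opposite).
        edges = {("feed", "level"): +1,
                 ("level", "outflow"): +1,
                 ("cooling", "temp"): -1}

        def propagate(source, sign, edges):
            scenario = {source: sign}
            changed = True
            while changed:
                changed = False
                for (u, v), s in edges.items():
                    if u in scenario and v not in scenario:
                        scenario[v] = scenario[u] * s  # qualitative trend of v
                        changed = True
            return scenario

        print(propagate("feed", +1, edges))  # {'feed': 1, 'level': 1, 'outflow': 1}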

  20. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    PubMed

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. The objective was to establish new benchmark data on IHC laboratory practices. A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  1. Brief surgical procedure code lists for outcomes measurement and quality improvement in resource-limited settings.

    PubMed

    Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul

    2017-11-01

    The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings.
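    The list-building step described above reduces to a cumulative-frequency computation; a sketch with invented code counts (the real study used ICD-9-CM codes and 6,464 procedures):

        from collections import Counter

        # Hypothetical list of codes assigned to logged procedures.
        assigned = ["47.0", "74.1", "47.0", "85.21", "47.0", "74.1", "53.0"]
        counts = Counter(assigned).most_common()
        total = sum(c for _, c in counts)

        for target in (0.90, 0.95, 0.98):
            covered, n_codes = 0, 0
            for code, c in counts:
                covered += c
                n_codes += 1
                if covered / total >= target:
                    break
            print(f"{n_codes} codes cover {target:.0%} of assignments")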

  2. 21 CFR 1270.31 - Written procedures.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...

  3. 21 CFR 1270.31 - Written procedures.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...

  4. 21 CFR 1270.31 - Written procedures.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...

  5. 21 CFR 1270.31 - Written procedures.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... procedures prepared and followed for all significant steps in the infectious disease testing process under... procedures prepared, validated, and followed for prevention of infectious disease contamination or cross...

  6. Large scale validation of an efficient CRISPR/Cas-based multi gene editing protocol in Escherichia coli.

    PubMed

    Zerbini, Francesca; Zanella, Ilaria; Fraccascia, Davide; König, Enrico; Irene, Carmela; Frattini, Luca F; Tomasi, Michele; Fantappiè, Laura; Ganfini, Luisa; Caproni, Elena; Parri, Matteo; Grandi, Alberto; Grandi, Guido

    2017-04-24

    The exploitation of the CRISPR/Cas9 machinery coupled to lambda (λ) recombinase-mediated homologous recombination (recombineering) is becoming the method of choice for genome editing in E. coli. First proposed by Jiang and co-workers, the strategy has been subsequently fine-tuned by several authors who demonstrated, using a few selected loci, that the efficiency of mutagenesis (number of mutant colonies over total number of colonies analyzed) can be extremely high (up to 100%). However, from published data it is difficult to appreciate the robustness of the technology, defined as the number of successfully mutated loci over the total number of targeted loci. This information is particularly relevant in high-throughput genome editing, where repetition of experiments to rescue missing mutants would be impractical. This work describes a "brute force" validation activity, which culminated in the definition of a robust, simple and rapid protocol for single or multiple gene deletions. We first set up our own version of the CRISPR/Cas9 protocol and then evaluated the mutagenesis efficiency by changing different parameters, including the sequence of guide RNAs, the length and concentration of donor DNAs, and the use of single-stranded and double-stranded donor DNAs. We then validated the optimized conditions by targeting 78 "dispensable" genes. This work led to the definition of a protocol, featuring the use of double-stranded synthetic donor DNAs, which guarantees mutagenesis efficiencies consistently higher than 10% and a robustness of 100%. The procedure can also be applied for simultaneous gene deletions. This work defines for the first time the robustness of a CRISPR/Cas9-based protocol based on a large sample size. Since the technical solutions proposed here can be applied to other similar procedures, the data could be of general interest for the scientific community working on bacterial genome editing and, in particular, for those involved in synthetic biology projects.
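    The two headline metrics are simple ratios; a toy illustration distinguishing them (the colony counts are invented):

        # Per-locus efficiency = mutant colonies / colonies screened;
        # robustness = loci yielding at least one mutant / loci targeted.
        loci = {"geneA": (12, 48), "geneB": (5, 48), "geneC": (48, 48)}

        efficiency = {gene: m / n for gene, (m, n) in loci.items()}
        robustness = sum(m > 0 for m, _ in loci.values()) / len(loci)
        print(efficiency, f"robustness = {robustness:.0%}")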

  7. Development and validation of a measurement procedure based on ultra-high performance liquid chromatography-tandem mass spectrometry for simultaneous measurement of β-lactam antibiotic concentration in human plasma.

    PubMed

    Rigo-Bonnin, Raül; Ribera, Alba; Arbiol-Roca, Ariadna; Cobo-Sacristán, Sara; Padullés, Ariadna; Murillo, Òscar; Shaw, Evelyn; Granada, Rosa; Pérez-Fernández, Xosé L; Tubau, Fe; Alía, Pedro

    2017-05-01

    The administration of β-lactam antibiotics in continuous infusion could allow optimization of the pharmacokinetic/pharmacodynamic parameters, especially in the treatment of serious bacterial infections. In this context, and also due to variability in their plasma concentrations, therapeutic drug monitoring (TDM) may be useful to optimize dosing and, therefore, be useful for clinicians. We developed and validated a measurement procedure based on ultra-high performance liquid chromatography-tandem mass spectrometry for simultaneous measurement of amoxicillin, ampicillin, cloxacillin, piperacillin, cefepime, ceftazidime, cefuroxime, aztreonam and meropenem concentrations in plasma. The chromatographic separation was achieved using an Acquity®-UPLC® BEH™ (2.1 × 100 mm id, 1.7 μm) reverse-phase C18 column, with a water/acetonitrile linear gradient containing 0.1% formic acid at a 0.4 mL/min flow rate. β-Lactam antibiotics and their internal standards were detected by electrospray ionization mass spectrometry in multiple reaction monitoring mode. The chromatography run time was 7.0 min and β-lactam antibiotics eluted at retention times ranging between 1.08 and 1.91 min. The lower limits of quantification were between 0.50 and 1.00 mg/L. Coefficients of variation and relative bias absolute values were <13.3% and <14.7%, respectively. Recovery values ranged from 55.7% to 84.8%. Evaluation of the matrix effect showed ion enhancement for all antibiotics. No interferences or carry-over were observed. Our measurement procedure could be applied to daily clinical laboratory practice to measure the concentration of β-lactam antibiotics in plasma, for instance in patients with bone and joint infections and critically ill patients.
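    The acceptance statistics quoted above (coefficient of variation and relative bias) can be reproduced from replicate measurements; a hedged sketch with invented numbers:

        import numpy as np

        nominal = 10.0                                    # mg/L spiked level (hypothetical)
        measured = np.array([9.1, 9.8, 10.4, 9.5, 10.1])  # replicate results (hypothetical)

        cv = 100 * measured.std(ddof=1) / measured.mean()    # imprecision, %
        bias = 100 * (measured.mean() - nominal) / nominal   # relative bias, %
        print(f"CV = {cv:.1f}%, relative bias = {bias:.1f}%")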

  8. Development and validation of a smartphone-based digits-in-noise hearing test in South African English.

    PubMed

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas

    2015-07-01

    The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) were recorded and spoken by a first-language English female speaker. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. The results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South African English. The mean SRT and speech recognition functions correspond to previously developed telephone-based digits-in-noise tests.
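    A minimal sketch of an adaptive digits-in-noise track of the kind described: the signal-to-noise ratio falls after a correct triplet and rises after an error, and the speech reception threshold is taken as the mean SNR over later trials. The step size, trial count, and simulated listener are illustrative, not the validated protocol.

        import random

        def run_track(true_srt=-10.0, trials=24, step=2.0):
            snr, history = 0.0, []
            for _ in range(trials):
                # Toy listener: correct responses become likely above the true SRT.
                p_correct = 1 / (1 + 10 ** ((true_srt - snr) / 2))
                correct = random.random() < p_correct
                history.append(snr)
                snr += -step if correct else step   # 1-up/1-down rule
            tail = history[4:]                      # discard warm-up trials
            return sum(tail) / len(tail)

        print(f"estimated SRT = {run_track():.1f} dB SNR")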

  9. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  10. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
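    A highly simplified illustration of fault isolation with analytical redundant relations: each relation evaluates to roughly zero when its sensors are healthy, and the pattern of violated relations narrows down the suspects. The sensors, relations, and exoneration rule below are invented for the example, not taken from the paper.

        readings = {"a": 2.8, "b": 3.0, "c": 5.0, "d": 5.0}  # sensor "a" is biased

        # ARRs: c should equal a + b, and d should equal c.
        arrs = {"r1": (lambda s: s["a"] + s["b"] - s["c"], {"a", "b", "c"}),
                "r2": (lambda s: s["c"] - s["d"],          {"c", "d"})}

        violated = {n for n, (f, _) in arrs.items() if abs(f(readings)) > 0.5}
        satisfied = set(arrs) - violated

        # Suspects appear in every violated ARR; sensors in a satisfied ARR
        # are exonerated (a common single-fault assumption).
        suspects = set.intersection(*(arrs[n][1] for n in violated))
        suspects -= set.union(*(arrs[n][1] for n in satisfied))
        print("suspect sensors:", suspects)  # {'a', 'b'}: more ARRs would isolate further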

  11. Validation of the Female Sexual Function Index (FSFI) for web-based administration.

    PubMed

    Crisp, Catrina C; Fellner, Angela N; Pauls, Rachel N

    2015-02-01

    Web-based questionnaires are becoming increasingly valuable for clinical research. The Female Sexual Function Index (FSFI) is the gold standard for evaluating female sexual function; yet, it has not been validated in this format. We sought to validate the Female Sexual Function Index (FSFI) for web-based administration. Subjects enrolled in a web-based research survey of sexual function from the general population were invited to participate in this validation study. The first 151 respondents were included. Validation participants completed the web-based version of the FSFI followed by a mailed paper-based version. Demographic data were collected for all subjects. Scores were compared using the paired t test and the intraclass correlation coefficient. One hundred fifty-one subjects completed both web- and paper-based versions of the FSFI. Those subjects participating in the validation study did not differ in demographics or FSFI scores from the remaining subjects in the general population study. Total web-based and paper-based FSFI scores were not significantly different (mean 20.31 and 20.29 respectively, p = 0.931). The six domains or subscales of the FSFI were similar when comparing web and paper scores. Finally, intraclass correlation analysis revealed a high degree of correlation between total and subscale scores, r = 0.848-0.943, p < 0.001. Web-based administration of the FSFI is a valid alternative to the paper-based version.
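    A sketch of the statistics used in this comparison, on synthetic totals (the ICC form below is the two-way consistency version, ICC(3,1), which may differ from the exact variant the authors used):

        import numpy as np
        from scipy import stats

        web   = np.array([20.1, 25.3, 18.7, 30.2, 22.4])   # hypothetical totals
        paper = np.array([20.5, 24.9, 19.2, 29.8, 22.0])

        t, p = stats.ttest_rel(web, paper)                 # paired t test

        scores = np.column_stack([web, paper])             # subjects x methods
        n, k = scores.shape
        grand = scores.mean()
        ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
        ss_tot = ((scores - grand) ** 2).sum()
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
        print(f"paired t p = {p:.3f}, ICC(3,1) = {icc:.3f}")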

  12. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics.

  13. Validation of an Innovative Satellite-Based UV Dosimeter

    NASA Astrophysics Data System (ADS)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated by exploiting both ground-based measurements and an in-vivo assessment of the erythemal effects on some volunteers having a controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the accuracy and reliability needed for health-related applications. The app with this satellite-based UV dosimeter also includes other related functionalities, such as the provision of safe sun exposure time updated in real time and an end-of-exposure visual/sound alert. This app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun", available both for Android and for iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are ongoing for further improvement of the satellite-based UV dosimeter's accuracy.

  14. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  15. Validation of a Career Information Retrieval Procedure.

    ERIC Educational Resources Information Center

    Kenoyer, Charles E.

    Research examined the relative desirability of categories of information supplied by a keysort search procedure, as judged by users. The Career Information System was used as the framework for obtaining occupational information organized around the concept of the Worker Trait Group (WTG). Four groups of students in each of grades 9 through 12…

  16. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (IIT-A-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures for the initial and periodic verification and validation of computer programs. The programs are used during the Arizona NHEXAS project and Border study at the Illinois Institute of Technology (IIT) site. Keywords: computers; s...

  17. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    The number of batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
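    The paper's specific methodology is not reproduced in the abstract; as a hedged illustration of the underlying statistical principle, the standard success-run relation n = ln(1 - confidence) / ln(reliability) gives the number of consecutive conforming runs needed to claim a reliability level with a given confidence:

        import math

        def runs_required(confidence=0.95, reliability=0.90):
            return math.ceil(math.log(1 - confidence) / math.log(reliability))

        for r in (0.80, 0.90, 0.95):
            # Higher reliability claims demand more consecutive passing runs.
            print(f"reliability {r:.0%}: {runs_required(reliability=r)} runs")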

  18. Operating Policies and Procedures of Computer Data-Base Systems.

    ERIC Educational Resources Information Center

    Anderson, David O.

    Speaking on the operating policies and procedures of computer data bases containing information on students, the author divides his remarks into three parts: content decisions, data base security, and user access. He offers nine recommended practices that should increase the data base's usefulness to the user community: (1) the cost of developing…

  19. Development and Validation of the Meaning of Work Inventory among French Workers

    ERIC Educational Resources Information Center

    Arnoux-Nicolas, Caroline; Sovet, Laurent; Lhotellier, Lin; Bernaud, Jean-Luc

    2017-01-01

    The purpose of this study was to validate a psychometric instrument among French workers for assessing the meaning of work. Following an empirical framework, a two-step procedure consisted of exploring and then validating the scale among distinctive samples. The consequent Meaning of Work Inventory is a 15-item scale based on a four-factor model,…

  20. Using Ground-Based Measurements and Retrievals to Validate Satellite Data

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan

    2002-01-01

    The proposed research is to use the DOE ARM ground-based measurements and retrievals as the ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort proceeds in four different ways: (1) cloud properties from different satellites, and therefore different sensors (TRMM VIRS and TERRA MODIS); (2) cloud properties at different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, i.e., low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, once carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Even though the validation effort is difficult, significant progress has been made during the proposed study period, and the major accomplishments are summarized in the following.

  1. Mixture-based gatekeeping procedures in adaptive clinical trials.

    PubMed

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
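    A hedged sketch of one common combination function, the inverse-normal rule: stage-wise p-values for a hypothesis are merged with prespecified weights, so the final test stays valid after a data-driven interim adaptation. The weights and p-values are illustrative, and the paper's mixture-based gatekeeping layer is not reproduced here.

        from scipy.stats import norm

        def inverse_normal(p1, p2, w1=0.5, w2=0.5):
            # w1 + w2 = 1; norm.isf maps a p-value to its z-score.
            z = (w1 ** 0.5) * norm.isf(p1) + (w2 ** 0.5) * norm.isf(p2)
            return norm.sf(z)                  # combined p-value

        print(f"combined p = {inverse_normal(0.04, 0.03):.4f}")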

  2. Designing and validation of a yoga-based intervention for schizophrenia.

    PubMed

    Govindaraj, Ramajayam; Varambally, Shivarama; Sharma, Manjunath; Gangadhar, Bangalore Nanjundaiah

    2016-06-01

    Schizophrenia is a chronic mental illness which causes significant distress and dysfunction. Yoga has been found to be effective as an add-on therapy in schizophrenia. Modules of yoga used in previous studies were based on individual researcher's experience. This study aimed to develop and validate a specific generic yoga-based intervention module for patients with schizophrenia. The study was conducted at NIMHANS Integrated Centre for Yoga (NICY). A yoga module was designed based on traditional and contemporary yoga literature as well as published studies. The yoga module along with three case vignettes of adult patients with schizophrenia was sent to 10 yoga experts for their validation. Experts (n = 10) gave their opinion on the usefulness of a yoga module for patients with schizophrenia with some modifications. In total, 87% (13 of 15 items) of the items in the initial module were retained, with modification in the remainder as suggested by the experts. A specific yoga-based module for schizophrenia was designed and validated by experts. Further studies are needed to confirm efficacy and clinical utility of the module. Additional clinical validation is suggested.

  3. Validity and Practitality of Acid-Base Module Based on Guided Discovery Learning for Senior High School

    NASA Astrophysics Data System (ADS)

    Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.

    2018-04-01

    This Research and Development (R&D) study aims to produce a guided-discovery-learning-based module on the topic of acid-base chemistry and to determine its validity and practicality in learning. Module development used the Four-D (4-D) model (define, design, develop, and disseminate); this research was carried out through the development stage. The research instruments were validity and practicality questionnaires. The module was validated by five experts (three chemistry lecturers of Universitas Negeri Padang and two chemistry teachers of SMAN 9 Padang). The practicality test was done by two chemistry teachers and 30 students of SMAN 9 Padang. Cohen's kappa was used to analyze validity and practicality. The average kappa was 0.86 for validity, and 0.85 (teachers) and 0.76 (students) for practicality, all in the high category. It can be concluded that the module's validity and practicality were established for high school chemistry learning.
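    Agreement statistics of this kind reduce to Cohen's kappa; a minimal sketch with invented rater labels (the study's exact aggregation across questionnaire items is not reproduced):

        from collections import Counter

        rater1 = ["valid", "valid", "valid", "invalid", "valid"]
        rater2 = ["valid", "valid", "invalid", "invalid", "valid"]

        n = len(rater1)
        p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement
        c1, c2 = Counter(rater1), Counter(rater2)
        p_exp = sum(c1[k] * c2[k] for k in c1) / n ** 2           # chance agreement
        kappa = (p_obs - p_exp) / (1 - p_exp)
        print(f"kappa = {kappa:.2f}")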

  4. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  5. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (UA-D-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the Border study. Keywords: Computers; Software; QA/QC.

    The U.S.-Mexico Border Program is sponsored ...

  6. Determining the Scoring Validity of a Co-Constructed CEFR-Based Rating Scale

    ERIC Educational Resources Information Center

    Deygers, Bart; Van Gorp, Koen

    2015-01-01

    Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…

  7. Latency-Based and Psychophysiological Measures of Sexual Interest Show Convergent and Concurrent Validity.

    PubMed

    Ó Ciardha, Caoilte; Attard-Johnson, Janice; Bindemann, Markus

    2018-04-01

    Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest.

  8. Bio-Oil Analysis Laboratory Procedures | Bioenergy | NREL

    Science.gov Websites

    NREL develops standard laboratory procedures for bio-oil analysis. These procedures have been validated and allow for reliable bio-oil analysis, including determination of the different hydroxyl groups (-OH) in pyrolysis bio-oil: aliphatic-OH, phenolic-OH, and carboxylic-OH.

  9. Radiofrequency Procedures to Relieve Chronic Knee Pain: An Evidence-Based Narrative Review.

    PubMed

    Bhatia, Anuj; Peng, Philip; Cohen, Steven P

    2016-01-01

    Chronic knee pain from osteoarthritis or following arthroplasty is a common problem. A number of publications have reported analgesic success of radiofrequency (RF) procedures on nerves innervating the knee, but interpretation is hampered by lack of clarity regarding indications, clinical protocols, targets, and longevity of benefit from RF procedures. We reviewed the following medical literature databases for publications on RF procedures on the knee joint for chronic pain: MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Google Scholar up to August 9, 2015. Data on scores for pain, validated scores for measuring physical disability, and adverse effects measured at any timepoint after 1 month following the interventions were collected, analyzed, and reported in this narrative review. Thirteen publications on ablative or pulsed RF treatments of innervation of the knee joint were identified. A high success rate of these procedures in relieving chronic pain of the knee joint was reported at 1 to 12 months after the procedures, but only 2 of the publications were randomized controlled trials. There was evidence for improvement in function and a lack of serious adverse events of RF treatments. Radiofrequency treatments on the knee joint (major or periarticular nerve supply or intra-articular branches) have the potential to reduce pain from osteoarthritis or persistent postarthroplasty pain. Ongoing concerns regarding the quality, procedural aspects, and monitoring of outcomes in publications on this topic remain. Randomized controlled trials of high methodological quality are required to further elaborate role of these interventions in this population.

  10. Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.

    PubMed

    Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G

    2012-04-01

    The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box, diagonally crossed by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases, there was no statistically significant variation between the two different procedures and the LV procedure, which can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT and MRI scanner.
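    A sketch of the FWHM computation such a routine performs, run on a synthetic Gaussian profile whose true width is known (the LabView implementation itself is not reproduced):

        import numpy as np

        x = np.linspace(-20, 20, 801)                # position, mm
        sigma = 3.0
        profile = np.exp(-x**2 / (2 * sigma**2))     # synthetic slice profile

        half = profile.max() / 2
        above = np.where(profile >= half)[0]
        i, j = above[0], above[-1]
        # Linear interpolation at the rising and falling half-maximum crossings.
        left  = np.interp(half, [profile[i-1], profile[i]], [x[i-1], x[i]])
        right = np.interp(half, [profile[j+1], profile[j]], [x[j+1], x[j]])
        print(f"FWHM = {right - left:.2f} mm (theory {2.3548 * sigma:.2f} mm)")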

  11. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    Existing instruments that evaluate students' perceptions of simulation-based training are English-language versions that have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, the initial item pool was comprised of 50 items related to simulation that were drawn from the literature of core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate = 90%). Six items were deleted from the initial item pool and one was added after an expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors which accounted for 67% of the variance. The construct validity of the SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure. The findings satisfy the criteria of convergent and discriminant validity. The range of internal consistency for the five subscales was .90 to .93. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula.
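    A sketch of the internal-consistency statistic reported above, computed on a synthetic item-response matrix (rows are students, columns are 1-5 Likert items):

        import numpy as np

        items = np.array([[4, 5, 4],
                          [3, 4, 3],
                          [5, 5, 4],
                          [2, 3, 2],
                          [4, 4, 5]])
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
        alpha = k / (k - 1) * (1 - item_var / total_var)
        print(f"Cronbach's alpha = {alpha:.2f}")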

  12. Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.

    PubMed

    Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David

    2012-01-01

    Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  13. Procedural validity of the AUDADIS-5 depression, anxiety and post-traumatic stress disorder modules: substance abusers and others in the general population

    PubMed Central

    Hasin, Deborah S.; Shmulewitz, Dvora; Stohl, Malka; Greenstein, Eliana; Aivadyan, Christina; Morita, Kara; Saha, Tulshi; Aharonovich, Efrat; Jung, Jeesun; Zhang, Haitao; Nunes, Edward V.; Grant, Bridget F.

    2016-01-01

    Background: Little is known about the procedural validity of lay-administered, fully-structured assessments of depressive, anxiety and post-traumatic stress (PTSD) disorders in the general population as determined by comparison to clinical re-appraisal, and whether this differs between current regular substance abusers and others. We evaluated the procedural validity of the Alcohol Use Disorder and Associated Disabilities Interview Schedule, DSM-5 Version (AUDADIS-5) assessment of these disorders through clinician re-interviews. Methods: Test-retest design among respondents from the National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III): 264 current regular substance abusers and 447 others. Clinicians blinded to AUDADIS-5 results administered the semi-structured Psychiatric Research Interview for Substance and Mental Disorders, DSM-5 version (PRISM-5). AUDADIS-5/PRISM-5 concordance was indicated by kappa (κ) for diagnoses and intraclass correlation coefficients (ICC) for dimensional measures (DSM-5 symptom or criterion counts). Results were compared between current regular substance abusers and others. Results: AUDADIS-5 and PRISM-5 concordance for DSM-5 depressive disorders, anxiety disorders and PTSD was generally fair to moderate (κ = 0.24-0.59), with concordance on dimensional scales much better (ICC = 0.53-0.81). Concordance differed little between regular substance abusers and others. Conclusions: AUDADIS-5/PRISM-5 concordance indicated procedural validity for the AUDADIS-5 among substance abusers and others, suggesting that AUDADIS-5 diagnoses of DSM-5 depressive, anxiety and PTSD disorders are informative measures in both groups in epidemiologic studies. The stronger concordance on dimensional measures supports the current movement towards dimensional psychopathology measures, suggesting that such measures provide important information for research in the NESARC-III and other datasets, and possibly for clinical purposes as well.

  14. Interlaboratory study for the validation of an ecotoxicological procedure to monitor the quality of septic sludge received at a wastewater treatment plant.

    PubMed

    Robidoux, P Y; Choucri, A; Bastien, C; Sunahara, G I; López-Gastey, J

    2001-01-01

    Septic tank sludge is regularly hauled to the Montreal Urban Community (MUC) wastewater treatment plant. It is then discharged and mixed with the wastewater inflow before entering the primary chemical treatment process. An ecotoxicological procedure integrating chemical and toxicological analyses has been recently developed and applied to screen for the illicit discharge of toxic substances in septic sludge. The toxicity tests used were the Microtox, the bacterial-respiration, and the lettuce (Lactuca sativa) root elongation tests. In order to validate the applicability of the proposed procedure, a two-year interlaboratory study was carried out. In general, the results obtained by two independent laboratories (MUC and the Centre d'expertise en analyse environnementale du Quebec) were comparable and reproducible. Some differences were found using the Microtox test. Organic (e.g., phenol and formaldehyde) and inorganic (e.g., nickel and cyanide) spiked septic sludge were detected with good reliability and high efficiency. The relative efficiency to detect spiked substances was > 70% and confirms the results of previous studies. In addition, the respiration test was the most efficient toxicological tool to detect spiked substances, whereas the Microtox was the least efficient (< 15%). Efficiencies to detect spiked contaminants were also similar for both laboratories. These results support previous data presented earlier and contribute to the validation of the ecotoxicological procedure used by the MUC to screen toxicity in septic sludge.

  15. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
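    The optimization step such a calibration implies can be sketched generically with least squares: adjust geometric parameters until predicted sensor readings match measured ones. The gap model below is a stand-in, not the platform's actual geometry, and all values are invented.

        import numpy as np
        from scipy.optimize import least_squares

        measured = np.array([1.02, 0.98, 1.05, 0.97, 1.01, 1.03])  # toy gaps, mm

        def residuals(params, measured):
            offset, scale = params
            positions = np.arange(len(measured))     # sensor index as a stand-in
            predicted = offset + scale * positions   # placeholder geometric model
            return predicted - measured

        fit = least_squares(residuals, x0=[1.0, 0.0], args=(measured,))
        print("optimized parameters:", fit.x)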

  16. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs and improving efficiency and to decrease human error rates of plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system and specifically CBPs would have over current paper-based procedure practices. There are several classifications of CBPs, ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent dynamic CBP. A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system could display only relevant steps based on operating mode, plant status, and the task at hand. The improvements can lead to reduction of the worker's workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the…

  17. Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.

    2013-01-01

    Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures when used in a voice communications environment. The experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity fixed-base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Approach Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute.
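    As a rough intuition for how an interval management speed command can be derived (this is a toy illustration, not NASA's actual spacing algorithm), the spacing error at the achieve-by point can be mapped to a bounded correction on the profile speed:

```python
# Toy spacing-error-to-speed-command mapping; gain and limit are invented.
def fim_speed_command(predicted_spacing_s, assigned_spacing_s,
                      profile_speed_kt, gain_kt_per_s=1.0, limit_kt=15.0):
    # Positive error: the trailing aircraft is predicted too far behind.
    error_s = predicted_spacing_s - assigned_spacing_s
    correction = max(-limit_kt, min(limit_kt, gain_kt_per_s * error_s))
    return profile_speed_kt + correction

# Predicted 98 s behind against a 90 s assigned interval: speed up.
print(fim_speed_command(98.0, 90.0, 250.0))   # -> 258.0 kt
```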

  18. Numerical Procedures for Inlet/Diffuser/Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.

    1998-01-01

    Two primitive-variable, pressure-based, flux-split RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. A considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as shown herein.
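    The efficiency argument for the second method (factor once at the inflow cross-plane, then only back-substitute while marching) can be illustrated with a generic linear system; the matrix below is a random stand-in, not the actual linearized RNS operator:

```python
# Sketch: one LU factorization, reused as cheap back-substitution while
# marching through downstream cross-planes.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 200
A = np.random.rand(n, n) + n * np.eye(n)  # stand-in cross-plane operator
lu, piv = lu_factor(A)                    # single expensive factorization

rhs = np.random.rand(n)                   # residual vector at inflow plane
planes = []
for _ in range(50):                       # march downstream
    x = lu_solve((lu, piv), rhs)          # back-substitution only
    planes.append(x)
    rhs = np.random.rand(n)               # residual at next cross-plane
```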

  19. Supporting the future nuclear workforce with computer-based procedures

    DOE PAGES

    Oxstrand, Johanna; Le Blanc, Katya

    2016-05-01

    Here we see that computer-based tools have dramatically increased the ease and efficiency of everyday tasks. Gone are the days of paging through a paper catalog, transcribing product numbers, and calculating totals. Today, a consumer can find a product online with a simple search engine, and then purchase it in a matter of a few clicks. Paper catalogs have their place, but it is hard to imagine life without online shopping sites. All tasks conducted in a nuclear power plant are guided by procedures, which helps ensure safe and reliable operation of the plants. One prominent goal of the nuclear industry is to minimize the risk of human error. To achieve this goal one has to ensure that tasks are correctly and consistently executed. This is partly achieved by training and by a structured approach to task execution, which is provided by procedures and work instructions. Procedures are used in the nuclear industry to direct workers' actions in a proper sequence. The governing idea is to minimize reliance on memory and on choices made in the field. However, the procedure document may not contain sufficient information to successfully complete the task. Therefore, the worker might have to carry additional documents, such as turnover sheets, operating experience, drawings, and other procedures, to the work site. The nuclear industry still operates with paper procedures, much like the paper catalogs of the past. A field worker may carry to the field a large stack of documents needed to complete a task. Even though the paper process has helped keep the industry safe for decades, there are limitations to using paper. Paper procedures are static (i.e., the content does not change after the document is printed), difficult to search, and rely heavily on the field worker's situational awareness and ability to consistently meet the high expectation of human performance excellence. With computer-based procedures (CBPs) that stack of papers may be reduced to the size of a small tablet or even

  20. Supporting the future nuclear workforce with computer-based procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya

    Here we see that computer-based tools have dramatically increased the ease and efficiency of everyday tasks. Gone are the days of paging through a paper catalog, transcribing product numbers, and calculating totals. Today, a consumer can find a product online with a simple search engine, and then purchase it in a matter of a few clicks. Paper catalogs have their place, but it is hard to imagine life without online shopping sites. All tasks conducted in a nuclear power plant are guided by procedures, which helps ensure safe and reliable operation of the plants. One prominent goal of the nuclear industry is to minimize the risk of human error. To achieve this goal one has to ensure that tasks are correctly and consistently executed. This is partly achieved by training and by a structured approach to task execution, which is provided by procedures and work instructions. Procedures are used in the nuclear industry to direct workers' actions in a proper sequence. The governing idea is to minimize reliance on memory and on choices made in the field. However, the procedure document may not contain sufficient information to successfully complete the task. Therefore, the worker might have to carry additional documents, such as turnover sheets, operating experience, drawings, and other procedures, to the work site. The nuclear industry still operates with paper procedures, much like the paper catalogs of the past. A field worker may carry to the field a large stack of documents needed to complete a task. Even though the paper process has helped keep the industry safe for decades, there are limitations to using paper. Paper procedures are static (i.e., the content does not change after the document is printed), difficult to search, and rely heavily on the field worker's situational awareness and ability to consistently meet the high expectation of human performance excellence. With computer-based procedures (CBPs) that stack of papers may be reduced to the size of a small tablet or even

  1. Validating the applicability of the GUM procedure

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Harris, Peter M.

    2014-08-01

    This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark against which measurement results provided by the GUM procedure can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or that have been linearized where the linearization process is deemed adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches, such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials, as benchmarks.
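    The Monte Carlo idea of GUM Supplement 1 is easy to illustrate: draw from the input distributions, propagate through the model, and compare the resulting standard uncertainty with the first-order GUM formula. A minimal example for the model Y = X1 * X2, with distributions chosen arbitrarily here:

```python
# Monte Carlo propagation vs. first-order GUM propagation for Y = X1*X2.
import numpy as np

rng = np.random.default_rng(1)
M = 1_000_000
x1 = rng.normal(10.0, 0.1, M)      # X1 ~ N(10, 0.1^2)
x2 = rng.uniform(1.9, 2.1, M)      # X2 ~ U(1.9, 2.1)

u_mc = (x1 * x2).std(ddof=1)       # Monte Carlo standard uncertainty

# GUM law of propagation: u^2(y) = (c1*u1)^2 + (c2*u2)^2 with c1 = E[X2],
# c2 = E[X1]; the standard uncertainty of U(a-h, a+h) is h/sqrt(3).
u1, u2 = 0.1, 0.1 / np.sqrt(3)
u_gum = np.hypot(2.0 * u1, 10.0 * u2)
print(round(u_mc, 4), round(u_gum, 4))   # agree to ~1 significant digit
```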

  2. Recommendations for standardizing validation procedures assessing physical activity of older persons by monitoring body postures and movements.

    PubMed

    Lindemann, Ulrich; Zijlstra, Wiebren; Aminian, Kamiar; Chastin, Sebastien F M; de Bruin, Eling D; Helbostad, Jorunn L; Bussmann, Johannes B J

    2014-01-10

    Physical activity is an important determinant of health and well-being in older persons and contributes to their social participation and quality of life. Hence, assessment tools are needed to study this physical activity in free-living conditions. Wearable motion sensing technology is used to assess physical activity. However, there is a lack of harmonisation of validation protocols and applied statistics, which makes it hard to compare available and future studies. Therefore, the aim of this paper is to formulate recommendations for assessing the validity of sensor-based activity monitoring in older persons, with a focus on the measurement of body postures and movements. Validation studies of body-worn devices providing parameters on body postures and movements were identified and summarized, and an extensive interactive process between the authors resulted in recommendations about the information on the assessed persons, the technical system, and the analysis of relevant parameters of physical activity, based on a standardized and semi-structured protocol. The recommended protocols can be regarded as a first attempt to standardize validity studies in the area of monitoring physical activity.

  3. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
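    The selection principle is simple to sketch: score every candidate on each incoming observation before training on it (a prequential, test-then-train loop), and issue predictions with the currently best-scoring candidate. The candidates below are simple exponential smoothers, standing in for the paper's much richer library:

```python
# Schematic online cross-validation selector over a candidate library.
import numpy as np

class Smoother:
    def __init__(self, alpha):
        self.alpha, self.state = alpha, 0.0
    def predict(self):
        return self.state
    def update(self, y):
        self.state = self.alpha * y + (1 - self.alpha) * self.state

alphas = (0.1, 0.3, 0.9)
candidates = [Smoother(a) for a in alphas]
cum_loss = np.zeros(len(candidates))

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
for y in series:
    best = int(np.argmin(cum_loss))           # CV-selected algorithm
    stream_prediction = candidates[best].predict()
    for i, c in enumerate(candidates):        # online cross-validation:
        cum_loss[i] += (y - c.predict())**2   # score before updating
        c.update(y)
print("selected alpha:", alphas[int(np.argmin(cum_loss))])
```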

  4. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419

  5. Perspectives on procedure-based assessments: a thematic analysis of semistructured interviews with 10 UK surgical trainees.

    PubMed

    Shalhoub, Joseph; Marshall, Dominic C; Ippolito, Kate

    2017-03-24

    The introduction of competency-based training has necessitated the development and implementation of accompanying mechanisms for assessment. Procedure-based assessments (PBAs) are an example of workplace-based assessments that are used to examine focal competencies in the workplace. The primary objective was to understand surgical trainees' perspectives on the value of PBAs. Ten surgical trainees were individually interviewed in semistructured interviews to explore their views. Interviews were audio-recorded and transcribed; the transcripts were then open- and axial-coded, and thematic analysis was performed. The semistructured interviews yielded several topical and recurring themes. In the trainees' experience, the use of PBAs as a summative tool limits their educational value. Trainees reported a lack of support from seniors and variation in the usefulness of the tool depending on stage of training. Concerns were raised about the validity of PBAs for evaluating trainees' performance, with reports of 'gaming' the system and trainees completing their own assessments. Trainees did identify significant value in PBAs when used correctly. Benefits included the identification of additional learning opportunities, standardisation of assessment, and their role in providing a measure of progress. The UK surgical trainees interviewed identified both limitations and benefits of PBAs; however, we would argue, based on their responses and our experience, that their use as a summative tool limits their formative use as an educational opportunity. PBAs should either be used exclusively to support learning or solely as a summative tool; if the latter, further work is needed to audit, validate and standardise them for this purpose. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. Validity Issues in Clinical Assessment.

    ERIC Educational Resources Information Center

    Foster, Sharon L.; Cone, John D.

    1995-01-01

    Validation issues that arise with measures of constructs and behavior are addressed with reference to general reasons for using assessment procedures in clinical psychology. A distinction is made between the representational phase of validity assessment and the elaborative validity phase in which the meaning and utility of scores are examined.…

  7. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that a computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition, and continues through application and maintenance until system retirement and the retention of the e-records based on regulatory rules. The objective is to clearly specify that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, meeting the product's predetermined attributes of specifications, quality, safety and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for CSV remains on the shoulders of the business process owner, the sponsor. In order to show that compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.

  8. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  9. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  10. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  11. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  12. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  13. QuEChERS-based extraction procedure for multifamily analysis of phytohormones in vegetables by UHPLC-MS/MS.

    PubMed

    Flores, María Isabel Alarcón; Romero-González, Roberto; Frenich, Antonia Garrido; Vidal, José Luis Martínez

    2011-07-01

    A new method has been developed and validated for the simultaneous analysis of different phytohormones (auxins, cytokinins and gibberellins) in vegetables. The compounds were extracted using a QuEChERS-based method (acronym of quick, easy, cheap, effective, rugged and safe). The separation and determination of the selected phytohormones were carried out by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS), using an electrospray ionization (ESI) source in positive and negative ion modes. The method was validated and mean recoveries were evaluated at three concentration levels (50, 100 and 250 μg/kg), ranging from 75 to 110% at the three levels assayed. Intra- and interday precisions, expressed as relative standard deviations (RSDs), were lower than 20 and 25%, respectively. Limits of quantification (LOQs) were equal to or lower than 10 μg/kg. The developed procedure was applied to seven courgette samples, and naphthylacetic acid, naphthylacetamide and benzyladenine were found in some of the analysed samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
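    The validation arithmetic reported above (mean recovery at a spike level, intra-day precision as relative standard deviation) is straightforward; the numbers below are illustrative, not the paper's data:

```python
# Recovery (%) and %RSD for one spike level of one analyte.
import numpy as np

spiked_at = 100.0                                       # ug/kg spike level
measured = np.array([92.1, 97.4, 88.9, 95.0, 101.2, 90.3])

recovery_pct = measured.mean() / spiked_at * 100        # mean recovery
rsd_pct = measured.std(ddof=1) / measured.mean() * 100  # intra-day RSD
print(f"recovery = {recovery_pct:.1f}%, RSD = {rsd_pct:.1f}%")
```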

  14. Performance of the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2016-01-01

    The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure will help us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend…

  15. Temporal validation for landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  16. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called "Conditional Analysis method" holds a special place for its conceptual simplicity. In this method, landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected with the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows assessment of the reliability of the resulting model, while the simple mean deviation of the density values in the factor-class combinations helps evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines, considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation, and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
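    Since the method reduces to computing landslide density per factor-class combination, its core can be sketched in a few lines (toy rasters below; the actual implementation is a GRASS GIS shell script):

```python
# Conditional Analysis core: density per factor-class combination,
# then five susceptibility classes from density quantiles.
import numpy as np

rng = np.random.default_rng(3)
lith = rng.integers(0, 3, (200, 200))     # lithology class raster
slope = rng.integers(0, 4, (200, 200))    # slope-angle class raster
slides = rng.random((200, 200)) < 0.02    # landslide inventory mask

combo = lith * 4 + slope                  # unique class combination per cell
density = np.array([slides[combo == c].mean() for c in range(12)])

# Map each cell's combination density back to the raster, classify 0-4.
bins = np.quantile(density, [0.2, 0.4, 0.6, 0.8])
susceptibility = np.digitize(density[combo], bins)
```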

  17. Validation of Procedures for Monitoring Crewmember Immune Function SDBI-1900, SMO-015 - Integrated Immune

    NASA Technical Reports Server (NTRS)

    Crucian, Brian; Stowe, Raymond; Mehta, Satish; Uchakin, Peter; Nehlsen-Cannarella, Sandra; Morukov, Boris; Pierson, Duane; Sams, Clarence

    2007-01-01

    There is ample evidence to suggest that space flight leads to immune system dysregulation. This may be a result of microgravity, confinement, physiological stress, radiation, environment or other mission-associated factors. The clinical risks from prolonged immune dysregulation during space flight are not yet determined, but may include increased incidence of infection, allergy, hypersensitivity, hematological malignancy or altered wound healing. Each of the clinical events resulting from immune dysfunction has the potential to impact mission-critical objectives during exploration-class missions. To date, precious little in-flight immune data have been generated to assess this phenomenon. The majority of recent flight immune studies have been post-flight assessments, which may not accurately reflect the in-flight condition. There are no procedures currently in place to monitor immune function or its effect on crew health. The objective of this Supplemental Medical Objective (SMO) is to develop and validate an immune monitoring strategy consistent with operational flight requirements and constraints. This SMO will assess the clinical risks resulting from the adverse effects of space flight on the human immune system and will validate a flight-compatible immune monitoring strategy. Characterization of the clinical risk and the development of a monitoring strategy are necessary prerequisite activities prior to validating countermeasures. This study will determine, to the best level allowed by current technology, the in-flight status of crewmembers' immune systems. Pre-flight, in-flight and post-flight assessments of immune status, immune function, viral reactivation and physiological stress will be performed. The in-flight samples will allow a distinction between legitimate in-flight alterations and the physiological stresses of landing and readaptation, which are believed to alter landing-day assessments. The overall status of the immune system during flight (activation

  18. Colonoscopy procedure simulation: virtual reality training based on a real time computational approach.

    PubMed

    Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang

    2018-01-25

    Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource-intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training in the colonoscopy procedure, which can even include polypectomy simulation, is presented. Our approach models the colonoscope as a thick, flexible elastic rod with different resolutions that adapt dynamically to the curvature of the colon. Additional material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We also propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees, and we conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.

  19. Validity of Administrative Data in Identifying Cancer-related Events in Adolescents and Young Adults: A Population-based Study Using the IMPACT Cohort.

    PubMed

    Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D

    2018-06-01

    Despite the importance of estimating population-level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults aged 15-21 years diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992 and 2012. Chart abstraction determined the end-of-initial-treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart-collected EOIT dates were associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
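    The four validity statistics quoted above come from an ordinary 2x2 agreement table between the algorithm and the chart-abstracted reference; with invented counts:

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
tp, fp, fn, tn = 87, 20, 13, 280   # illustrative counts, not study data

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)               # positive predictive value
npv = tn / (tn + fn)               # negative predictive value
print(f"sens {sensitivity:.1%}  spec {specificity:.1%}  "
      f"PPV {ppv:.1%}  NPV {npv:.1%}")
```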

  20. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBSs using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS of the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  1. Validation of a novel laparoscopic adjustable gastric band simulator.

    PubMed

    Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu

    2011-04-01

    Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator and were scored automatically on various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). The mean score for the utility of the simulator, as assessed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.
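    Construct validity in studies of this kind usually rests on showing that experts outscore novices; a rank-based test is a common choice for such scores. A sketch with invented data (the study's own analysis may differ):

```python
# Expert-vs-novice comparison of total simulator scores.
from scipy.stats import mannwhitneyu

experts = [88, 92, 85, 90, 95, 87, 91]   # illustrative total scores
novices = [70, 65, 74, 68, 72, 60, 66]

stat, p = mannwhitneyu(experts, novices, alternative="greater")
print(f"U = {stat}, p = {p:.4f}")        # small p supports construct validity
```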

  2. A National Needs Assessment to Identify Technical Procedures in Vascular Surgery for Simulation Based Training.

    PubMed

    Nayahangan, L J; Konge, L; Schroeder, T V; Paltved, C; Lindorff-Larsen, K G; Nielsen, B U; Eiberg, J P

    2017-04-01

    Practical skills training in vascular surgery is facing challenges because of an increased number of endovascular procedures and fewer open procedures, as well as a move away from the traditional principle of "learning by doing." This change has established simulation as a cornerstone in providing trainees with the necessary skills and competences. However, the development of simulation based programs often evolves based on available resources and equipment, reflecting convenience rather than a systematic educational plan. The objective of the present study was to perform a national needs assessment to identify the technical procedures that should be integrated in a simulation based curriculum. A national needs assessment using a Delphi process was initiated by engaging 33 predefined key persons in vascular surgery. Round 1 was a brainstorming phase to identify technical procedures that vascular surgeons should learn. Round 2 was a survey that used a needs assessment formula to explore the frequency of procedures, the number of surgeons performing each procedure, risk and/or discomfort, and feasibility for simulation based training. Round 3 involved elimination and ranking of procedures. The response rate for round 1 was 70%, with 36 procedures identified. Round 2 had a 76% response rate and resulted in a preliminary prioritised list after exploring the need for simulation based training. Round 3 had an 85% response rate; 17 procedures were eliminated, resulting in a final prioritised list of 19 technical procedures. A national needs assessment using a standardised Delphi method identified a list of procedures that are highly suitable and may provide the basis for future simulation based training programs for vascular surgeons in training. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  3. Age validation of canary rockfish (Sebastes pinniger) using two independent otolith techniques: lead-radium and bomb radiocarbon dating.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, A H; Kerr, L A; Cailliet, G M

    2007-11-04

    Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (14C) dating. Age estimate accuracy and the validity of age estimation procedures were evaluated based on the results from each technique. Lead-radium dating proved successful in determining that a minimum estimate of lifespan was 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ14C data, which indicated a minimum estimate of lifespan of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.
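    The lead-radium clock can be sketched under strong simplifying assumptions (radium-226 activity effectively constant over the fish's life, no initial lead-210 in the otolith core; the published method also corrects for initial Pb-210, which is omitted here): the activity ratio grows as A_Pb/A_Ra = 1 - exp(-lambda*t) and can be inverted for age.

```python
# Hedged sketch of lead-radium age estimation from an activity ratio.
import math

HALF_LIFE_PB210 = 22.3                 # years
lam = math.log(2) / HALF_LIFE_PB210    # decay constant of Pb-210

def age_from_ratio(activity_ratio):    # A(Pb-210)/A(Ra-226) in the core
    return -math.log(1.0 - activity_ratio) / lam

print(round(age_from_ratio(0.80), 1))  # ratio 0.80 -> about 51.8 years
```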

  4. Procedural Error and Task Interruption

    DTIC Science & Technology

    2016-09-30

    …developed for research on errors and individual differences. Results indicate predictive validity for fluid intelligence and specific forms of work… It generates rich data on several kinds of errors, including procedural errors in which steps are skipped or repeated. Keywords: procedural error, task interruption, individual differences, fluid intelligence, sleep deprivation.

  5. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...

  6. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...

  7. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...

  8. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a) The Administrator will periodically validate showings submitted under § 456.654. Validation procedures...

  9. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and the results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then the results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, the results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no
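    The inverse step mentioned above can be illustrated under a deliberately crude assumption: a semi-infinite soil at uniform initial temperature whose surface is suddenly held at a new temperature, so T(x,t) = T0 + (Ts - T0) erfc(x / (2 sqrt(alpha t))), with the soil thermal diffusivity alpha fitted to sensor data. This is a sketch of the idea only, not the project's model:

```python
# Fit soil thermal diffusivity to (synthetic) buried-sensor temperatures.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

T0, Ts, depth = 15.0, 45.0, 0.5             # degC, degC, metres
t = np.linspace(1, 30, 60) * 86400.0        # 1 to 30 days, in seconds

def model(t, alpha):                        # half-space step response
    return T0 + (Ts - T0) * erfc(depth / (2.0 * np.sqrt(alpha * t)))

true_alpha = 5e-7                           # m^2/s, typical moist soil
data = model(t, true_alpha) + np.random.default_rng(4).normal(0, 0.2, t.size)
(alpha_hat,), _ = curve_fit(model, t, data, p0=[1e-7])
print(f"estimated alpha = {alpha_hat:.2e} m^2/s")
```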

  10. Interpreting Variance Components as Evidence for Reliability and Validity.

    ERIC Educational Resources Information Center

    Kane, Michael T.

    The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…

  11. Reliability and validity of the Japanese version of the Organizational Justice Questionnaire.

    PubMed

    Inoue, Akiomi; Kawakami, Norito; Tsutsumi, Akizumi; Shimazu, Akihito; Tsuchiya, Masao; Ishizaki, Masao; Tabata, Masaji; Akiyama, Miki; Kitazume, Akiko; Kuroda, Mitsuyo; Kivimäki, Mika

    2009-01-01

    Previous European studies reporting that low procedural justice and low interactional justice were associated with increased health problems used a modified version of Moorman's Organizational Justice Questionnaire (OJQ; Elovainio et al., 2002) to assess organizational justice. We translated the modified OJQ into Japanese and examined the internal consistency reliability and the factor-based and construct validity of this measure. A back-translation procedure confirmed that the translation was appropriate, pending a minor revision. A total of 185 men and 58 women at a manufacturing factory in Japan were surveyed using a mailed questionnaire including the OJQ and other job stressor measures. Cronbach alpha coefficients of the two OJQ subscales were high (0.85-0.94) for both sexes. The hypothesized two factors (i.e., procedural justice and interactional justice) were extracted by factor analysis for the men; for the women, procedural justice was further split into two separate dimensions, supporting a three- rather than two-factor structure. Convergent validity was supported by expected correlations of the OJQ with job control, supervisor support, effort-reward imbalance, and job future ambiguity, in particular among the men. The present study shows that the Japanese version of the OJQ has acceptable levels of reliability and validity, at least for male employees.
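    For reference, the internal-consistency statistic used here is Cronbach's alpha, alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)). A small self-contained computation with simulated 7-item responses:

```python
# Cronbach's alpha for a respondents-by-items matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(200, 1))                        # shared trait
responses = latent + rng.normal(scale=0.7, size=(200, 7)) # 7-item scale
print(round(cronbach_alpha(responses), 2))                # ~0.9
```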

  12. ICP-MS/MS-Based Ionomics: A Validated Methodology to Investigate the Biological Variability of the Human Ionome.

    PubMed

    Konz, Tobias; Migliavacca, Eugenia; Dayon, Loïc; Bowman, Gene; Oikonomidi, Aikaterini; Popp, Julius; Rezzi, Serge

    2017-05-05

    We here describe the development, validation and application of a quantitative methodology for the simultaneous determination of 29 elements in human serum using state-of-the-art inductively coupled plasma triple quadrupole mass spectrometry (ICP-MS/MS). This new methodology offers high-throughput elemental profiling using simple dilution of a minimal quantity of serum. We report the outcomes of the validation procedure, including limits of detection/quantification, linearity of calibration curves, precision, recovery and measurement uncertainty. ICP-MS/MS-based ionomics was used to analyze the human serum of 120 older adults. Following a metabolomic data-mining approach, the generated ionome profiles were subjected to principal component analysis, revealing gender- and age-specific differences. The ionome of female individuals was marked by higher levels of calcium, phosphorus, copper and the copper-to-zinc ratio, while iron concentration was lower with respect to male subjects. Age was associated with lower concentrations of zinc. These findings were complemented with additional readouts to interpret micronutrient status, including ceruloplasmin, ferritin and inorganic phosphate. Our data support a gender-specific compartmentalization of the ionome that may reflect different bone remodelling in female individuals. Our ICP-MS/MS methodology enriches the panel of validated "omics" approaches to study molecular relationships between the exposome and the ionome in relation to nutrition and health.
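    The data-mining step, principal component analysis of the autoscaled subject-by-element concentration matrix, is easily sketched (values simulated, not the study's serum data):

```python
# PCA of an ionome matrix: autoscale, project, inspect group structure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.lognormal(mean=0.0, sigma=0.3, size=(120, 29))  # 120 x 29 elements
X[:60, 0] *= 1.1                     # e.g. slightly higher Ca in one group

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:3])                    # PC1/PC2 scores used to look for clusters
```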

  13. Identification of PPARgamma Partial Agonists of Natural Origin (I): Development of a Virtual Screening Procedure and In Vitro Validation

    PubMed Central

    Guasch, Laura; Sala, Esther; Castell-Auví, Anna; Cedó, Lidia; Liedl, Klaus R.; Wolber, Gerhard; Muehlbacher, Markus; Mulero, Miquel; Pinent, Montserrat; Ardévol, Anna; Valls, Cristina; Pujadas, Gerard; Garcia-Vallvé, Santiago

    2012-01-01

    Background Although there are successful examples of the discovery of new PPARγ agonists, it has recently been of great interest to identify new PPARγ partial agonists that do not present the adverse side effects caused by PPARγ full agonists. Consequently, the goal of this work was to design, apply and validate a virtual screening workflow to identify novel PPARγ partial agonists among natural products. Methodology/Principal Findings We have developed a virtual screening procedure based on structure-based pharmacophore construction, protein-ligand docking and electrostatic/shape similarity to discover novel scaffolds of PPARγ partial agonists. From an initial set of 89,165 natural products and natural product derivatives, 135 compounds were identified as potential PPARγ partial agonists with good ADME properties. Ten compounds that represent ten new chemical scaffolds for PPARγ partial agonists were selected for in vitro biological testing, but two of them were not assayed due to solubility problems. Five out of the remaining eight compounds were confirmed as PPARγ partial agonists: they bind to PPARγ, do not or only moderately stimulate the transactivation activity of PPARγ, do not induce adipogenesis of preadipocyte cells and stimulate the insulin-induced glucose uptake of adipocytes. Conclusions/Significance We have demonstrated that our virtual screening protocol was successful in identifying novel scaffolds for PPARγ partial agonists. PMID:23226391
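    The overall shape of such a workflow is a sequence of increasingly expensive filters. The sketch below shows the pipeline shape only; the pharmacophore, docking, similarity and ADME stages are hypothetical stand-in callables, since each is a separate specialized tool in the real workflow:

```python
# Schematic sequential-filter virtual screening pipeline (all stages mocked).
def run_virtual_screen(library, stages):
    hits = library
    for name, passes in stages:             # apply filters in sequence
        hits = [mol for mol in hits if passes(mol)]
        print(f"{name}: {len(hits)} compounds remain")
    return hits

stages = [
    ("pharmacophore match", lambda m: m["pharm_score"] > 0.7),
    ("docking pose/energy", lambda m: m["dock_kcal"] < -8.0),
    ("shape/electrostatics", lambda m: m["sim"] > 0.6),
    ("ADME filter", lambda m: m["adme_ok"]),
]
library = [{"pharm_score": 0.9, "dock_kcal": -9.1, "sim": 0.8, "adme_ok": True}]
final_hits = run_virtual_screen(library, stages)
```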

  14. Relations among Conceptual Knowledge, Procedural Knowledge, and Procedural Flexibility in Two Samples Differing in Prior Knowledge

    ERIC Educational Resources Information Center

    Schneider, Michael; Rittle-Johnson, Bethany; Star, Jon R.

    2011-01-01

    Competence in many domains rests on children developing conceptual and procedural knowledge, as well as procedural flexibility. However, research on the developmental relations between these different types of knowledge has yielded unclear results, in part because little attention has been paid to the validity of the measures or to the effects of…

  15. 40 CFR 89.408 - Post-test procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Post-test procedures. 89.408 Section... Procedures § 89.408 Post-test procedures. (a) A hangup check is recommended at the completion of the last...) Record the post-test data specified in § 89.405(f). (e) For a valid test, the zero and span checks...

  16. 40 CFR 89.408 - Post-test procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Post-test procedures. 89.408 Section... Procedures § 89.408 Post-test procedures. (a) A hangup check is recommended at the completion of the last...) Record the post-test data specified in § 89.405(f). (e) For a valid test, the zero and span checks...

  17. 40 CFR 89.408 - Post-test procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Post-test procedures. 89.408 Section... Procedures § 89.408 Post-test procedures. (a) A hangup check is recommended at the completion of the last...) Record the post-test data specified in § 89.405(f). (e) For a valid test, the zero and span checks...

  18. 40 CFR 89.408 - Post-test procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Post-test procedures. 89.408 Section... Procedures § 89.408 Post-test procedures. (a) A hangup check is recommended at the completion of the last...) Record the post-test data specified in § 89.405(f). (e) For a valid test, the zero and span checks...

  19. 40 CFR 89.408 - Post-test procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Post-test procedures. 89.408 Section 89... Procedures § 89.408 Post-test procedures. (a) A hangup check is recommended at the completion of the last...) Record the post-test data specified in § 89.405(f). (e) For a valid test, the zero and span checks...

  20. Haptic interface of web-based training system for interventional radiology procedures

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Lu, Yiping; Loe, KiaFock; Nowinski, Wieslaw L.

    2004-05-01

    Existing web-based medical training systems and surgical simulators can provide affordable and accessible medical training curricula, but they seldom offer the trainee realistic and affordable haptic feedback, and therefore cannot offer a suitable practicing environment. In this paper, a haptic solution for interventional radiology (IR) procedures is proposed. The system architecture of a web-based training system for IR procedures is briefly presented first. Then, the mechanical structure, the working principle and the application of a haptic device are discussed in detail. The haptic device works as an interface between the training environment and the trainees and is placed at the end-user side. With the system, the user can be trained on interventional radiology procedures (navigating catheters, inflating balloons, deploying coils and placing stents) over the web, and receive surgical haptic feedback in real time.

  1. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    PubMed

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to

  2. Convergent Validity Evidence regarding the Validity of the Chilean Standards-Based Teacher Evaluation System

    ERIC Educational Resources Information Center

    Santelices, Maria Veronica; Taut, Sandy

    2011-01-01

    This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…

  3. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths, for computational model implementation using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
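    As a cartoon of what a marching transport procedure does (one-dimensional, straight-ahead, mono-energetic, with a crude secondary-production term; the real HZETRN physics is far richer), consider:

```python
# Toy depth-marching of a particle flux through shielding.
import numpy as np

sigma = 0.02        # removal cross-section per unit areal density (cm^2/g)
production = 0.001  # secondary production per unit flux (same units)
dx = 0.5            # marching step in areal density (g/cm^2)

flux = 1.0          # boundary-condition flux at the shield surface
profile = [flux]
for _ in np.arange(dx, 100.0, dx):
    flux += dx * (production - sigma) * flux   # explicit marching update
    profile.append(flux)
```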

  4. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  5. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  6. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  7. Computer-Based and Paper-Based Measurement of Recognition Performance.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    To determine the relative reliabilities and validities of paper-based and computer-based measurement procedures, 83 male student pilots and radar intercept officers were administered computer and paper-based tests of aircraft recognition. The subject matter consisted of line drawings of front, side, and top silhouettes of aircraft. Reliabilities…

  8. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  9. Diagnostic flexible pharyngo-laryngoscopy: development of a procedure specific assessment tool using a Delphi methodology.

    PubMed

    Melchiors, Jacob; Henriksen, Mikael Johannes Vuokko; Dikkers, Frederik G; Gavilán, Javier; Noordzij, J Pieter; Fried, Marvin P; Novakovic, Daniel; Fagan, Johannes; Charabi, Birgitte W; Konge, Lars; von Buchwald, Christian

    2018-05-01

    Proper training and assessment of skill in flexible pharyngo-laryngoscopy are central in the education of otorhinolaryngologists. To facilitate an evidence-based approach to curriculum development in this field, a structured analysis of what constitutes flexible pharyngo-laryngoscopy is necessary. Our aim was to develop an assessment tool based on this analysis. We conducted an international Delphi study involving experts from twelve countries in five continents. Utilizing reiterative assessment, the panel defined the procedure and reached consensus (defined as 80% agreement) on the phrasing of an assessment tool. Fifty panelists completed the Delphi process. The median age of the panelists was 44 years (range 33-64 years). Median experience in otorhinolaryngology was 15 years (range 6-35 years). Twenty-five were specialized in laryngology, 16 were head and neck surgeons, and nine were general otorhinolaryngologists. An assessment tool was created consisting of twelve distinct items. Conclusion: The gathering of validity evidence for assessment of core procedural skills within otorhinolaryngology is central to the development of a competence-based education. The use of an international Delphi panel allows for the creation of an assessment tool which is widely applicable and valid. This work allows for an informed approach to technical skills training for flexible pharyngo-laryngoscopy and, as further validity evidence is gathered, allows for a valid assessment of clinical performance within this important skillset.

  10. Rapid Web-Based Platform for Assessment of Orthopedic Surgery Patient Care Milestones: A 2-Year Validation.

    PubMed

    Gundle, Kenneth R; Mickelson, Dayne T; Cherones, Arien; Black, Jason; Hanel, Doug P

    To determine the validity, feasibility, and responsiveness of a new web-based platform for rapid milestone-based evaluations of orthopedic surgery residents. Single academic medical center, including a trauma center and pediatric tertiary hospital. Forty residents (PG1-5) in an orthopedic residency program and their faculty evaluators. Residents and faculty were trained and supported in the use of a novel trainee-initiated web-based evaluation system. Residents were encouraged to use the system to track progress on patient care subcompetencies. Two years of prospectively collected data were reviewed from residents at an academic program. The primary outcome was Spearman's rank correlation between postgraduate year (PGY) and competency level achieved as a measure of validity. Secondary outcomes assessed feasibility, resident self-evaluation versus faculty evaluation, the distributions among subcompetencies, and responsiveness over time. Between February 2014 and February 2016, 856 orthopedic surgery patient care subcompetency evaluations were completed (1.2 evaluations per day). Residents promptly requested feedback after a procedure (median = 0 days, interquartile range: 0-2), and faculty responded within 2 days in 51% of cases (median = 2 days, interquartile range: 0-13). The primary outcome showed a correlation between PGY and competency level (r = 0.78, p < 0.001), with significant differences in competency among PGYs (p < 0.001 by Kruskal-Wallis rank sum test). Self-evaluations by residents substantially agreed with faculty-assigned competency level (weighted Cohen's κ = 0.72, p < 0.001). Resident classes beginning the study as PGY1, 2, and 3 separately demonstrated gains in competency over time (Spearman's rank correlation 0.39, 0.60, 0.59, respectively, each p < 0.001). There was significant variance in the number of evaluations submitted per subcompetency (median = 43, range: 6-113) and competency level assigned (p < 0.01). Rapid tracking of trainee competency with
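
    As a hedged sketch only, the two validity statistics named above (Spearman's rank correlation and a weighted Cohen's kappa) can be computed in Python as below; the ratings are invented stand-ins, not the study's data.

    ```python
    # Illustrative sketch: Spearman correlation of competency level with PGY,
    # and weighted kappa for resident-vs-faculty agreement. Data invented.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    pgy        = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
    competency = np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 5])  # faculty-assigned
    self_eval  = np.array([1, 2, 3, 3, 3, 4, 4, 4, 5, 5])  # resident self-rating

    rho, p = spearmanr(pgy, competency)
    kappa = cohen_kappa_score(self_eval, competency, weights="linear")
    print(f"Spearman rho={rho:.2f} (p={p:.3f}), weighted kappa={kappa:.2f}")
    ```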

  11. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these

  12. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.
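
    Both records above report calibration linearity over stated concentration ranges. As an illustrative sketch under assumed absorbance values (invented, loosely matching the 24-144 μg/mL p-CA range), the usual linearity check is an ordinary least-squares fit of absorbance against concentration:

    ```python
    # Minimal linearity-check sketch for one charge-transfer product.
    # Absorbance values are invented for illustration.
    import numpy as np

    conc = np.array([24, 48, 72, 96, 120, 144], dtype=float)  # ug/mL
    absb = np.array([0.12, 0.24, 0.37, 0.49, 0.60, 0.73])     # at 524 nm

    slope, intercept = np.polyfit(conc, absb, 1)
    r = np.corrcoef(conc, absb)[0, 1]
    print(f"A = {slope:.4f}*C + {intercept:.4f}, r = {r:.4f}")
    ```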

  14. A procedure to estimate proximate analysis of mixed organic wastes.

    PubMed

    Zaher, U; Buffiere, P; Steyer, J P; Chen, S

    2009-04-01

    In waste materials, proximate analysis measuring the total concentration of carbohydrate, protein, and lipid contents from solid wastes is challenging, as a result of the heterogeneous and solid nature of wastes. This paper presents a new procedure that was developed to estimate such complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on mass balance of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balance of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial to study solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrates, proteins, lipids, and inerts concentrations were highly correlated to the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
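
    The elemental mass-balance idea can be sketched as a small nonnegative least-squares problem. The composition matrix below uses rough, assumed g-element-per-g-macromolecule values and a fabricated measurement vector purely for illustration; the paper's actual balance also includes COD and charge intensity.

    ```python
    # Hedged sketch: estimate carbohydrate/protein/lipid fractions from
    # measured elemental totals via nonnegative least squares. All numbers
    # are assumptions for illustration, not the paper's values.
    import numpy as np
    from scipy.optimize import nnls

    #            carb   protein  lipid
    A = np.array([[0.44,  0.53,   0.77],   # C content (g/g)
                  [0.06,  0.07,   0.12],   # H content (g/g)
                  [0.00,  0.16,   0.00],   # N content (g/g)
                  [0.49,  0.23,   0.11]])  # O content (g/g)

    b = np.array([0.45, 0.06, 0.04, 0.34])  # measured g element / g waste

    x, resid = nnls(A, b)
    for name, frac in zip(["carbohydrate", "protein", "lipid"], x):
        print(f"{name}: {frac:.2f} g/g (residual norm {resid:.3f})")
    ```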

  15. Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.

    PubMed

    Yago, Martín; Alcover, Silvia

    2016-07-01

    According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of their probability of rejecting an analytical run that contains critical size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)] has been proposed as an alternative QC performance measure because it is more related to the current introduction of risk management concepts for QC planning in the clinical laboratory. We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) with the capability of the analytical process that allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected by their high PEDC value are also characterized by a low value for Max E(NUF). The PEDC value can be used for estimating the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management. © 2016 American Association for Clinical Chemistry.
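
    A hedged sketch of how a QC rule's error-detection probability can be computed; this is a generic 1-3s rule calculation, not the paper's statistical model.

    ```python
    # Probability that a 1-3s QC rule (reject if any control exceeds +/-3 SD)
    # detects a systematic shift, by shift size and number of controls per run.
    from scipy.stats import norm

    def p_detect_1_3s(shift_sd: float, n_controls: int = 1) -> float:
        """P(at least one control outside +/-3 SD given a mean shift)."""
        p_single = norm.sf(3 - shift_sd) + norm.cdf(-3 - shift_sd)
        return 1 - (1 - p_single) ** n_controls

    for shift in (0.0, 2.0, 3.0, 4.0):
        print(f"shift={shift:.1f} SD -> Ped={p_detect_1_3s(shift, 2):.3f}")
    ```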

  16. Cooperative Learning: Improving University Instruction by Basing Practice on Validated Theory

    ERIC Educational Resources Information Center

    Johnson, David W.; Johnson, Roger T.; Smith, Karl A.

    2014-01-01

    Cooperative learning is an example of how theory validated by research may be applied to instructional practice. The major theoretical base for cooperative learning is social interdependence theory. It provides clear definitions of cooperative, competitive, and individualistic learning. Hundreds of research studies have validated its basic…

  17. Fault Injection Validation of a Safety-Critical TMR System

    NASA Astrophysics Data System (ADS)

    Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata

    2016-08-01

    Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly in the space domain, to complement the traditional procedures to validate a SIL (Safety Integrity Level) 4 system for railway signalling, implementing a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour have also been observed, helping to understand worst-case scenarios.
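
    The TMR principle can be illustrated with a toy majority voter and software bit-flip injection; the sketch below is purely illustrative, whereas the paper's campaign injects faults into real hardware via JTAG.

    ```python
    # Toy TMR sketch: three redundant channels and a bitwise 2-out-of-3
    # majority voter, with simple bit-flip "fault injection".

    def voter(a: int, b: int, c: int) -> int:
        """Bitwise 2-out-of-3 majority vote."""
        return (a & b) | (a & c) | (b & c)

    golden = 0b10101100  # fault-free channel output

    # A fault confined to one channel is always outvoted (masked).
    single = voter(golden, golden ^ 0b0100, golden)
    print(f"single fault masked: {single == golden}")

    # A common-mode fault hitting the same bit in two channels defeats TMR;
    # this is the kind of worst-case scenario fault injection aims to expose.
    double = voter(golden, golden ^ 0b0100, golden ^ 0b0100)
    print(f"double fault masked: {double == golden}")
    ```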

  18. Validation of Mission Plans Through Simulation

    NASA Astrophysics Data System (ADS)

    St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.

    2002-01-01

    The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or more functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard for system or human errors that can only be detected with high fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM

  19. Comparing preference assessments: selection- versus duration-based preference assessment procedures.

    PubMed

    Kodak, Tiffany; Fisher, Wayne W; Kelley, Michael E; Kisamore, April

    2009-01-01

    In the current investigation, the results of a selection- and a duration-based preference assessment procedure were compared. A Multiple Stimulus With Replacement (MSW) preference assessment [Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15, 439-455] and a variation of a Free-Operant (FO) preference assessment procedure [Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31, 605-620] were conducted with four participants. A reinforcer assessment was conducted to determine which preference assessment procedure identified the item that produced the highest rates of responding. The items identified as most highly preferred were different across preference assessment procedures for all participants. Results of the reinforcer assessment showed that the MSW identified the item that functioned as the most effective reinforcer for two participants.

  20. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties coming from model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) estimates with quantified certainty is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of the regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate to validate RS_ET in diverse resolutions at different time-scales. An independent RS_ET validation using this method was presented over the Hai River Basin, China in 2002-2009 as a case study. Validation at the basin scale showed good agreements between the 1 km annual RS_ET and the validation data such as the water balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also had good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. Although some

  1. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

    Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of reliability and validity studies respectively. Results A total of 898 records were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored the concurrent validity, inter- and intra-rater reliabilities, while two studies examined only the concurrent validity. Reviewed studies were moderate to good in methodological quality. The physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.

  2. Analytical Method Development and Validation for the Quantification of Acetone and Isopropyl Alcohol in the Tartaric Acid Base Pellets of Dipyridamole Modified Release Capsules by Using Headspace Gas Chromatographic Technique

    PubMed Central

    2018-01-01

    A simple, sensitive, accurate, robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol were used in the manufacturing process of the tartaric acid-based pellets of dipyridamole modified release capsules by considering the solubility of the dipyridamole and excipients in the different manufacturing stages. The method was developed and optimized by using fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with the flame ionization detector. The method validation was carried out in accordance with the Q2 guideline for validation of analytical procedures of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All the validation characteristics met the acceptance criteria. Hence, the developed and validated method can be applied for the intended routine analysis. PMID:29686931

  3. Analytical Method Development and Validation for the Quantification of Acetone and Isopropyl Alcohol in the Tartaric Acid Base Pellets of Dipyridamole Modified Release Capsules by Using Headspace Gas Chromatographic Technique.

    PubMed

    Valavala, Sriram; Seelam, Nareshvarma; Tondepu, Subbaiah; Jagarlapudi, V Shanmukha Kumar; Sundarmurthy, Vivekanandan

    2018-01-01

    A simple, sensitive, accurate, robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol were used in the manufacturing process of the tartaric acid-based pellets of dipyridamole modified release capsules by considering the solubility of the dipyridamole and excipients in the different manufacturing stages. The method was developed and optimized by using fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with the flame ionization detector. The method validation was carried out in accordance with the Q2 guideline for validation of analytical procedures of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All the validation characteristics met the acceptance criteria. Hence, the developed and validated method can be applied for the intended routine analysis.
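
    Two of the ICH Q2 validation characteristics mentioned above, linearity and detection/quantification limits, are commonly computed as in the hedged sketch below (peak areas invented; LOD = 3.3σ/S and LOQ = 10σ/S from the residual standard deviation of the regression):

    ```python
    # Sketch of a linearity fit plus LOD/LOQ from the residual SD.
    # Peak-area values are invented for illustration.
    import numpy as np

    conc = np.array([10, 20, 40, 60, 80, 100], dtype=float)   # ppm
    area = np.array([105, 198, 410, 595, 805, 1002], dtype=float)

    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # residual SD of the regression (n - 2)

    print(f"LOD ~ {3.3 * sigma / slope:.2f} ppm, "
          f"LOQ ~ {10 * sigma / slope:.2f} ppm")
    ```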

  4. An Argument-Based Approach to the Validation of UHTRUST: Can We Measure How Recent Graduates Can Be Trusted with Unfamiliar Tasks?

    ERIC Educational Resources Information Center

    Wijnen-Meijer, M.; Van der Schaaf, M.; Booij, E.; Harendza, S.; Boscardin, C.; Wijngaarden, J. Van; Ten Cate, Th. J.

    2013-01-01

    There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks…

  5. Validation study and routine control monitoring of moist heat sterilization procedures.

    PubMed

    Shintani, Hideharu

    2012-06-01

    The proposed approach to validation of steam sterilization in autoclaves follows the basic life cycle concepts applicable to all validation programs. Understand the function of the sterilization process, develop and understand the cycles to carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product and components and parts that come in direct contact with sterilized product is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between the definition and sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ, and PQ is described in concrete detail.

  6. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA, as reflected in the Pearson correlation coefficient, established criterion validity. Consistent student performance since 2012, as measured by the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
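
    The KR-20 coefficient cited above has a simple closed form, KR-20 = k/(k-1) x (1 - Σ p_i q_i / σ²_total). A minimal sketch on an invented matrix of dichotomous item scores:

    ```python
    # KR-20 reliability for a small invented score matrix
    # (rows = students, columns = items; 1 = correct, 0 = incorrect).
    import numpy as np

    scores = np.array([[1, 1, 0, 1, 1],
                       [1, 0, 0, 1, 0],
                       [1, 1, 1, 1, 1],
                       [0, 1, 0, 0, 1],
                       [1, 1, 1, 0, 1]])

    k = scores.shape[1]
    p = scores.mean(axis=0)                      # item difficulties
    q = 1 - p
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
    print(f"KR-20 = {kr20:.3f}")
    ```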

  7. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  8. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, bio-chemical, pharmaceutical and genetic methods have been developed and ad hoc validation statistics are available and routinely used, for in-house and inter-laboratory testing, and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper, and applied to validation data for different genetically modified elements. Remarks are made on the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.
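
    The aggregation idea can be sketched as membership functions mapping each validation statistic onto [0, 1], combined by expert weights. All thresholds, weights, and statistic values below are assumptions for illustration, not the DualChip study's values.

    ```python
    # Rough fuzzy-aggregation sketch: each statistic gets a [0, 1] membership
    # via linear limits, then a weighted average yields one overall indicator.
    import numpy as np

    def membership(x: float, bad: float, good: float) -> float:
        """1 when x >= good, 0 when x <= bad, linear in between."""
        return float(np.clip((x - bad) / (good - bad), 0.0, 1.0))

    stats_  = {"accuracy": 0.92, "specificity": 0.85, "repeatability": 0.78}
    limits  = {"accuracy": (0.7, 0.95), "specificity": (0.6, 0.9),
               "repeatability": (0.5, 0.9)}
    weights = {"accuracy": 0.4, "specificity": 0.3, "repeatability": 0.3}

    indicator = sum(weights[s] * membership(v, *limits[s])
                    for s, v in stats_.items())
    print(f"overall method-performance indicator: {indicator:.2f}")
    ```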

  9. Clinical Validity of the ADI-R in a US-Based Latino Population

    ERIC Educational Resources Information Center

    Vanegas, Sandra B.; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-01-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children…

  10. 21 CFR 606.100 - Standard operating procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... or hepatitis C virus (HCV) infection when tested under § 610.40 of this chapter, or when a blood... viral clearance procedures; (iii) To notify consignees to quarantine in-date blood and blood components... are manufactured using validated viral clearance procedures; (iv) To determine the suitability for...

  11. Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Richard J.

    2003-01-01

    Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
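
    Wavelet-based denoising of the kind described can be sketched with PyWavelets (used here as an assumed stand-in for the paper's best-basis and wavelet tools): decompose, soft-threshold the detail coefficients, reconstruct.

    ```python
    # Wavelet denoising sketch on a toy decaying-oscillation signal,
    # standing in for an aeroelastic response. Data synthetic.
    import numpy as np
    import pywt

    rng = np.random.default_rng(42)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)
    noisy = clean + 0.2 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
    denoised = pywt.waverec(
        [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                       for c in coeffs[1:]], "db4")

    print(f"error RMS before/after: {np.std(noisy - clean):.3f} / "
          f"{np.std(denoised - clean):.3f}")
    ```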

  12. Beware of external validation! - A Comparative Study of Several Validation Techniques used in QSAR Modelling.

    PubMed

    Majumdar, Subhabrata; Basak, Subhash C

    2018-04-26

    Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has high value of p but small n (i.e. n < p). Motivated by the evidence of inadequacies of external validation in estimating the true predictive capability of a statistical model in recent literature, this paper performs an extensive and comparative study of this method with several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built using the LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data, hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario. Results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
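
    The comparison can be re-created in outline with scikit-learn on synthetic n < p data; the sketch below contrasts LOO cross-validation with a single external split for a LASSO model, and its numbers are illustrative, not the paper's results.

    ```python
    # LOO vs. a single external split for a LASSO model on n < p data.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics import r2_score
    from sklearn.model_selection import (LeaveOneOut, cross_val_score,
                                         train_test_split)

    rng = np.random.default_rng(7)
    n, p = 60, 300                       # small n, large p
    X = rng.standard_normal((n, p))
    y = X[:, :5] @ rng.standard_normal(5) + 0.5 * rng.standard_normal(n)

    model = Lasso(alpha=0.1)

    # LOO: aggregate squared error over n single-sample folds.
    loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()

    # External validation: one random 75/25 split (unstable for small n).
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    ext_r2 = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))

    print(f"LOO MSE={loo_mse:.3f}, external-split R^2={ext_r2:.3f}")
    ```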

  13. Smartphone based automatic organ validation in ultrasound video.

    PubMed

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by these untrained persons do not contain the proper information that is required by a physician. As compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we are proposing an organ validation algorithm to evaluate the ultrasound video based on the content present. This will guide the semi-skilled person to acquire the representative data from the patient. Advancements in smartphone technology allow us to perform demanding medical image processing on a smartphone. In this paper we have developed an application (APP) for a smartphone which can automatically detect the valid frames (with clear organ visibility) in an ultrasound video, ignore the invalid frames (with no organ visibility), and produce a compressed video. This is done by extracting the GIST features from the Region of Interest (ROI) of the frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
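
    The classification step can be sketched with scikit-learn's SVC using a degree-2 polynomial (quadratic) kernel; random vectors stand in for GIST descriptors, since no GIST extractor is assumed here.

    ```python
    # Quadratic-kernel SVM sketch for valid/invalid frame classification.
    # Random vectors stand in for GIST descriptors; data is synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    valid   = rng.normal(1.0, 0.5, size=(100, 512))  # stand-in GIST features
    invalid = rng.normal(0.0, 0.5, size=(100, 512))
    X = np.vstack([valid, invalid])
    y = np.array([1] * 100 + [0] * 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="poly", degree=2).fit(X_tr, y_tr)
    print(f"frame-classification accuracy: {clf.score(X_te, y_te):.2%}")
    ```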

  14. Cross-cultural validity of the ABILOCO questionnaire for individuals with stroke, based on Rasch analysis.

    PubMed

    Avelino, Patrick Roberto; Magalhães, Lívia Castro; Faria-Fortini, Iza; Basílio, Marluce Lopes; Menezes, Kênia Kiefer Parreiras; Teixeira-Salmela, Luci Fuscaldi

    2018-06-01

    The purpose of this study was to evaluate the cross-cultural validity of the Brazilian version of the ABILOCO questionnaire for stroke subjects. Cross-cultural adaptation of the original English version of the ABILOCO to the Brazilian-Portuguese language followed standardized procedures. The adapted version was administered to 136 stroke subjects and its measurement properties were assessed using Rasch analysis. Cross-cultural validity was based on cultural invariance analyses. Goodness-of-fit analysis revealed one misfitting item. The principal component analysis of the residuals showed that the first dimension explained 45% of the variance in locomotion ability; however, the eigenvalue was 1.92. The ABILOCO-Brazil divided the sample into two levels of ability and the items into about seven levels of difficulty. The item-person map showed some ceiling effect. Cultural invariance analyses revealed that although there were differences in the item calibrations between the ABILOCO-original and ABILOCO-Brazil, they did not impact the measures of locomotion ability. The ABILOCO-Brazil demonstrated satisfactory measurement properties to be used within both clinical and research contexts in Brazil, as well as cross-cultural validity to be used in international/multicentric studies. However, the presence of a ceiling effect suggests that it may not be appropriate for the assessment of individuals with high levels of locomotion ability. Implications for rehabilitation: Self-report measures of locomotion ability are clinically important, since they describe the abilities of individuals within real-life contexts. The ABILOCO questionnaire, specific to stroke survivors, demonstrated satisfactory measurement properties, but may not be the most appropriate to assess individuals with high levels of locomotion ability. The results of the cross-cultural validity analyses showed that the ABILOCO-original and the ABILOCO-Brazil calibrations may be used interchangeably.
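
    For orientation, the dichotomous Rasch model underlying such analyses gives the probability of success on item i as P = exp(θ - b_i) / (1 + exp(θ - b_i)). A minimal sketch with invented difficulties (real calibrations use dedicated Rasch software):

    ```python
    # Dichotomous Rasch model sketch: endorsement probability as a logistic
    # function of person ability theta minus item difficulty b_i.
    import numpy as np

    def rasch_p(theta: float, b: np.ndarray) -> np.ndarray:
        """P(success on each item) under the Rasch model."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    item_difficulty = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # logits, invented
    for theta in (-1.0, 0.0, 1.5):
        print(f"theta={theta:+.1f}: {np.round(rasch_p(theta, item_difficulty), 2)}")
    ```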

  15. Validation of hot-poured crack sealant performance-based guidelines.

    DOT National Transportation Integrated Search

    2017-06-01

    This report summarizes a comprehensive research effort to validate thresholds for performance-based guidelines and grading system for hot-poured asphalt crack sealants. A series of performance tests were established in earlier research and includ...

  16. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  17. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  18. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  19. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  20. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time-variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.
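
    The Neumann trend test mentioned above can be sketched via the von Neumann ratio, the mean squared successive difference divided by the sample variance, which is near 2 for a trend-free series and drops toward 0 under drift. Series values below are invented.

    ```python
    # Von Neumann ratio sketch for trend detection in a calibration signal.
    import numpy as np

    def neumann_ratio(x: np.ndarray) -> float:
        return np.mean(np.diff(x) ** 2) / np.var(x, ddof=1)

    rng = np.random.default_rng(11)
    stable = rng.normal(100, 2, 50)             # stable signal (invented)
    drifting = stable + np.linspace(0, 10, 50)  # same signal with a drift

    print(f"stable:   {neumann_ratio(stable):.2f}")   # about 2: no trend
    print(f"drifting: {neumann_ratio(drifting):.2f}") # well below 2: trend
    ```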

  1. Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator

    PubMed Central

    Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu

    2011-01-01

    Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects performed tasks on the virtual reality laparoscopic adjustable gastric band simulator and were automatically scored on various tasks. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. The mean score for utility of the simulator, as addressed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding

  2. Development of improved pavement rehabilitation procedures based on FWD backcalculation.

    DOT National Transportation Integrated Search

    2015-01-01

    Hot Mix Asphalt (HMA) overlays are among the most effective maintenance and rehabilitation alternatives in improving the structural as well as functional performance of flexible pavements. HMA overlay design procedures can be based on: (1) engine...

  3. 40 CFR 90.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Post-test analyzer procedures. 90.411... Test Procedures § 90.411 Post-test analyzer procedures. (a) Perform a HC hang-up check within 60...), the test is void. (d) Read and record the post-test data specified in § 90.405(e). (e) For a valid...

  4. 40 CFR 90.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Post-test analyzer procedures. 90.411... Test Procedures § 90.411 Post-test analyzer procedures. (a) Perform a HC hang-up check within 60...), the test is void. (d) Read and record the post-test data specified in § 90.405(e). (e) For a valid...

  5. 40 CFR 90.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Post-test analyzer procedures. 90.411... Test Procedures § 90.411 Post-test analyzer procedures. (a) Perform a HC hang-up check within 60...), the test is void. (d) Read and record the post-test data specified in § 90.405(e). (e) For a valid...

  6. 40 CFR 90.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Post-test analyzer procedures. 90.411... Test Procedures § 90.411 Post-test analyzer procedures. (a) Perform a HC hang-up check within 60...), the test is void. (d) Read and record the post-test data specified in § 90.405(e). (e) For a valid...

  7. 40 CFR 90.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Post-test analyzer procedures. 90.411... Test Procedures § 90.411 Post-test analyzer procedures. (a) Perform a HC hang-up check within 60...), the test is void. (d) Read and record the post-test data specified in § 90.405(e). (e) For a valid...

  8. Development and validation of an automated, microscopy-based method for enumeration of groups of intestinal bacteria.

    PubMed

    Jansen, G J; Wildeboer-Veloo, A C; Tonk, R H; Franks, A H; Welling, G W

    1999-09-01

    An automated microscopy-based method using fluorescently labelled 16S rRNA-targeted oligonucleotide probes directed against the predominant groups of intestinal bacteria was developed and validated. The method makes use of the Leica 600HR image analysis system, a Kodak MegaPlus camera model 1.4 and a servo-controlled Leica DM/RXA ultra-violet microscope. Software for automated image acquisition and analysis was developed and tested. The performance of the method was validated using a set of four fluorescent oligonucleotide probes: a universal probe for the detection of all bacterial species, one probe specific for Bifidobacterium spp., a digenus probe specific for Bacteroides spp. and Prevotella spp., and a trigenus probe specific for Ruminococcus spp., Clostridium spp. and Eubacterium spp. A nucleic acid stain, 4',6-diamidino-2-phenylindole (DAPI), was also included in the validation. In order to quantify the assay error, one faecal sample was measured 20 times with each separate probe. Thereafter, faecal samples from 20 different volunteers were measured following the same procedure, in order to quantify the error due to individual differences in gut flora composition. It was concluded that the combination of automated microscopy and fluorescent whole-cell hybridisation makes it possible to distinguish gut flora composition between volunteers at a significant level. With this method it is possible to process 48 faecal samples overnight, with coefficients of variation ranging from 0.07 to 0.30.
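
    A minimal sketch of the assay-error quantification described above: the coefficient of variation from 20 repeated measurements of one sample with one probe. The values are simulated placeholders, not study data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # 20 repeated counts of the same faecal sample with one probe
    # (placeholder values, e.g. bacterial cells per gram on a log10 scale).
    replicates = rng.normal(loc=9.8, scale=0.9, size=20)

    cv = replicates.std(ddof=1) / replicates.mean()
    print(f"coefficient of variation: {cv:.2f}")   # study reports CVs of 0.07-0.30
    ```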

  9. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as application of risk-based decision making, based in sound science to provide a foundation for the validation of column-based chromatography processes and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation, and/or minor adjustment to these parameters at manufacturing scale during production of conformance batches and maintenance of the validated state throughout the product's life cycle.

  10. SeaWiFS Technical Report Series. Volume 38; SeaWiFS Calibration and Validation Quality Control Procedures

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Darzi, Michael; Barnes, Robert A.; Eplee, Robert E.; Firestone, James K.; Patt, Frederick S.; Robinson, Wayne D.; Schieber, Brian D.

    1996-01-01

    This document provides five brief reports that address several quality control procedures under the auspices of the Calibration and Validation Element (CVE) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 describes analyses of the 32 sensor engineering telemetry streams. Anomalies in any of the values may impact sensor performance in direct or indirect ways. The analyses are primarily examinations of parameter time series combined with statistical methods such as auto- and cross-correlation functions. Chapter 2 describes how the various onboard (solar and lunar) and vicarious (in situ) calibration data will be analyzed to quantify sensor degradation, if present. The analyses also include methods for detecting the influence of charged particles on sensor performance such as might be expected in the South Atlantic Anomaly (SAA). Chapter 3 discusses the quality control of the ancillary environmental data that are routinely received from other agencies or projects which are used in the atmospheric correction algorithm (total ozone, surface wind velocity, and surface pressure; surface relative humidity is also obtained, but is not used in the initial operational algorithm). Chapter 4 explains the procedures for screening level-1, level-2, and level-3 products. These quality control operations incorporate both automated and interactive procedures which check for file format errors (all levels), navigation offsets (level-1), mask and flag performance (level-2), and product anomalies (all levels). Finally, Chapter 5 discusses the match-up data set development for comparing SeaWiFS level-2 derived products with in situ observations, as well as the subsequent outlier analyses that will be used for evaluating error sources.

  11. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. The setting was a university center for memory disorders. Participants were 52 patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, the positive and negative predictive values were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group because of unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease, who are typically used in validation studies, may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
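
    A worked check of the predictive values via Bayes' rule, using the reported sensitivity, specificity and assumed 10% prevalence; note the NPV computed this way is about 0.98, close to but not identical with the 0.96 reported above, presumably due to rounding or method differences in the source:

    ```python
    def predictive_values(sens, spec, prev):
        """Return (PPV, NPV) for a binary screening test via Bayes' rule."""
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    # Reported operating characteristics and the assumed 10% prevalence.
    ppv, npv = predictive_values(sens=0.83, spec=0.96, prev=0.10)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # PPV = 0.70; NPV ~ 0.98 here
    ```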

  12. Translating the simulation of procedural drilling techniques for interactive neurosurgical training.

    PubMed

    Stredney, Don; Rezai, Ali R; Prevedello, Daniel M; Elder, J Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J

    2013-10-01

    Through previous efforts we have developed a fully virtual environment to provide procedural training in otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. These volumetric data help drive an interactive, multisensory (visual stereo, aural stereo, and tactile) simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the Congress of Neurological Surgeons simulation initiative. Our objective is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. We discuss issues of biofidelity and our methods to provide objective, quantitative and automated assessment for the residents. We conclude with a discussion of our experiences, reporting preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principle and defined the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum.

  13. Learn, see, practice, prove, do, maintain: an evidence-based pedagogical framework for procedural skill training in medicine.

    PubMed

    Sawyer, Taylor; White, Marjorie; Zaveri, Pavan; Chang, Todd; Ades, Anne; French, Heather; Anderson, JoDee; Auerbach, Marc; Johnston, Lindsay; Kessler, David

    2015-08-01

    Acquisition of competency in procedural skills is a fundamental goal of medical training. In this Perspective, the authors propose an evidence-based pedagogical framework for procedural skill training. The framework was developed based on a review of the literature using a critical synthesis approach and builds on earlier models of procedural skill training in medicine. The authors begin by describing the fundamentals of procedural skill development. Then, a six-step pedagogical framework for procedural skills training is presented: Learn, See, Practice, Prove, Do, and Maintain. In this framework, procedural skill training begins with the learner acquiring requisite cognitive knowledge through didactic education (Learn) and observation of the procedure (See). The learner then progresses to the stage of psychomotor skill acquisition and is allowed to deliberately practice the procedure on a simulator (Practice). Simulation-based mastery learning is employed to allow the trainee to prove competency prior to performing the procedure on a patient (Prove). Once competency is demonstrated on a simulator, the trainee is allowed to perform the procedure on patients with direct supervision, until he or she can be entrusted to perform the procedure independently (Do). Maintenance of the skill is ensured through continued clinical practice, supplemented by simulation-based training as needed (Maintain). Evidence in support of each component of the framework is presented. Implementation of the proposed framework presents a paradigm shift in procedural skill training. However, the authors believe that adoption of the framework will improve procedural skill training and patient safety.

  14. Application of Recursive Partitioning to Derive and Validate a Claims-Based Algorithm for Identifying Keratinocyte Carcinoma (Nonmelanoma Skin Cancer).

    PubMed

    Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A

    2016-10-01

    Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of
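
    A hedged sketch of deriving a claims-based case-finding algorithm with a classification tree, in the spirit of the study above. The file name, feature columns (claim-code indicators), and label column are hypothetical placeholders, and the tree settings are illustrative only:

    ```python
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import confusion_matrix

    # Hypothetical extract: one row per pathology episode, binary indicators
    # built from diagnosis and procedure claim codes, plus the criterion label.
    df = pd.read_csv("claims_episodes.csv")
    X = df[["dx_code_173", "excision_billed", "derm_visit"]]   # hypothetical features
    y = df["path_confirmed_kc"]                                # histopathology standard

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=200).fit(X_tr, y_tr)

    # Target: maximal specificity while keeping sensitivity above 80%.
    tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
    print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
    ```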

  15. Advances in the Validation of Satellite-Based Maps of Volcanic Sulfur Dioxide Plumes

    NASA Astrophysics Data System (ADS)

    Realmuto, V. J.; Berk, A.; Acharya, P. K.; Kennett, R.

    2013-12-01

    The monitoring of volcanic gas emissions with gas cameras, spectrometer arrays, tethersondes, and UAVs presents new opportunities for the validation of satellite-based retrievals of gas concentrations. Gas cameras and spectrometer arrays provide instantaneous observations of the gas burden, or concentration along an optical path, over broad sections of a plume, similar to the observations acquired by nadir-viewing satellites. Tethersondes and UAVs provide us with direct measurements of the vertical profiles of gas concentrations within plumes. This presentation will focus on our current efforts to validate ASTER-based maps of sulfur dioxide plumes at Turrialba and Kilauea Volcanoes (located in Costa Rica and Hawaii, respectively). These volcanoes, which are the subjects of comprehensive monitoring programs, are challenging targets for thermal infrared (TIR) remote sensing due to the warm and humid atmospheric conditions. The high spatial resolution of ASTER in the TIR (90 meters) allows us to map the plumes back to their source vents, but also requires us to pay close attention to the temperature and emissivity of the surfaces beneath the plumes. Our knowledge of the surface and atmospheric conditions is never perfect, and we employ interactive mapping techniques that allow us to evaluate the impact of these uncertainties on our estimates of plume composition. To accomplish this interactive mapping we have developed the Plume Tracker tool kit, which integrates retrieval procedures, visualization tools, and a customized version of the MODTRAN radiative transfer (RT) model under a single graphics user interface (GUI). We are in the process of porting the RT calculations to graphics processing units (GPUs) with the goal of achieving a 100-fold increase in the speed of computation relative to conventional CPU-based processing. We will report on our progress with this evolution of Plume Tracker. Portions of this research were conducted at the Jet Propulsion Laboratory.

  16. Measuring Practitioner Attitudes toward Evidence-Based Treatments: A Validation Study

    ERIC Educational Resources Information Center

    Ashcraft, Rindee G. P.; Foster, Sharon L.; Lowery, Amy E.; Henggeler, Scott W.; Chapman, Jason E.; Rowland, Melisa D.

    2011-01-01

    A better understanding of clinicians' attitudes toward evidence-based treatments (EBT) will presumably enhance the transfer of EBTs for substance-abusing adolescents from research to clinical application. The reliability and validity of two measures of therapist attitudes toward EBT were examined: the Evidence-Based Practice Attitude Scale…

  17. Validation of Immunohistochemical Assays for Integral Biomarkers in the NCI-MATCH EAY131 Clinical Trial.

    PubMed

    Khoury, Joseph D; Wang, Wei-Lien; Prieto, Victor G; Medeiros, L Jeffrey; Kalhor, Neda; Hameed, Meera; Broaddus, Russell; Hamilton, Stanley R

    2018-02-01

    Biomarkers that guide therapy selection are gaining unprecedented importance as targeted therapy options increase in scope and complexity. In conjunction with high-throughput molecular techniques, therapy-guiding biomarker assays based upon immunohistochemistry (IHC) have a critical role in cancer care in that they inform about the expression status of a protein target. Here, we describe the validation procedures for four clinical IHC biomarker assays (PTEN, RB, MLH1, and MSH2) for use as integral biomarkers in the nationwide NCI-Molecular Analysis for Therapy Choice (NCI-MATCH) EAY131 clinical trial. Validation procedures were developed through an iterative process based on collective experience and adaptation of broad guidelines from the FDA. The steps included primary antibody selection; assay optimization; development of assay interpretation criteria incorporating biological considerations and expected staining patterns, including indeterminate results; orthogonal validation; and tissue validation. Following assay lockdown, patient samples and cell lines were used for analytic and clinical validation. The assays were then approved as laboratory-developed tests and used for clinical trial decisions for treatment selection. Calculations of sensitivity and specificity were undertaken using various definitions of gold-standard references, and external validation was required for the PTEN IHC assay. In conclusion, validation of IHC biomarker assays critical for guiding therapy in clinical trials is feasible using comprehensive preanalytic, analytic, and postanalytic steps. Implementation of standardized guidelines provides a useful framework for validating IHC biomarker assays that allows for reproducibility across institutions for routine clinical use. Clin Cancer Res; 24(3); 521-31. ©2017 American Association for Cancer Research.

  18. 29 CFR 1607.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...

  19. 29 CFR 1607.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...

  20. 29 CFR 1607.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...

  1. 29 CFR 1607.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...

  2. 29 CFR 1607.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... circumstances in which a user cannot or need not utilize the validation techniques contemplated by these... which has an adverse impact, the validation techniques contemplated by these guidelines usually should be followed if technically feasible. Where the user cannot or need not follow the validation...

  3. Verification and Validation of KBS with Neural Network Components

    NASA Technical Reports Server (NTRS)

    Wen, Wu; Callahan, John

    1996-01-01

    Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN-based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS that depends on a rule database and a production engine, an ANN-based system mimics the decisions of an expert without explicitly formulating if-then rules. In fact, ANNs demonstrate their superiority when such if-then rules are hard for a human expert to generate. Verification of a traditional knowledge-based system is based on proof of consistency and completeness of the rule knowledge base and correctness of the production engine. These techniques, however, cannot be directly applied to ANN-based components. In this position paper, we propose a verification and validation procedure for KBS with ANN-based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specifications using an ANN rule-extraction algorithm.
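
    The abstract does not specify its rule-extraction algorithm; as a hedged, generic illustration of the idea, the sketch below trains a surrogate decision tree on a small neural network's predictions and prints the approximate if-then rules, which could then be checked against a system specification. All data and names are placeholders:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)        # hidden "expert" rule

    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)

    # Surrogate tree mimics the ANN, exposing approximate if-then rules.
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, ann.predict(X))
    print(export_text(surrogate, feature_names=["x0", "x1", "x2"]))
    ```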

  4. Reliability and Validity of 10 Different Standard Setting Procedures.

    ERIC Educational Resources Information Center

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  5. Development and validation of an ICD-10-based disability predictive index for patients admitted to hospitals with trauma.

    PubMed

    Wada, Tomoki; Yasunaga, Hideo; Yamana, Hayato; Matsui, Hiroki; Fushimi, Kiyohide; Morimura, Naoto

    2018-03-01

    There was no established disability prediction measure for patients with trauma that could be used in administrative claims databases. The aim of the present study was to develop and validate a diagnosis-based disability predictive index for severe physical disability at discharge using International Classification of Diseases, 10th revision (ICD-10) coding. This retrospective observational study used the Diagnosis Procedure Combination database in Japan. Patients who were admitted to hospitals with trauma and discharged alive from 01 April 2010 to 31 March 2015 were included. Pediatric patients under 15 years old were excluded. Data for patients admitted to hospitals from 01 April 2010 to 31 March 2013 were used for development of the disability predictive index (derivation cohort), while data for patients admitted to hospitals from 01 April 2013 to 31 March 2015 were used for internal validation (validation cohort). The outcome of interest was severe physical disability, defined as a Barthel Index score of <60 at discharge. Trauma-related ICD-10 codes were categorized into 36 injury groups with reference to the categorization used in the Global Burden of Diseases study 2013. A multivariable logistic regression analysis was performed for the outcome using the injury groups and patient baseline characteristics, including patient age, sex, and Charlson Comorbidity Index (CCI) score, in the derivation cohort. A score corresponding to its regression coefficient was assigned to each injury group. The disability predictive index for each patient was defined as the sum of the scores. The predictive performance of the index was validated using receiver operating characteristic curve analysis in the validation cohort. The derivation cohort included 1,475,158 patients, while the validation cohort included 939,659 patients. Of the 939,659 patients, 235,382 (25.0%) were discharged with severe physical disability. The c-statistics of the disability predictive index
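
    A hedged sketch of the scoring approach described above: fit a logistic model on injury-group indicators, convert coefficients to integer scores, sum them per patient, and check discrimination with ROC AUC. The file and column names are hypothetical placeholders, and the x10 rounding is an illustrative convention, not the study's:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical extracts: one row per patient, 0/1 indicators for the
    # injury groups plus covariates, and the Barthel-Index-based outcome.
    derivation = pd.read_csv("derivation_cohort.csv")
    validation = pd.read_csv("validation_cohort.csv")
    features = [c for c in derivation.columns if c != "severe_disability"]

    model = LogisticRegression(max_iter=1000).fit(
        derivation[features], derivation["severe_disability"])

    # One integer score per predictor, proportional to its coefficient;
    # the index is the per-patient sum of scores.
    scores = np.round(model.coef_[0] * 10).astype(int)
    index = validation[features].to_numpy() @ scores

    print("c-statistic:", roc_auc_score(validation["severe_disability"], index))
    ```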

  6. The Examination of the Effects of Writing Strategy-Based Procedural Facilitative Environments on Students' English Foreign Language Writing Anxiety Levels

    PubMed Central

    Tsiriotakis, Ioanna K.; Vassilaki, Eleni; Spantidakis, Ioannis; Stavrou, Nektarios A. M.

    2017-01-01

    Empirical studies have shown that anxiety and negative emotion can hinder language acquisition. The present study implemented a writing instructional model so as to investigate its effects on the writing anxiety levels of English Foreign Language learners. The study was conducted with 177 participants, who were administered the Second Language Writing Anxiety Inventory (SLWAI; Cheng, 2004), which assesses somatic, cognitive and behavioral anxiety, both at baseline and following the implementation of a writing instructional model. The hypothesis stated that the participants' writing anxiety levels would lessen following the provision of a writing strategy-based procedural facilitative environment that fosters cognitive apprenticeship. The initial hypothesis was supported by the findings. Specifically, in the final measurement, statistically significant differences appeared: participants in the experimental group showed notably lower mean values on the three factors of anxiety, which can largely be attributed to the content of the intervention program applied to this specific group. The findings confirm that Foreign Language writing anxiety negatively affects Foreign Language learning and performance. The findings also support the effectiveness of strategy-based procedural facilitative writing environments that foster cognitive apprenticeship, so as to enhance language skill development and reduce feelings of Foreign Language writing anxiety. PMID:28119658

  7. The Examination of the Effects of Writing Strategy-Based Procedural Facilitative Environments on Students' English Foreign Language Writing Anxiety Levels.

    PubMed

    Tsiriotakis, Ioanna K; Vassilaki, Eleni; Spantidakis, Ioannis; Stavrou, Nektarios A M

    2016-01-01

    Empirical studies have shown that anxiety and negative emotion can hinder language acquisition. The present study implemented a writing instructional model so as to investigate its effects on the writing anxiety levels of English Foreign Language learners. The study was conducted with 177 participants, who were administered the Second Language Writing Anxiety Inventory (SLWAI; Cheng, 2004), which assesses somatic, cognitive and behavioral anxiety, both at baseline and following the implementation of a writing instructional model. The hypothesis stated that the participants' writing anxiety levels would lessen following the provision of a writing strategy-based procedural facilitative environment that fosters cognitive apprenticeship. The initial hypothesis was supported by the findings. Specifically, in the final measurement, statistically significant differences appeared: participants in the experimental group showed notably lower mean values on the three factors of anxiety, which can largely be attributed to the content of the intervention program applied to this specific group. The findings confirm that Foreign Language writing anxiety negatively affects Foreign Language learning and performance. The findings also support the effectiveness of strategy-based procedural facilitative writing environments that foster cognitive apprenticeship, so as to enhance language skill development and reduce feelings of Foreign Language writing anxiety.

  8. Quantitative impedance measurements for eddy current model validation

    NASA Astrophysics Data System (ADS)

    Khan, T. A.; Nakagawa, N.

    2000-05-01

    This paper reports on a series of laboratory-based impedance measurements, collected using a quantitatively accurate, mechanically controlled measurement station. The purpose of the measurements is to validate a BEM-based eddy current model against experiment. We have therefore selected two "validation probes," which are both split-D differential probes. Their internal structures and dimensions are extracted from x-ray CT scan data, and are thus known within the measurement tolerance. A series of measurements was carried out, using the validation probes and two Ti-6Al-4V block specimens, one containing two 1-mm long fatigue cracks, and the other containing six EDM notches of a range of sizes. A motor-controlled XY scanner performed raster scans over the cracks, with the probe riding on the surface with a spring-loaded mechanism to maintain the lift-off. Both an impedance analyzer and a commercial EC instrument were used in the measurement. The probes were driven in both differential and single-coil modes for the specific purpose of model validation. The differential measurements were done exclusively by the eddyscope, while the single-coil data were taken with both the impedance analyzer and the eddyscope. From the single-coil measurements, we obtained the transfer function to translate the voltage output of the eddyscope into impedance values, and then used it to translate the differential measurement data into impedance results. The presentation will highlight the schematics of the measurement procedure, representative raw data, an explanation of the post-data-processing procedure, and a series of resulting 2D flaw impedance results. A noise estimation will also be given, in order to quantify the accuracy of these measurements and to be used in probability-of-detection estimation. This work was supported by the NSF Industry/University Cooperative Research Program.
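
    A hedged sketch of the calibration step described above, assuming the voltage-to-impedance transfer function can be treated as linear (the abstract does not state its form): fit the function from paired single-coil readings by complex least squares, then apply it to differential-mode data. All arrays are simulated placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Paired calibration data: eddyscope voltage readings and the impedances
    # measured at the same points by the impedance analyzer (placeholders).
    v = rng.normal(size=50) + 1j * rng.normal(size=50)
    z = (2.0 - 0.5j) * v + (0.3 + 0.1j) \
        + 0.01 * (rng.normal(size=50) + 1j * rng.normal(size=50))

    # Fit a linear transfer function z ~ a*v + b by complex least squares.
    A = np.column_stack([v, np.ones_like(v)])
    (a, b), *_ = np.linalg.lstsq(A, z, rcond=None)

    # Apply the fitted transfer function to differential-mode voltages
    # (placeholder array standing in for the eddyscope differential data).
    v_diff = rng.normal(size=10) + 1j * rng.normal(size=10)
    z_diff = a * v_diff + b
    print(f"a = {a:.3f}, b = {b:.3f}")
    ```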

  9. Performing a Content Validation Study.

    ERIC Educational Resources Information Center

    Spool, Mark D.

    Content validity is concerned with three components: (1) the job content, (2) the test content, and (3) the strength of the relationship between the two. A content validation study, to be considered adequate and defensible, should include at least the following four procedures: (1) a thorough and accurate job analysis (to define the job content);…

  10. A consensus-based framework for design, validation, and implementation of simulation-based training curricula in surgery.

    PubMed

    Zevin, Boris; Levy, Jeffrey S; Satava, Richard M; Grantcharov, Teodor P

    2012-10-01

    Simulation-based training can improve technical and nontechnical skills in surgery. To date, there is no consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. The aim of this study was to define such principles and formulate them into an interoperable framework using international expert consensus based on the Delphi method. The literature was reviewed, 4 international experts were queried, and a consensus conference of national and international members of surgical societies was held to identify the items for the Delphi survey. Forty-five international experts in surgical education were invited to complete the online survey by ranking each item on a Likert scale from 1 to 5. Consensus was predefined as Cronbach's α ≥0.80. Items that 80% of experts ranked as ≥4 were included in the final framework. Twenty-four international experts with training in general surgery (n = 11), orthopaedic surgery (n = 2), obstetrics and gynecology (n = 3), urology (n = 1), plastic surgery (n = 1), pediatric surgery (n = 1), otolaryngology (n = 1), vascular surgery (n = 1), military (n = 1), and doctorate-level educators (n = 2) completed the iterative online Delphi survey. Consensus among participants was achieved after one round of the survey (Cronbach's α = 0.91). The final framework included predevelopment analysis; cognitive, psychomotor, and team-based training; curriculum validation, evaluation, and improvement; and maintenance of training. The Delphi methodology allowed for determination of international expert consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. These principles were formulated into a framework that can be used internationally across surgical specialties as a step-by-step guide for the development and validation of future simulation-based training curricula. Copyright © 2012 American College of Surgeons.
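
    A minimal sketch of the two decision rules described above: Cronbach's alpha as the consensus criterion (α ≥ 0.80) and retention of items that at least 80% of experts rated 4 or 5 on the 5-point scale. The ratings matrix is a simulated placeholder:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ratings matrix: 24 experts (rows) x 30 candidate items
    # (columns), each rated on a 5-point Likert scale.
    ratings = rng.integers(3, 6, size=(24, 30))

    def cronbach_alpha(x):
        """Cronbach's alpha for a raters-by-items score matrix."""
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    alpha = cronbach_alpha(ratings)                 # consensus if alpha >= 0.80
    keep = (ratings >= 4).mean(axis=0) >= 0.80      # 80%-of-experts rule
    print(f"alpha = {alpha:.2f}; items retained: {keep.sum()} of {ratings.shape[1]}")
    ```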

  11. Community-Based Validation of the Social Phobia Screener (SOPHS).

    PubMed

    Batterham, Philip J; Mackinnon, Andrew J; Christensen, Helen

    2017-10-01

    There is a need for brief, accurate screening scales for social anxiety disorder to enable better identification of the disorder in research and clinical settings. A five-item social anxiety screener, the Social Phobia Screener (SOPHS), was developed to address this need. The screener was validated in two samples: (a) 12,292 Australian young adults screened for a clinical trial, including 1,687 participants who completed a phone-based clinical interview and (b) 4,214 population-based Australian adults recruited online. The SOPHS (78% sensitivity, 72% specificity) was found to have comparable screening performance to the Social Phobia Inventory (77% sensitivity, 71% specificity) and Mini-Social Phobia Inventory (74% sensitivity, 73% specificity) relative to clinical criteria in the trial sample. In the population-based sample, the SOPHS was also accurate (95% sensitivity, 73% specificity) in identifying Diagnostic and Statistical Manual of Mental Disorders-Fifth edition social anxiety disorder. The SOPHS is a valid and reliable screener for social anxiety that is freely available for use in research and clinical settings.

  12. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images is acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and conveniently. Several clinical datasets are used to evaluate its accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumes of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  13. Validity of Students Worksheet Based Problem-Based Learning for 9th Grade Junior High School in living organism Inheritance and Food Biotechnology.

    NASA Astrophysics Data System (ADS)

    Jefriadi, J.; Ahda, Y.; Sumarmin, R.

    2018-04-01

    Preliminary research showed that the student worksheets used by teachers have several disadvantages: they direct learners to conduct an investigation without first orienting them to a problem or providing stimulation, they do not provide concrete images, and the presentation of activities does not follow any of the learning models recommended by the curriculum. To address these problems, a student worksheet based on problem-based learning was developed. This development research uses the Plomp model, whose phases are preliminary research, development, and assessment. The instruments used in data collection include observation/interview sheets, self-evaluation instruments, and validity instruments. Expert validation of the student worksheets gave a valid result, with an average value of 80.1%. The problem-based-learning student worksheet for 9th-grade junior high school on inheritance in living organisms and food biotechnology is therefore in the valid category.

  14. Building a computer program to support children, parents, and distraction during healthcare procedures.

    PubMed

    Hanrahan, Kirsten; McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W Nick; Zimmerman, M Bridget; Ersig, Anne L

    2012-10-01

    This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children's responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, titled Children, Parents and Distraction, is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure.
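
    A hedged sketch of the modelling pipeline summarised above: automatic feature selection down to 44 items followed by support vector machine regression, checked with cross-validation. The file name, item columns, and distress outcome are hypothetical placeholders:

    ```python
    import pandas as pd
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    df = pd.read_csv("distress_study.csv")        # hypothetical item-level dataset
    X, y = df.drop(columns=["distress"]), df["distress"]

    pipe = make_pipeline(
        StandardScaler(),
        SelectKBest(f_regression, k=44),          # 44 predictive items, as in the study
        SVR(kernel="rbf"),
    )
    print(cross_val_score(pipe, X, y, cv=5))      # multiple cross-validation folds
    ```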

  15. Building a Computer Program to Support Children, Parents, and Distraction during Healthcare Procedures

    PubMed Central

    McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W. Nick; Zimmerman, M. Bridget; Ersig, Anne L.

    2012-01-01

    This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children’s responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, the Children, Parents and Distraction (CPaD), is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure. PMID:22805121

  16. 29 CFR 1607.9 - No assumption of validity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of validity. The enforcement agencies will take into account the fact that a thorough job analysis... professional standards enhance the probability that the selection procedure is valid for the job. ...

  17. 18 CFR 284.502 - Procedures for applying for market-based rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Procedures for applying for market-based rates. 284.502 Section 284.502 Conservation of Power and Water Resources FEDERAL... POLICY ACT OF 1978 AND RELATED AUTHORITIES Applications for Market-Based Rates for Storage § 284.502...

  18. PC based temporary shielding administrative procedure (TSAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-03-01

    A completely new administrative procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased as more temporary shielding is used to meet increasingly ambitious ALARA goals. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. The software combines the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  19. Validation techniques for fault emulation of SRAM-based FPGAs

    DOE PAGES

    Quinn, Heather; Wirthlin, Michael

    2015-08-07

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness of mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If a fault emulation system does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.

  20. Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks

    NASA Astrophysics Data System (ADS)

    Yang, Wenyuan; Liu, Enshan

    2016-12-01

    This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.
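
    A minimal sketch of the inter-rater reliability figure quoted above, computed as mean pairwise percent agreement across raters; the ratings matrix (raters by tasks) is a simulated placeholder, and the agreement definition is assumed, since the article does not state its formula here:

    ```python
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical ratings: 4 raters x 53 tasks, each judged into one of a
    # few categories according to the grading criteria.
    ratings = rng.integers(0, 3, size=(4, 53))

    agreements = [(ratings[a] == ratings[b]).mean()
                  for a, b in combinations(range(ratings.shape[0]), 2)]
    print(f"mean pairwise agreement: {np.mean(agreements):.1%}")
    ```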

  1. Formalizing procedures for operations automation, operator training and spacecraft autonomy

    NASA Technical Reports Server (NTRS)

    Lecouat, Francois; Desaintvincent, Arnaud

    1994-01-01

    The generation and validation of operations procedures is a key task of mission preparation that is quite complex and costly. This has motivated the development of software applications providing support for procedure preparation. Several such applications have been developed at MATRA MARCONI SPACE (MMS) over the last five years; they are presented in the first section of this paper. The main idea is that if procedures are represented in a formal language, they can be managed more easily with a computer tool and some automatic verifications can be performed. One difficulty is to define a formal language that is easy for operators and operations engineers to use. From the experience of the various procedure management tools developed in the last five years (including the POM, EOA, and CSS projects), MMS has derived OPSMAKER, a generic tool for procedure elaboration and validation. It has been applied to quite different types of missions, ranging from crew procedures (the PREVISE system) and ground control center management procedures (the PROCSU system) to, most relevant to the present paper, satellite operation procedures (PROCSAT, developed for CNES to support the preparation and verification of SPOT 4 operation procedures, and OPSAT for MMS telecom satellite operation procedures).

  2. Activities identification for activity-based cost/management applications of the diagnostics outpatient procedures.

    PubMed

    Alrashdan, Abdalla; Momani, Amer; Ababneh, Tamador

    2012-01-01

    One of the most challenging problems facing healthcare providers is determining the actual cost of their procedures, which is important for internal accounting and price justification to insurers. The objective of this paper is to find suitable categories to identify the diagnostic outpatient medical procedures and translate them from a functional orientation to a process orientation. A hierarchical task tree is developed based on a classification schema of procedural activities. Each procedure is seen as a process consisting of a number of activities. This provides a powerful foundation for activity-based cost/management implementation and enough information to discover the value-added and non-value-added activities that assist in process improvement and may eventually lead to cost reduction. Work measurement techniques are used to identify the standard time of each activity at the lowest level of the task tree. A real case study at a private hospital is presented to demonstrate the proposed methodology. © 2011 National Association for Healthcare Quality.

  3. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within a model in order to simulate them is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempting to find appropriate validation techniques for ABM therefore seems necessary. In this paper, after a review of the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  4. Validating crash locations for quantitative spatial analysis: a GIS-based approach.

    PubMed

    Loo, Becky P Y

    2006-09-01

    In this paper, the spatial variables of the crash database in Hong Kong from 1993 to 2004 are validated. The proposed spatial data validation system makes use of three databases (the crash, road network and district board databases) and relies on GIS to carry out most of the validation steps, so that the human resources required for manually checking the accuracy of the spatial data can be enormously reduced. With the GIS-based spatial data validation system, it was found that about 65-80% of the police crash records from 1993 to 2004 had correct road names and district board information. In 2004, the police crash database contained mistakes in about 12.7% of road names and 9.7% of district boards. The situation was broadly comparable to the United Kingdom. However, the results also suggest that safety researchers should carefully validate spatial data in the crash database before scientific analysis.
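
    A hedged sketch of one validation step of this kind: cross-checking the road name recorded in each crash record against the road network database. The file and column names are hypothetical placeholders, and the exact matching rules used in the study are not reproduced here:

    ```python
    import pandas as pd

    crashes = pd.read_csv("crash_records.csv")     # police crash database
    roads = pd.read_csv("road_network.csv")        # authoritative road network

    # Case-insensitive membership check of each record's road name against
    # the set of valid road names in the network database.
    valid_names = set(roads["road_name"].str.upper())
    crashes["road_name_ok"] = crashes["road_name"].str.upper().isin(valid_names)
    print(f"{crashes['road_name_ok'].mean():.1%} of records carry a valid road name")
    ```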

  5. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  6. Applying Kane's Validity Framework to a Simulation Based Assessment of Clinical Competence

    ERIC Educational Resources Information Center

    Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud

    2018-01-01

    Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high stakes simulation based assessment strategy for certifying…

  7. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively, during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. How to develop a Standard Operating Procedure for sorting unfixed cells

    PubMed Central

    Schmid, Ingrid

    2012-01-01

    Written Standard Operating Procedures (SOPs) are an important tool to assure that recurring tasks in a laboratory are performed in a consistent manner. When the procedure covered in the SOP involves a high-risk activity such as sorting unfixed cells using a jet-in-air sorter, safety elements are critical components of the document. The details on sort sample handling, sorter set-up, validation, operation, troubleshooting, and maintenance, personal protective equipment (PPE), and operator training, outlined in the SOP are to be based on careful risk assessment of the procedure. This review provides background information on the hazards associated with sorting of unfixed cells and the process used to arrive at the appropriate combination of facility design, instrument placement, safety equipment, and practices to be followed. PMID:22381383

  9. How to develop a standard operating procedure for sorting unfixed cells.

    PubMed

    Schmid, Ingrid

    2012-07-01

    Written standard operating procedures (SOPs) are an important tool to assure that recurring tasks in a laboratory are performed in a consistent manner. When the procedure covered in the SOP involves a high-risk activity such as sorting unfixed cells using a jet-in-air sorter, safety elements are critical components of the document. The details on sort sample handling, sorter set-up, validation, operation, troubleshooting, and maintenance, personal protective equipment (PPE), and operator training, outlined in the SOP are to be based on careful risk assessment of the procedure. This review provides background information on the hazards associated with sorting of unfixed cells and the process used to arrive at the appropriate combination of facility design, instrument placement, safety equipment, and practices to be followed. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Single-Item Screening for Agoraphobic Symptoms: Validation of a Web-Based Audiovisual Screening Instrument

    PubMed Central

    van Ballegooijen, Wouter; Riper, Heleen; Donker, Tara; Martin Abello, Katherina; Marks, Isaac; Cuijpers, Pim

    2012-01-01

    The advent of web-based treatments for anxiety disorders creates a need for quick and valid online screening instruments, suitable for a range of social groups. This study validates a single-item multimedia screening instrument for agoraphobia, part of the Visual Screener for Common Mental Disorders (VS-CMD), and compares it with the text-based agoraphobia items of the PDSS-SR. The study concerned 85 subjects in an RCT of the effects of web-based therapy for panic symptoms. The VS-CMD item and items 4 and 5 of the PDSS-SR were validated by comparing scores to the outcomes of the CIDI diagnostic interview. Screening for agoraphobia was found moderately valid for both the multimedia item (sensitivity .81, specificity .66, AUC .734) and the text-based items (AUC .607–.697). Single-item multimedia screening for anxiety disorders should be further developed and tested in the general population and in patient, illiterate and immigrant samples. PMID:22844391

  11. Student Opinions about the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2014-01-01

    This study investigates how hospitality management students appreciate the role and application of the seven-step procedure in problem-based learning. A survey was developed containing sections about personal characteristics, recall of the seven steps, overall report marks, and 30 statements about the seven-step procedure. The survey was…

  12. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Code of Federal Regulations, 2014 CFR

    2014-10-01

Title 49 (Transportation), Vol. 2; Other Regulations Relating to Transportation; Pipeline and Hazardous Materials Safety Administration; General Requirements for Shipments and Packagings; Part 173, Appendix C: Procedure for Base-level Vibration Testing.

  13. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Code of Federal Regulations, 2011 CFR

    2011-10-01

Title 49 (Transportation), Vol. 2; Other Regulations Relating to Transportation; Pipeline and Hazardous Materials Safety Administration; General Requirements for Shipments and Packagings; Part 173, Appendix C: Procedure for Base-level Vibration Testing.

  14. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
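    A minimal sketch of the randomisation idea, applied here to plain linear discriminant analysis with scikit-learn; with stepwise selection, the variable-selection step would be rerun inside the permutation loop. The classifier and data handling are placeholders, not Wastell's original procedure.

```python
# Label-permutation test for discriminant analysis: the apparent (optimistic)
# accuracy is compared against a null distribution obtained by refitting on
# shuffled group labels, so the fitting bias is reproduced under the null.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def permutation_test_da(X, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    lda = LinearDiscriminantAnalysis()
    observed = lda.fit(X, y).score(X, y)        # apparent accuracy
    null = np.empty(n_perm)
    for i in range(n_perm):
        y_perm = rng.permutation(y)             # break the group structure
        null[i] = lda.fit(X, y_perm).score(X, y_perm)
    p = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p
```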

  15. A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.

    PubMed

    Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe

    2014-01-01

Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model has recently been published. It combines regression models and tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure with the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure as compared to the original procedure. The novel procedure was used for analyzing patterns of copy-number alterations in lung adenocarcinomas, with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. Results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
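    For readers unfamiliar with the selection criterion, the BIC trades goodness of fit against model size; a minimal sketch with hypothetical log-likelihoods and parameter counts:

```python
# BIC = k*ln(n) - 2*ln(L); smaller is better. Values below are hypothetical.
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

candidates = [(-1050.2, 4), (-1041.7, 7), (-1039.9, 12)]   # (logL, k) per tree
n_obs = 500
best = min(candidates, key=lambda c: bic(c[0], c[1], n_obs))
print(f"selected candidate: logL={best[0]}, k={best[1]}")
```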

  16. Validating Trial-Based Functional Analyses in Mainstream Primary School Classrooms

    ERIC Educational Resources Information Center

    Austin, Jennifer L.; Groves, Emily A.; Reynish, Lisa C.; Francis, Laura L.

    2015-01-01

    There is growing evidence to support the use of trial-based functional analyses, particularly in classroom settings. However, there currently are no evaluations of this procedure with typically developing children. Furthermore, it is possible that refinements may be needed to adapt trial-based analyses to mainstream classrooms. This study was…

  17. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
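    The underlying index is simple; the sketch below computes NDVI from red and near-infrared reflectances and correlates a ground-based series with a satellite-like series. The values are synthetic, not the validation-site data.

```python
# NDVI = (NIR - Red) / (NIR + Red); correlation of two matched NDVI series.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

rng = np.random.default_rng(1)
red_g, nir_g = rng.uniform(0.03, 0.10, 50), rng.uniform(0.3, 0.6, 50)  # in situ
red_s, nir_s = red_g * 1.05, nir_g * 0.95      # satellite values, slight bias
r = np.corrcoef(ndvi(nir_g, red_g), ndvi(nir_s, red_s))[0, 1]
print(f"in situ vs. satellite NDVI: r = {r:.2f}")
```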

  18. Full-scale laboratory validation of a wireless MEMS-based technology for damage assessment of concrete structures

    NASA Astrophysics Data System (ADS)

    Trapani, Davide; Zonta, Daniele; Molinari, Marco; Amditis, Angelos; Bimpas, Matthaios; Bertsch, Nicolas; Spiering, Vincent; Santana, Juan; Sterken, Tom; Torfs, Tom; Bairaktaris, Dimitris; Bairaktaris, Manos; Camarinopulos, Stefanos; Frondistou-Yannas, Mata; Ulieru, Dumitru

    2012-04-01

This paper illustrates an experimental campaign conducted under laboratory conditions on a full-scale reinforced concrete three-dimensional frame instrumented with wireless sensors developed within the Memscon project. In particular, it describes the assumptions on which the experimental campaign was based, the design of the structure, the laboratory setup and the results of the tests. The aim of the campaign was to validate the performance of Memscon sensing systems, consisting of wireless accelerometers and strain sensors, on a real concrete structure during construction and under an actual earthquake. Another aspect of interest was to assess the effectiveness of the full damage recognition procedure based on the data recorded by the sensors, and the reliability of the Decision Support System (DSS) developed to provide stakeholders with recommendations for building rehabilitation and estimates of its cost. To these ends, a Eurocode 8 spectrum-compatible accelerogram with increasing amplitude was applied at the top of an instrumented concrete frame built in the laboratory. MEMSCON sensors were directly compared with wired instruments, based on devices available on the market and taken as references, during both construction and seismic simulation.

  19. Computer Based Procedures for Field Workers - FY16 Research Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to

  20. 42 CFR 488.7 - Validation survey.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

Title 42 (Public Health), Vol. 5; Standards and Certification; Survey, Certification, and Enforcement Procedures; General Provisions; § 488.7 Validation survey. (a) Basis for survey: CMS may require a survey of an accredited provider or supplier to…

  1. 42 CFR 488.7 - Validation survey.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

Title 42 (Public Health), Vol. 5; Standards and Certification; Survey, Certification, and Enforcement Procedures; General Provisions; § 488.7 Validation survey. (a) Basis for survey: CMS may require a survey of an accredited provider or supplier to…

  2. 42 CFR 488.7 - Validation survey.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

Title 42 (Public Health), Vol. 5; Standards and Certification; Survey, Certification, and Enforcement Procedures; General Provisions; § 488.7 Validation survey. (a) Basis for survey: CMS may require a survey of an accredited provider or supplier to…

  3. 42 CFR 488.7 - Validation survey.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

Title 42 (Public Health), Vol. 5; Standards and Certification; Survey, Certification, and Enforcement Procedures; General Provisions; § 488.7 Validation survey. (a) Basis for survey: CMS may require a survey of an accredited provider or supplier to…

  4. 42 CFR 488.7 - Validation survey.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

Title 42 (Public Health), Vol. 5; Standards and Certification; Survey, Certification, and Enforcement Procedures; General Provisions; § 488.7 Validation survey. (a) Basis for survey: CMS may require a survey of an accredited provider or supplier to…

  5. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered; unknown parameters of the system were then determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible, then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
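    The first calibration step described above can be sketched as a threshold sweep that matches a target false-alarm rate on noise-only samples; all numbers below are hypothetical.

```python
# Pick the smallest voltage threshold whose exceedance probability on
# simulated receiver noise does not exceed the observed false-alarm rate.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.05, 100_000)       # simulated noise samples (volts)
target_far = 1e-3                            # false-alarm rate from real data

thresholds = np.linspace(0.0, 0.5, 501)
far = np.array([(noise > t).mean() for t in thresholds])
threshold = thresholds[np.argmax(far <= target_far)]
print(f"calibrated threshold: {threshold:.3f} V")
```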

  6. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions.

    PubMed

    Boß, Leif; Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-08-31

The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users' satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). First, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation = .09, comparative fit index = .96, standardized root-mean-square residual = .05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald's omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=-.35, P<.001) and perceived stress (r=-.48, P<.001) demonstrated the construct validity of the scale. The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user's general satisfaction with Web-based interventions for depression and
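    The reported reliabilities follow from a one-factor solution; a minimal sketch of McDonald's omega with hypothetical standardized loadings:

```python
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
import numpy as np

def mcdonald_omega(loadings, error_variances):
    s = np.sum(loadings) ** 2
    return s / (s + np.sum(error_variances))

loadings = np.array([0.85, 0.88, 0.82, 0.90, 0.87, 0.84, 0.86, 0.89])  # hypothetical
errors = 1.0 - loadings ** 2           # standardized one-factor solution
print(f"omega = {mcdonald_omega(loadings, errors):.2f}")
```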

  7. Face, content, and construct validity of human placenta as a haptic training tool in neurointerventional surgery.

    PubMed

    Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del

    2016-05-01

    OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.

  8. The reliability and concurrent validity of measurements used to quantify lumbar spine mobility: an analysis of an iphone® application and gravity based inclinometry.

    PubMed

    Kolber, Morey J; Pizzini, Matias; Robinson, Ashley; Yanez, Dania; Hanney, William J

    2013-04-01

PURPOSE/AIM: The purpose of this study was to investigate the reliability, minimal detectable change (MDC), and concurrent validity of active spinal mobility measurements using a gravity-based bubble inclinometer and iPhone® application. MATERIALS/METHODS: Two investigators each used a bubble inclinometer and an iPhone® with inclinometer application to measure total thoracolumbo-pelvic flexion, isolated lumbar flexion, total thoracolumbo-pelvic extension, and thoracolumbar lateral flexion in 30 asymptomatic participants using a blinded repeated measures design. RESULTS: The procedures used in this investigation for measuring spinal mobility yielded good intrarater and interrater reliability with Intraclass Correlation Coefficients (ICC) for bubble inclinometry ≥ 0.81 and the iPhone® ≥ 0.80. The MDC90 for the interrater analysis ranged from 4° to 9°. The concurrent validity between bubble inclinometry and the iPhone® application was good with ICC values of ≥ 0.86. The 95% level of agreement indicates that although these measuring instruments are equivalent, individual differences of up to 18° may exist when using these devices interchangeably. CONCLUSIONS: The bubble inclinometer and iPhone® possess good intrarater and interrater reliability as well as concurrent validity when strict measurement procedures are adhered to. This study provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying spinal mobility. Clinicians should be aware of the potential disagreement when using these devices interchangeably. LEVEL OF EVIDENCE: 2b (Observational study of reliability).
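    The MDC values reported above follow from the ICC and the between-subject SD via standard formulas; a brief sketch (the SD input is hypothetical):

```python
# SEM = SD * sqrt(1 - ICC); MDC90 = 1.645 * SEM * sqrt(2)
import numpy as np

def mdc90(sd, icc):
    sem = sd * np.sqrt(1.0 - icc)
    return 1.645 * sem * np.sqrt(2.0)

print(f"MDC90 = {mdc90(sd=8.0, icc=0.86):.1f} degrees")   # ~7, within the 4-9 range
```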

  9. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

One of the important aspects of accident mitigation in nuclear power plants is time management. Accidents should be resolved as soon as possible to prevent core melting and the release of radioactive material into the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to carry out their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display, a feature that informs operators of the time available to execute procedure steps and warns them when they reach the time limit. Such a feature would also increase operators' awareness of their current position in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The predicted action time for each step is derived from similar accident cases using support vector regression. The derived time is then processed and displayed on a CBP user interface.

  10. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  11. Mimic expert judgement through automated procedure for selecting rainfall events responsible for shallow landslide: A statistical approach to validation

    NASA Astrophysics Data System (ADS)

    Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise

    2016-01-01

This paper proposes an automated method for selecting the rainfall duration (D) and cumulated rainfall (E) responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to his or her judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day window before landslide occurrence. Statistical tests on the D and E samples, considering both paired and independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs drawn by expert judgement. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn by the coded procedure, the D-E pair most closely related to the expert's, for tracing the empirical rainfall threshold line.
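    A plausible reconstruction (not the authors' code) of drawing one D-E pair from an hourly rainfall record: walk backwards from the landslide time and stop at the first dry gap longer than a chosen limit.

```python
import numpy as np

def de_pair(rain_mm_per_h, landslide_idx, max_gap_h=6):
    """Duration (h) and cumulated rainfall (mm) of the triggering event."""
    end, start, dry = landslide_idx, landslide_idx, 0
    for i in range(landslide_idx, -1, -1):
        if rain_mm_per_h[i] > 0:
            start, dry = i, 0
        else:
            dry += 1
            if dry > max_gap_h:
                break
    return end - start + 1, float(np.sum(rain_mm_per_h[start:end + 1]))

rain = np.array([0, 0, 2, 5, 0, 1, 3, 8, 4, 0, 0, 6])   # hypothetical record
print(de_pair(rain, landslide_idx=8))                    # -> (7, 23.0)
```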

  12. Designing and validation of a yoga-based intervention for obsessive compulsive disorder.

    PubMed

    Bhat, Shubha; Varambally, Shivarama; Karmani, Sneha; Govindaraj, Ramajayam; Gangadhar, B N

    2016-06-01

    Some yoga-based practices have been found to be useful for patients with obsessive compulsive disorder (OCD). The authors could not find a validated yoga therapy module available for OCD. This study attempted to formulate a generic yoga-based intervention module for OCD. A yoga module was designed based on traditional and contemporary yoga literature. The module was sent to 10 yoga experts for content validation. The experts rated the usefulness of the practices on a scale of 1-5 (5 = extremely useful). The final version of the module was pilot-tested on patients with OCD (n = 17) for both feasibility and effect on symptoms. Eighty-eight per cent (22 out of 25) of the items in the initial module were retained, with modifications in the module as suggested by the experts along with patients' inputs and authors' experience. The module was found to be feasible and showed an improvement in symptoms of OCD on total Yale-Brown Obsessive-Compulsive Scale (YBOCS) score (p = 0.001). A generic yoga therapy module for OCD was validated by experts in the field and found feasible to practice in patients. A decrease in the symptom scores was also found following yoga practice of 2 weeks. Further clinical validation is warranted to confirm efficacy.

  13. Application and Evaluation of an Expert Judgment Elicitation Procedure for Correlations.

    PubMed

    Zondervan-Zwijnenburg, Mariëlle; van de Schoot-Hubeek, Wenneke; Lek, Kimberley; Hoijtink, Herbert; van de Schoot, Rens

    2017-01-01

    The purpose of the current study was to apply and evaluate a procedure to elicit expert judgments about correlations, and to update this information with empirical data. The result is a face-to-face group elicitation procedure with as its central element a trial roulette question that elicits experts' judgments expressed as distributions. During the elicitation procedure, a concordance probability question was used to provide feedback to the experts on their judgments. We evaluated the elicitation procedure in terms of validity and reliability by means of an application with a small sample of experts. Validity means that the elicited distributions accurately represent the experts' judgments. Reliability concerns the consistency of the elicited judgments over time. Four behavioral scientists provided their judgments with respect to the correlation between cognitive potential and academic performance for two separate populations enrolled at a specific school in the Netherlands that provides special education to youth with severe behavioral problems: youth with autism spectrum disorder (ASD), and youth with diagnoses other than ASD. Measures of face-validity, feasibility, convergent validity, coherence, and intra-rater reliability showed promising results. Furthermore, the current study illustrates the use of the elicitation procedure and elicited distributions in a social science application. The elicited distributions were used as a prior for the correlation, and updated with data for both populations collected at the school of interest. The current study shows that the newly developed elicitation procedure combining the trial roulette method with the elicitation of correlations is a promising tool, and that the results of the procedure are useful as prior information in a Bayesian analysis.

  14. Validated measurements of microbial loads on environmental surfaces in intensive care units before and after disinfecting cleaning.

    PubMed

    Frickmann, H; Bachert, S; Warnke, P; Podbielski, A

    2018-03-01

Preanalytic aspects can make results of hygiene studies difficult to compare. Efficacy of surface disinfection was assessed with an evaluated swabbing procedure. A validated microbial screening of surfaces was performed in the patients' environment and from hands of healthcare workers on two intensive care units (ICUs) prior to and after a standardized disinfection procedure. From a pure culture, the recovery rate of the swabs for Staphylococcus aureus was 35%-64% and dropped to 0%-22% from a mixed culture with 10 times more Staphylococcus epidermidis than S. aureus. Microbial surface loads 30 min before and after the cleaning procedures were indistinguishable. The quality-ensured screening procedure proved that adequate hygiene procedures are associated with a low overall colonization of surfaces and skin of healthcare workers. Unchanged microbial loads before and after surface disinfection demonstrated the low additional impact of this procedure in the endemic situation, when the pathogen load prior to surface disinfection is already low. Based on a validated screening system ensuring the interpretability and reliability of the results, the study confirms the efficiency of combined hand and surface hygiene procedures to guarantee low rates of bacterial colonization. © 2017 The Society for Applied Microbiology.

  15. Guidelines for experimental design protocol and validation procedure for the measurement of heat resistance of microorganisms in milk.

    PubMed

    Condron, Robin; Farrokh, Choreh; Jordan, Kieran; McClure, Peter; Ross, Tom; Cerf, Olivier

    2015-01-02

    Studies on the heat resistance of dairy pathogens are a vital part of assessing the safety of dairy products. However, harmonized methodology for the study of heat resistance of food pathogens is lacking, even though there is a need for such harmonized experimental design protocols and for harmonized validation procedures for heat treatment studies. Such an approach is of particular importance to allow international agreement on appropriate risk management of emerging potential hazards for human and animal health. This paper is working toward establishment of a harmonized protocol for the study of the heat resistance of pathogens, identifying critical issues for establishment of internationally agreed protocols, including a harmonized framework for reporting and interpretation of heat inactivation studies of potentially pathogenic microorganisms. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. The Validity and Reliability of a Performance Assessment Procedure in Ice Hockey

    ERIC Educational Resources Information Center

    Nadeau, Luc; Richard, Jean-Francois; Godbout, Paul

    2008-01-01

    Background: Coaches and physical educators must obtain valid data relating to the contribution of each of their players in order to assess their level of performance in team sport competition. This information must also be collected and used in real game situations to be more valid. Developed initially for a physical education class context, the…

  17. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    PubMed

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  18. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  19. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  20. Validation of the Evidence-Based Practice Process Assessment Scale

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2011-01-01

Objective: This report describes the reliability, validity, and sensitivity of a scale that assesses practitioners' perceived familiarity with, attitudes toward, and implementation of the evidence-based practice (EBP) process. Method: Social work practitioners and second-year master of social work (MSW) students (N = 511) were surveyed in four sites…

  1. Is Validation Always Valid? Cross-Cultural Complexities of Standard-Based Instruments Migrating out of Their Context

    ERIC Educational Resources Information Center

    Pastori, Giulia; Pagani, Valentina

    2017-01-01

    The international application of standard-based measures to assess ECEC quality raises crucial questions concerning the cultural complexities and the problematic validity of instruments migrating out of their cultural cradle; nevertheless the topic has received only marginal attention in the literature. This paper, which aims to address this gap,…

  2. Microbial ecology laboratory procedures manual NASA/MSFC

    NASA Technical Reports Server (NTRS)

    Huff, Timothy L.

    1990-01-01

    An essential part of the efficient operation of any microbiology laboratory involved in sample analysis is a standard procedures manual. The purpose of this manual is to provide concise and well defined instructions on routine technical procedures involving sample analysis and methods for monitoring and maintaining quality control within the laboratory. Of equal importance is the safe operation of the laboratory. This manual outlines detailed procedures to be followed in the microbial ecology laboratory to assure safety, analytical control, and validity of results.

  3. Testing a potential alternative to traditional identification procedures: Reaction time-based concealed information test does not work for lineups with cooperative witnesses.

    PubMed

    Sauerland, Melanie; Wolfs, Andrea C F; Crans, Samantha; Verschuere, Bruno

    2017-11-27

Direct eyewitness identification is widely used, but prone to error. We tested the validity of indirect eyewitness identification decisions using the reaction time-based concealed information test (CIT) for assessing cooperative eyewitnesses' face memory as an alternative to traditional lineup procedures. In a series of five experiments, a total of 401 mock eyewitnesses watched one of 11 different stimulus events that depicted a breach of law. Eyewitness identifications in the CIT were derived from reaction times to the previously seen face that were longer than those to well-matched foil faces not encountered before. Across the five experiments, the weighted mean effect size d was 0.14 (95% CI 0.08-0.19). The reaction time-based CIT seems unsuited for testing cooperative eyewitnesses' memory for faces. The careful matching of the faces required for a fair lineup, or the lack of intent to deceive, may have hampered the diagnosticity of the reaction time-based CIT.
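    The core CIT contrast can be sketched as a within-subject reaction-time comparison summarized by Cohen's d; the data below are synthetic, tuned to a weak effect like the one reported.

```python
# Per witness: RT to the previously seen face minus mean RT to matched foils;
# across witnesses: within-subject Cohen's d of that difference.
import numpy as np

rng = np.random.default_rng(2)
n = 80
rt_target = rng.normal(640, 80, n)    # ms, face seen in the stimulus event
rt_foils = rng.normal(630, 80, n)     # ms, matched foils

diff = rt_target - rt_foils
d = diff.mean() / diff.std(ddof=1)
print(f"d = {d:.2f}")
```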

  4. 46 CFR Appendix C to Part 404 - Procedures for Annual Review of Base Pilotage Rates

    Code of Federal Regulations, 2013 CFR

    2013-10-01

Title 46 (Shipping), Vol. 8; Coast Guard (Great Lakes Pilotage), Department of Homeland Security; Great Lakes Pilotage Ratemaking; Part 404, Appendix C: Procedures for Annual Review of Base Pilotage Rates.

  5. 46 CFR Appendix C to Part 404 - Procedures for Annual Review of Base Pilotage Rates

    Code of Federal Regulations, 2014 CFR

    2014-10-01

Title 46 (Shipping), Vol. 8; Coast Guard (Great Lakes Pilotage), Department of Homeland Security; Great Lakes Pilotage Ratemaking; Part 404, Appendix C: Procedures for Annual Review of Base Pilotage Rates.

  6. An experimental validation of a statistical-based damage detection approach.

    DOT National Transportation Integrated Search

    2011-01-01

    In this work, a previously-developed, statistical-based, damage-detection approach was validated for its ability to : autonomously detect damage in bridges. The damage-detection approach uses statistical differences in the actual and : predicted beha...

  7. THE RELIABILITY AND CONCURRENT VALIDITY OF MEASUREMENTS USED TO QUANTIFY LUMBAR SPINE MOBILITY: AN ANALYSIS OF AN IPHONE® APPLICATION AND GRAVITY BASED INCLINOMETRY

    PubMed Central

    Pizzini, Matias; Robinson, Ashley; Yanez, Dania; Hanney, William J.

    2013-01-01

    Purpose/Aim: This purpose of this study was to investigate the reliability, minimal detectable change (MDC), and concurrent validity of active spinal mobility measurements using a gravity‐based bubble inclinometer and iPhone® application. Materials/Methods: Two investigators each used a bubble inclinometer and an iPhone® with inclinometer application to measure total thoracolumbo‐pelvic flexion, isolated lumbar flexion, total thoracolumbo‐pelvic extension, and thoracolumbar lateral flexion in 30 asymptomatic participants using a blinded repeated measures design. Results: The procedures used in this investigation for measuring spinal mobility yielded good intrarater and interrater reliability with Intraclass Correlation Coefficients (ICC) for bubble inclinometry ≥ 0.81 and the iPhone® ≥ 0.80. The MDC90 for the interrater analysis ranged from 4° to 9°. The concurrent validity between bubble inclinometry and the iPhone® application was good with ICC values of ≥ 0.86. The 95% level of agreement indicates that although these measuring instruments are equivalent individual differences of up to 18° may exist when using these devices interchangeably. Conclusions: The bubble inclinometer and iPhone® possess good intrarater and interrater reliability as well as concurrent validity when strict measurement procedures are adhered to. This study provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying spinal mobility. Clinicians should be aware of the potential disagreement when using these devices interchangeably. Level of Evidence: 2b (Observational study of reliability) PMID:23593551

  8. Airport Landside - Volume III : ALSIM Calibration and Validation.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...

  9. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489

  10. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
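    One concrete instance of the white noise tests described: a Ljung-Box test on one-step-ahead prediction errors with statsmodels; a significant p-value flags unmodeled sequential dependence and would trigger revision of the map.

```python
# Ljung-Box test of residual autocorrelation up to lag 10. The residuals here
# are synthetic white noise standing in for a model's prediction errors.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
resid = rng.normal(size=200)
print(acorr_ljungbox(resid, lags=[10]))    # columns: lb_stat, lb_pvalue
```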

  11. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team processes, but its impact on behavioural outcomes may be moderated by other factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution histories to one another. The experimental results show that, compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  12. Simple procedure for phase-space measurement and entanglement validation

    NASA Astrophysics Data System (ADS)

    Rundle, R. P.; Mills, P. W.; Tilma, Todd; Samson, J. H.; Everitt, M. J.

    2017-08-01

    It has recently been shown that it is possible to represent the complete quantum state of any system as a phase-space quasiprobability distribution (Wigner function) [Phys. Rev. Lett. 117, 180401 (2016), 10.1103/PhysRevLett.117.180401]. Such functions take the form of expectation values of an observable that has a direct analogy to displaced parity operators. In this work we give a procedure for the measurement of the Wigner function that should be applicable to any quantum system. We have applied our procedure to IBM's Quantum Experience five-qubit quantum processor to demonstrate that we can measure and generate the Wigner functions of two different Bell states as well as the five-qubit Greenberger-Horne-Zeilinger state. Because Wigner functions for spin systems are not unique, we define, compare, and contrast two distinct examples. We show how the use of these Wigner functions leads to an optimal method for quantum state analysis especially in the situation where specific characteristic features are of particular interest (such as for spin Schrödinger cat states). Furthermore we show that this analysis leads to straightforward, and potentially very efficient, entanglement test and state characterization methods.
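    A sketch of the construction for spin systems, under our reading of the cited work: the Wigner function is the expectation of a rotated parity-like kernel, taken here as Δ(θ, φ) = U · ½(I + √3 σz) · U† per qubit; the normalization constant is an assumption, not a verified detail. Evaluated at one phase-space point for a Bell state:

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

def u(theta, phi):
    """SU(2) rotation taking the z-axis to direction (theta, phi)."""
    return np.array([[np.cos(theta / 2), -np.exp(-1j * phi) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2), np.cos(theta / 2)]])

def kernel(theta, phi):
    # Assumed parity-like kernel (1/2)(I + sqrt(3) sigma_z), rotated to (theta, phi)
    U = u(theta, phi)
    return U @ (0.5 * (I2 + np.sqrt(3.0) * sz)) @ U.conj().T

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(bell, bell.conj())

# Wigner value at one point (theta1, phi1) x (theta2, phi2)
D = np.kron(kernel(0.8, 0.3), kernel(1.1, 2.0))
print(f"W = {np.real(np.trace(rho @ D)):.3f}")
```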

  13. Translating the Simulation of Procedural Drilling Techniques for Interactive Neurosurgical Training

    PubMed Central

    Stredney, Don; Rezai, Ali R.; Prevedello, Daniel M.; Elder, J. Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J.

    2014-01-01

    Background Through previous and concurrent efforts, we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. This volumetric data helps drive an interactive multi-sensory, i.e., visual (stereo), aural (stereo), and tactile simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the CNS simulation initiative. Objective The goal of this multi-level development is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. Methods We discuss issues of biofidelity as well as our methods to provide objective, quantitative automated assessment for the residents. Results We conclude with a discussion of our experiences by reporting on preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. Conclusion We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principles and define the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum. PMID:24051887

  14. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing

    NASA Astrophysics Data System (ADS)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.

  15. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    PubMed

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture-for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments-as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series-daily Poaceae pollen concentrations over the period 2006-2014-was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
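    A minimal sketch of the pipeline: STL decomposition of a daily series with statsmodels, followed by PLS regression of the residual on meteorological predictors with scikit-learn. The series and predictors are synthetic placeholders, not the Poaceae data.

```python
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(4)
days = pd.date_range("2006-01-01", periods=9 * 365, freq="D")
t = np.arange(days.size)
pollen = 50 + 40 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, t.size)

stl = STL(pd.Series(pollen, index=days), period=365).fit()
residual = stl.resid.to_numpy()

X = rng.normal(size=(t.size, 2))       # stand-ins for temperature and rainfall
pls = PLSRegression(n_components=2).fit(X, residual)
reconstructed = stl.trend.to_numpy() + stl.seasonal.to_numpy() + pls.predict(X).ravel()
```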

  16. Standardizing the classification of abortion incidents: the Procedural Abortion Incident Reporting and Surveillance (PAIRS) Framework.

    PubMed

    Taylor, Diana; Upadhyay, Ushma D; Fjerstad, Mary; Battistelli, Molly F; Weitz, Tracy A; Paul, Maureen E

    2017-07-01

To develop and validate standardized criteria for assessing abortion-related incidents (adverse events, morbidities, near misses) for first-trimester aspiration abortion procedures and to demonstrate the utility of a standardized framework [the Procedural Abortion Incident Reporting & Surveillance (PAIRS) Framework] for estimating serious abortion-related adverse events. As part of a California-based study of early aspiration abortion provision conducted between 2007 and 2013, we developed and validated a standardized framework for defining and monitoring first-trimester (≤14 weeks) aspiration abortion morbidity and adverse events using multiple methods: a literature review, framework criteria testing with empirical data, repeated expert reviews and data-based revisions to the framework. The final framework distinguishes incidents resulting from procedural abortion care (adverse events) from morbidity related to pregnancy, the abortion process and other nonabortion-related conditions. It further classifies incidents by diagnosis (confirmatory data, etiology, risk factors), management (treatment type and location), timing (immediate or delayed), seriousness (minor or major) and outcome. Empirical validation of the framework using data from 19,673 women receiving aspiration abortions revealed almost an equal proportion of total adverse events (n=205, 1.04%) and total abortion- or pregnancy-related morbidity (n=194, 0.99%). The majority of adverse events were due to retained products of conception (0.37%), failed attempted abortion (0.15%) and postabortion infection (0.17%). Serious or major adverse events were rare (n=11, 0.06%). Distinguishing morbidity diagnoses from adverse events using a standardized, empirically tested framework confirms the very low frequency of serious adverse events related to clinic-based abortion care. The PAIRS Framework provides a useful set of tools to systematically classify and monitor abortion-related incidents for first
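    Purely as an illustration of the classification axes named above, and not a structure taken from the paper, a compact encoding of a PAIRS-style incident record might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class IncidentType(Enum):
    ADVERSE_EVENT = "resulting from procedural abortion care"
    MORBIDITY = "related to pregnancy, the abortion process, or other conditions"

class Seriousness(Enum):
    MINOR = "minor"
    MAJOR = "major"

@dataclass
class Incident:
    diagnosis: str              # e.g., "retained products of conception"
    incident_type: IncidentType
    immediate: bool             # timing: immediate vs. delayed
    seriousness: Seriousness
    management: str             # treatment type and location
    outcome: str
```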

  17. The PDS-based Data Processing, Archiving and Management Procedures in Chang'e Mission

    NASA Astrophysics Data System (ADS)

    Zhang, Z. B.; Li, C.; Zhang, H.; Zhang, P.; Chen, W.

    2017-12-01

    PDS is adopted as the standard format of scientific data and the foundation of all data-related procedures in the Chang'e mission. Unlike the geographically distributed nature of the Planetary Data System, all procedures of data processing, archiving, management and distribution are carried out at the headquarters of the Ground Research and Application System of the Chang'e mission in a centralized manner. The raw data acquired by the ground stations are transmitted to and processed by the data preprocessing subsystem (DPS) for the production of PDS-compliant Level 0 to Level 2 data products using established algorithms, with each product file being well described using an attached label; all products with the same orbit number are then put together into a scheduled task for archiving, along with an XML archive list file recording all product files' properties such as file name, file size etc. After receiving the archive request from the DPS, the data management subsystem (DMS) is invoked to parse the XML list file, to validate all the claimed files and their compliance with PDS using a prebuilt data dictionary, and then to extract metadata of each data product file from its PDS label and the fields of its normalized filename. Various requirements of data management, retrieval, distribution and application can be well met using the flexible combination of the rich metadata empowered by the PDS. In the forthcoming CE-5 mission, all the designs of data structures and procedures will be updated from PDS version 3, used in the previous CE-1, CE-2 and CE-3 missions, to the new version 4. The main changes are: 1) a dedicated detached XML label will be used to describe the corresponding scientific data acquired by the 4 instruments carried, and the XML parsing framework used in archive list validation will be reused for the label after some necessary adjustments; 2) all the image data acquired by the panorama camera, landing camera and lunar mineralogical spectrometer will use an Array_2D_Image/Array_3D_Image object for storage.
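
    The archive-list validation step described above (parse the XML list, verify each claimed product file, extract metadata from the normalized filename) can be sketched as follows. The tag names, filename convention and size check are assumptions for illustration; the mission's actual schema and data dictionary are not given in this abstract.

      # Sketch of the archive-list validation step: parse the XML list, verify each
      # claimed product file, and pull metadata from the normalized filename.
      # Tag names and the filename convention are assumptions, not the actual schema.
      import os
      import xml.etree.ElementTree as ET

      def validate_archive_list(list_path, data_dir):
          records = []
          root = ET.parse(list_path).getroot()
          for product in root.iter("product"):           # hypothetical tag name
              name = product.findtext("file_name")
              claimed_size = int(product.findtext("file_size"))
              path = os.path.join(data_dir, name)
              if not os.path.isfile(path) or os.path.getsize(path) != claimed_size:
                  raise ValueError("archive list entry failed validation: " + name)
              # Hypothetical normalized filename: MISSION_INSTRUMENT_LEVEL_ORBIT.ext
              mission, instrument, level, orbit = os.path.splitext(name)[0].split("_")
              records.append({"mission": mission, "instrument": instrument,
                              "level": level, "orbit": orbit, "size": claimed_size})
          return records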

  18. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in the development and validation sets, respectively, 21 and 24 had EoE. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
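
    For readers outside epidemiology, the operating characteristics quoted above follow from a standard 2x2 comparison of algorithm output against the chart-review reference. A toy computation, with invented cell counts chosen to roughly reproduce the validation-set figures:

      # Toy computation of the operating characteristics quoted above; the 2x2 cell
      # counts are invented to roughly reproduce the validation-set figures.
      def operating_characteristics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),  # algorithm-positive among true cases
              "specificity": tn / (tn + fp),  # algorithm-negative among non-cases
              "ppv": tp / (tp + fp),          # true cases among algorithm positives
          }

      # A hypothetical validation set of 49 candidates, 24 of them true EoE cases:
      print(operating_characteristics(tp=21, fp=1, fn=3, tn=24))
      # -> sensitivity ~0.88, specificity ~0.96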

  19. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    PubMed

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study incorporated value indicators, with data on basic manpower expended, time spent, technical difficulty, and degree of risk drawn from the latest standards for the price of medical procedures in China, and offers a price adjustment model with the relative price ratio as a key index. The study examined 144 TCM procedures and found that their prices were based mainly on the value of the medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low; on average, the current price amounted to 56% of a procedure's standardized value. The ratio of current price to standardized value was markedly lower for acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. A total of 79 procedures were selected and adjusted by priority, and the relationship between the prices of TCM procedures and the suggested prices was significantly optimized (p < 0.01). The study suggests that adjusting the price of medical procedures based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions with predominantly fee-for-service (FFS) medical care.
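
    A minimal sketch of the parity comparison at the core of the model: each procedure's current price is divided by its standardized value, and low-ratio procedures are prioritized for adjustment. The records and numbers below are invented; the study's actual indicator weights are not reproduced here.

      # Invented records illustrating the parity comparison: current price divided
      # by standardized value, with low-ratio procedures prioritized for adjustment.
      procedures = [
          {"name": "acupuncture A", "current_price": 20.0, "standardized_value": 55.0},
          {"name": "moxibustion B", "current_price": 18.0, "standardized_value": 30.0},
          {"name": "tuina C", "current_price": 35.0, "standardized_value": 40.0},
      ]

      for p in procedures:
          p["parity"] = p["current_price"] / p["standardized_value"]

      # Lowest parity = most underpriced relative to standardized value.
      for p in sorted(procedures, key=lambda q: q["parity"]):
          print(p["name"], round(p["parity"], 2))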

  20. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions

    PubMed Central

    Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-01-01

    Background: The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing users' satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. Objective: The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). Methods: The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). First, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were then tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Results: Multigroup confirmatory analyses yielded a one-factorial structure with a good fit (root-mean-square error of approximation = .09, comparative fit index = .96, standardized root-mean-square residual = .05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=−.35, P<.001) and perceived stress (r=−.48, P<.001) demonstrated the construct validity of the scale. Conclusions: The internal consistency, factorial structure, and construct validity demonstrated here indicate a good overall psychometric quality of the CSQ-I as a measure of users' general satisfaction with Web-based health interventions.

  1. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
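
    The arithmetic behind MGV can be shown in a few lines. If two groups have different (estimated) base rates of pathology p1 and p2, the observed rate of a positive sign in each group is a mixture, r_i = p_i*TPR + (1 - p_i)*FPR, and the two equations can be solved for TPR and FPR without knowing any individual's true status. The numbers below are invented for illustration:

      # Two groups with different estimated base rates p of malingering; r is the
      # observed positive rate of a diagnostic sign in each group. Solving
      # r_i = p_i*TPR + (1 - p_i)*FPR gives TPR and FPR without any individual
      # diagnoses. All numbers are invented.
      import numpy as np

      p = np.array([0.30, 0.70])       # estimated base rates in two settings
      r = np.array([0.24, 0.52])       # observed sign-positive rates

      A = np.column_stack([p, 1 - p])  # rows: [p_i, 1 - p_i]
      tpr, fpr = np.linalg.solve(A, r)
      print(round(tpr, 2), round(fpr, 2))   # -> 0.73 0.03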

  2. 40 CFR 86.1341-90 - Test cycle validation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...

  3. 40 CFR 86.1341-90 - Test cycle validation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...

  4. 40 CFR 86.1341-90 - Test cycle validation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-90 Test cycle validation criteria. (a) To minimize the biasing effect of the time lag... brake horsepower-hour. (c) Regression line analysis to calculate validation statistics. (1) Linear...
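
    The three CFR entries above all reference a least-squares regression of measured engine feedback on the reference cycle to compute validation statistics. A generic sketch of such statistics (slope, intercept, standard error of estimate, r^2) follows; the pass/fail tolerances shown are placeholders, not the limits in 40 CFR 86.1341-90.

      # Generic regression-based cycle validation statistics; the tolerance window
      # below is a placeholder, not the limits given in 40 CFR 86.1341-90.
      import numpy as np

      def cycle_validation_stats(reference, feedback):
          slope, intercept = np.polyfit(reference, feedback, 1)
          predicted = slope * reference + intercept
          see = np.sqrt(np.sum((feedback - predicted) ** 2) / (len(feedback) - 2))
          r2 = np.corrcoef(reference, feedback)[0, 1] ** 2
          return {"slope": slope, "intercept": intercept, "see": see, "r2": r2}

      ref = np.linspace(100, 600, 50)   # e.g., reference torque set points
      fb = 0.99 * ref + np.random.default_rng(1).normal(0, 5, 50)
      stats = cycle_validation_stats(ref, fb)
      ok = 0.95 <= stats["slope"] <= 1.05 and stats["r2"] >= 0.97  # placeholders
      print(stats, "PASS" if ok else "FAIL")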

  5. Interprofessional and interdisciplinary simulation-based training leads to safe sedation procedures in the emergency department.

    PubMed

    Sauter, Thomas C; Hautz, Wolf E; Hostettler, Simone; Brodmann-Maeder, Monika; Martinolli, Luca; Lehmann, Beat; Exadaktylos, Aristomenis K; Haider, Dominik G

    2016-08-02

    Sedation is a procedure required for many interventions in the emergency department (ED), such as reductions, surgical procedures or cardioversions. However, especially under emergency conditions with high-risk patients and rapidly changing interdisciplinary and interprofessional teams, the procedure carries important risks. It is thus vital but difficult to implement a standard operating procedure for sedation in any ED. Reports on both implementation strategies and their success are currently lacking. This study describes the development, implementation and clinical evaluation of an interprofessional and interdisciplinary simulation-based sedation training concept. All physicians and nurses with specialised training in emergency medicine at the Berne University Department of Emergency Medicine participated in a mandatory interdisciplinary and interprofessional simulation-based sedation training. The curriculum consisted of an individual self-learning module, an airway skill training course, three simulation-based team training cases, and a final practical learning course in the operating theatre. Before and after each training session, self-efficacy, awareness of emergency procedures, knowledge of sedation medication and crisis resource management were assessed with a questionnaire. Changes in these measures were compared via paired tests, separately for groups formed based on experience and profession. To assess the clinical effect of training, we collected patient and team satisfaction as well as duration and complications for all sedations in the ED within the year after implementation. We further compared time to beginning of procedure, duration of procedure and time until discharge after implementation with the one-year period before implementation. Cohen's d was calculated as the effect size for all statistically significant tests. Fifty staff members (26 nurses and 24 physicians) participated in the training. In all subgroups

  6. Consent Procedures and Participation Rates in School-Based Intervention and Prevention Research: Using a Multi-Component, Partnership-Based Approach to Recruit Participants

    PubMed Central

    Leff, Stephen S.; Franko, Debra L.; Weinstein, Elana; Beakley, Kelly; Power, Thomas J.

    2009-01-01

    Evaluations of school-based interventions and prevention programs typically require parental consent for students to participate. In school-based efforts, program evaluators may have limited access to parents and considerable effort is required to obtain signed consent. This issue is particularly salient when conducting research in under-resourced, urban schools, where parent involvement in the school setting may be somewhat limited. The aims of this article were to (a) examine the published school-based prevention and intervention literature to assess the state of the field in terms of consent procedures and participation rates; and (b) describe two examples of health promotion studies that used multi-component, partnership-based strategies in urban schools to encourage communication among children, their parents, and researchers. The purpose of the case studies was to generate hypotheses to advance the science related to school-based participant recruitment for research studies. Of nearly 500 studies reviewed, only 11.5% reported both consent procedures and participation rates. Studies using active consent procedures had a mean participation rate of 65.5% (range: 11–100%). This article highlights the need for researchers to report consent procedures and participation rates and describes partnership-based strategies used to enroll students into two urban, school-based health promotion studies. PMID:19834586

  7. Emergency Kausch-Whipple procedure: indications and experiences.

    PubMed

    Standop, Jens; Glowka, Tim; Schmitz, Volker; Schaefer, Nico; Hirner, Andreas; Kalff, Jörg C

    2010-03-01

    Pancreaticoduodenectomy is a demanding procedure even in selected patients but becomes formidable when performed in cases of emergency. This study analyzed our experience with urgent pancreatoduodenectomies; special emphasis was placed on evaluating diagnostic means and validating existing indications for this procedure. Three hundred one patients who underwent pancreatoduodenectomy between 1989 and 2008 were identified from a pancreatic resection database and reviewed for emergency indications. Six patients (2%) underwent emergency pancreatoduodenectomy. Indications included endoscopy-related perforation, postoperative complications, and uncontrollable intraduodenal tumor bleeding. Length of stay and the occurrence of nonsurgical complications were increased in emergency compared with elective pancreatoduodenectomies. Although higher, mortality and surgery-related complication rates did not differ significantly. Indications for emergency pancreatoduodenectomies were based on clinical decisions rather than on radiologic diagnostics. Urgent pancreatic head resections may be considered an option in selected patients if handling of local complications by interventional measures or limited surgery seems unsafe.

  8. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Procedure for Base-level Vibration Testing C Appendix C to Part 173 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Base-level Vibration Testing Base-level vibration testing shall be conducted as follows: 1. Three...

  9. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Procedure for Base-level Vibration Testing C Appendix C to Part 173 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Base-level Vibration Testing Base-level vibration testing shall be conducted as follows: 1. Three...

  10. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  11. 32 CFR 634.35 - Chemical testing policies and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Chemical testing policies and procedures. 634.35... Chemical testing policies and procedures. (a) Validity of chemical testing. Results of chemical testing are... instruction manual. (iv) Perform preventive maintenance as required by the instruction manual. (c) Chemical...

  12. 32 CFR 634.35 - Chemical testing policies and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 4 2011-07-01 2011-07-01 false Chemical testing policies and procedures. 634.35... Chemical testing policies and procedures. (a) Validity of chemical testing. Results of chemical testing are... instruction manual. (iv) Perform preventive maintenance as required by the instruction manual. (c) Chemical...

  13. 32 CFR 634.35 - Chemical testing policies and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 4 2012-07-01 2011-07-01 true Chemical testing policies and procedures. 634.35... Chemical testing policies and procedures. (a) Validity of chemical testing. Results of chemical testing are... instruction manual. (iv) Perform preventive maintenance as required by the instruction manual. (c) Chemical...

  14. 32 CFR 634.35 - Chemical testing policies and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 4 2014-07-01 2013-07-01 true Chemical testing policies and procedures. 634.35... Chemical testing policies and procedures. (a) Validity of chemical testing. Results of chemical testing are... instruction manual. (iv) Perform preventive maintenance as required by the instruction manual. (c) Chemical...

  15. 32 CFR 634.35 - Chemical testing policies and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 4 2013-07-01 2013-07-01 false Chemical testing policies and procedures. 634.35... Chemical testing policies and procedures. (a) Validity of chemical testing. Results of chemical testing are... instruction manual. (iv) Perform preventive maintenance as required by the instruction manual. (c) Chemical...

  16. A mixture gatekeeping procedure based on the Hommel test for clinical trial applications.

    PubMed

    Brechenmacher, Thomas; Xu, Jane; Dmitrienko, Alex; Tamhane, Ajit C

    2011-07-01

    When conducting clinical trials with hierarchically ordered objectives, it is essential to use multiplicity adjustment methods that control the familywise error rate in the strong sense while taking into account the logical relations among the null hypotheses. This paper proposes a gatekeeping procedure based on the Hommel (1988) test, which offers power advantages compared to other p-value-based tests proposed in the literature. A general description of the procedure is given and details are presented on how it can be applied to complex clinical trial designs. Two clinical trial examples are given to illustrate the methodology developed in the paper.
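
    For orientation, the Hommel adjustment underlying the proposed procedure is available in statsmodels. The sketch below shows a simple serial gate built on it, with invented p-values; it illustrates the building block only and is not the paper's full mixture gatekeeping procedure.

      # Building block only: Hommel-adjusted testing of a primary family, with a
      # secondary family tested only if the gate is passed. P-values are invented;
      # this is not the paper's full mixture gatekeeping procedure.
      from statsmodels.stats.multitest import multipletests

      primary = [0.012, 0.030]     # primary-endpoint null hypotheses
      secondary = [0.020, 0.041]   # tested only if all primary nulls are rejected

      rej, adj, _, _ = multipletests(primary, alpha=0.05, method="hommel")
      if rej.all():                # serial gate on the primary family
          rej2, adj2, _, _ = multipletests(secondary, alpha=0.05, method="hommel")
          print("secondary adjusted p:", adj2.round(3))
      else:
          print("gate not passed; secondary family not tested")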

  17. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation of tests in Speech and Language Pathology was created.

  18. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    ERIC Educational Resources Information Center

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  19. Clinical Validity of the ADI-R in a US-Based Latino Population.

    PubMed

    Vanegas, Sandra B; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-05-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children and adolescents with ASD and developmental disability. Sensitivity and specificity of the ADI-R as a diagnostic tool were moderate, but lower than previously reported values. Validity of the social reciprocity and restrictive and repetitive behaviors domains was high, but low in the communication domain. Findings suggest that language discordance between caregiver and child may influence reporting of communication symptoms and contribute to lower sensitivity and specificity.

  20. Procedural virtual reality simulation in minimally invasive surgery.

    PubMed

    Våpenstad, Cecilie; Buzink, Sonja N

    2013-02-01

    Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in the scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done for scientific publications presenting or applying VR simulators to train or assess procedural skills in the PUBMED and SCOPUS databases. Responses from five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies stated that they altogether offer 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 were also found in the literature review. Although study types and outcomes vary among the three fields, approximately 90% of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation; this was the case in approximately 75% for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large number of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation for improving clinical outcomes.

  1. Validation of the Chinese Version of the Epistemic Beliefs Inventory Using Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Wang, Xihui; Zhang, Zhidong; Zhang, Xiuying; Hou, Dadong

    2013-01-01

    The Epistemic Beliefs Inventory (EBI), as a theory-based and empirically validated instrument, was originally developed and widely used in the North American context. Through a strict translation procedure the authors translated the EBI into Chinese, and then administered it to 451 students in 7 universities in mainland China. The construct…

  2. Improving liquid chromatography-tandem mass spectrometry determinations by modifying noise frequency spectrum between two consecutive wavelet-based low-pass filtering procedures.

    PubMed

    Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

    2010-04-23

    This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filtering procedures to improve peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other low-pass procedures such as matched filters have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters, yielding the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. Tedious trials setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram by an artificial chromatogram with added thermal noise before the second wavelet-based low-pass filtering step. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filtering alone cannot accomplish more efficient peak S/N ratio improvement in LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with this multiplication prior to the second low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of the low-pass filtered chromatogram, which originally contains components below the filter cut-off frequency, is altered by the multiplication to span a broader range; once the modified noise spectrum shifts largely above the cut-off frequency, the second low-pass filter can reject it far more effectively.
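
    A minimal sketch of the scheme, assuming PyWavelets: a wavelet low-pass filter keeps only the level-5 approximation coefficients, the filtered chromatogram is multiplied by an artificial noise-added chromatogram, and the product is low-pass filtered again. The wavelet, level count and noise amplitudes are illustrative assumptions, not the paper's settings.

      # Sketch with PyWavelets: keep only level-5 approximation coefficients
      # (low-pass), multiply by an artificial noise-added chromatogram, then
      # low-pass again. Wavelet, level and noise amplitudes are assumptions.
      import numpy as np
      import pywt

      def wavelet_lowpass(signal, wavelet="db4", level=5):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(signal)]

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 1024)
      peak = np.exp(-((t - 0.5) ** 2) / 0.001)         # idealized elution peak
      chrom = peak + rng.normal(0, 0.2, t.size)        # noisy chromatogram

      first = wavelet_lowpass(chrom)
      artificial = peak + rng.normal(0, 0.05, t.size)  # noise-added artificial trace
      second = wavelet_lowpass(first * artificial)     # multiply, then filter again

      print(round(second.max() / second[:200].std(), 1))  # crude peak S/N estimate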

  3. Validation of the Learning Progression-based Assessment of Modern Genetics in a college context

    NASA Astrophysics Data System (ADS)

    Todd, Amber; Romine, William L.

    2016-07-01

    Building upon a methodologically diverse research foundation, we adapted and validated the Learning Progression-based Assessment of Modern Genetics (LPA-MG) for college students' knowledge of the domain. Toward collecting valid learning progression-based measures in a college majors context, we redeveloped and content-validated most of a previous version of the LPA-MG, which had been developed for high school students. Using a Rasch model calibrated on 316 students from 2 sections of majors introductory biology, we demonstrate the validity of this version and describe how college students' ideas of modern genetics are likely to change as students progress from low to high understanding. We then use these findings to build theory around the connections that college students at different levels of understanding make within and across the many ideas of the domain.
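
    The Rasch calibration mentioned above rests on a simple item-response formula: the probability that a student of ability theta answers an item of difficulty b correctly is exp(theta - b)/(1 + exp(theta - b)). A minimal illustration with invented values:

      # Dichotomous Rasch model: P(correct) for ability theta and item difficulty b.
      import math

      def rasch_p(theta, b):
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      print(round(rasch_p(theta=-1.0, b=0.5), 2))   # low understanding  -> ~0.18
      print(round(rasch_p(theta=2.0, b=0.5), 2))    # high understanding -> ~0.82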

  4. 40 CFR 86.1341-98 - Test cycle validation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-98 Test cycle validation criteria. Section 86.1341-98 includes text that specifies...-90 (d)(4), shall be excluded from both cycle validation and the integrated work used for emissions...

  5. 40 CFR 86.1341-98 - Test cycle validation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-98 Test cycle validation criteria. Section 86.1341-98 includes text that specifies...-90 (d)(4), shall be excluded from both cycle validation and the integrated work used for emissions...

  6. 40 CFR 86.1341-98 - Test cycle validation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Test cycle validation criteria. 86... Procedures § 86.1341-98 Test cycle validation criteria. Section 86.1341-98 includes text that specifies...-90 (d)(4), shall be excluded from both cycle validation and the integrated work used for emissions...

  7. 49 CFR Appendix C to Part 173 - Procedure for Base-level Vibration Testing

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 2 2012-10-01 2012-10-01 false Procedure for Base-level Vibration Testing C... Base-level Vibration Testing Base-level vibration testing shall be conducted as follows: 1. Three... platform. 4. Immediately following the period of vibration, each package shall be removed from the platform...

  8. 40 CFR 91.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Post-test analyzer procedures. 91.411... Post-test analyzer procedures. (a) Perform a hang-up check within 60 seconds of the completion of the... and record the post-test data specified in § 91.405(e). (e) For a valid test, the analyzer drift...

  9. 40 CFR 91.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Post-test analyzer procedures. 91.411... Post-test analyzer procedures. (a) Perform a hang-up check within 60 seconds of the completion of the... and record the post-test data specified in § 91.405(e). (e) For a valid test, the analyzer drift...

  10. 40 CFR 91.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Post-test analyzer procedures. 91.411... Post-test analyzer procedures. (a) Perform a hang-up check within 60 seconds of the completion of the... and record the post-test data specified in § 91.405(e). (e) For a valid test, the analyzer drift...

  11. 40 CFR 91.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Post-test analyzer procedures. 91.411... Post-test analyzer procedures. (a) Perform a hang-up check within 60 seconds of the completion of the... and record the post-test data specified in § 91.405(e). (e) For a valid test, the analyzer drift...

  12. 40 CFR 91.411 - Post-test analyzer procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Post-test analyzer procedures. 91.411... Post-test analyzer procedures. (a) Perform a hang-up check within 60 seconds of the completion of the... and record the post-test data specified in § 91.405(e). (e) For a valid test, the analyzer drift...

  13. 50 CFR 300.152 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... Nationals Fishing in Russian Fisheries § 300.152 Procedures. (a) Application for annual permits. U.S. vessel owners and operators must have a valid permit issued by the Russian Federation obtained pursuant to a complete application submitted through NMFS before fishing in the Russian EZ or for Russian fishery...

  14. 50 CFR 300.152 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Nationals Fishing in Russian Fisheries § 300.152 Procedures. (a) Application for annual permits. U.S. vessel owners and operators must have a valid permit issued by the Russian Federation obtained pursuant to a complete application submitted through NMFS before fishing in the Russian EZ or for Russian fishery...

  15. 50 CFR 300.152 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... Nationals Fishing in Russian Fisheries § 300.152 Procedures. (a) Application for annual permits. U.S. vessel owners and operators must have a valid permit issued by the Russian Federation obtained pursuant to a complete application submitted through NMFS before fishing in the Russian EZ or for Russian fishery...

  16. 50 CFR 300.152 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Nationals Fishing in Russian Fisheries § 300.152 Procedures. (a) Application for annual permits. U.S. vessel owners and operators must have a valid permit issued by the Russian Federation obtained pursuant to a complete application submitted through NMFS before fishing in the Russian EZ or for Russian fishery...

  17. 50 CFR 300.152 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Nationals Fishing in Russian Fisheries § 300.152 Procedures. (a) Application for annual permits. U.S. vessel owners and operators must have a valid permit issued by the Russian Federation obtained pursuant to a complete application submitted through NMFS before fishing in the Russian EZ or for Russian fishery...

  18. A Rasch-Based Validation of the Vocabulary Size Test

    ERIC Educational Resources Information Center

    Beglar, David

    2010-01-01

    The primary purpose of this study was to provide preliminary validity evidence for a 140-item form of the Vocabulary Size Test, which is designed to measure written receptive knowledge of the first 14,000 words of English. Nineteen native speakers of English and 178 native speakers of Japanese participated in the study. Analyses based on the Rasch…

  19. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.

  20. [Symptom and complaint validation of chronic pain in social medical evaluation. Part I: Terminological and methodological approaches].

    PubMed

    Dohrenbusch, R

    2009-06-01

    Chronic pain accompanied by disability and handicap is a frequent symptom necessitating medical assessment. Current guidelines for the assessment of malingering suggest discrimination between explanatory demonstration, aggravation and simulation. However, this distinction has not clearly been put into operation and validated. The necessity of assessment strategies based on general principles of psychological assessment and testing is emphasized. Standardized and normalized psychological assessment methods and symptom validation techniques should be used in the assessment of subjects with chronic pain problems. An adaptive procedure for assessing the validity of complaints is suggested to minimize effort and costs.

  1. 20 CFR 404.346 - Your relationship as wife, husband, widow, or widower based upon a deemed valid marriage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... widower based upon a deemed valid marriage. 404.346 Section 404.346 Employees' Benefits SOCIAL SECURITY... relationship as wife, husband, widow, or widower based upon a deemed valid marriage. (a) General. If your... explained in § 404.345, you may be eligible for benefits based upon a deemed valid marriage. You will be...

  2. Validity of "Hi_Science" as instructional media based-android refer to experiential learning model

    NASA Astrophysics Data System (ADS)

    Qamariah, Jumadi, Senam, Wilujeng, Insih

    2017-08-01

    Hi_Science is an Android-based instructional medium for learning science on the topics of environmental pollution and global warming. This study aims (a) to present the design of Hi_Science as it will be applied in junior high school, and (b) to describe the validity of Hi_Science. Hi_Science combines an innovative learning model with current technology: the medium is Android-based and built around the experiential learning model. Hi_Science adapts the student worksheet of Taufiq (2015), which was rated very good by two expert lecturers and two science teachers (Taufiq, 2015). This worksheet was refined and redeveloped as an Android-based instructional medium that students can use to learn science not only in the classroom but also at home; having become an Android-based medium, it therefore had to be validated again. Hi_Science was validated by two experts, with the validation based on an assessment of material aspects and media aspects. Data were collected with a media assessment instrument. The results showed that the material aspects obtained an average score of 4.72 with a percentage of agreement of 96.47%, placing Hi_Science in the excellent (very valid) category, and the media aspects obtained an average score of 4.53 with a percentage of agreement of 98.70%, also in the excellent (very valid) category. It was concluded that Hi_Science can be applied as an instructional medium in junior high school.

  3. Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure

    NASA Technical Reports Server (NTRS)

    Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.

    1992-01-01

    Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results for the classical problem of two contacting semicylinders with those obtained from the Hertz method. Agreement is excellent.
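
    The Hertz reference solution used for validation here has a closed form for two parallel (semi)cylinders in line contact. A sketch with illustrative steel properties and load; the formulas are the standard Hertz line-contact results, not values taken from the paper:

      # Standard Hertz line-contact formulas for two parallel (semi)cylinders of
      # the same material; load and geometry values are illustrative only.
      import math

      def hertz_line_contact(w, R1, R2, E, nu):
          """w: load per unit length [N/m]; returns (half-width b [m], p_max [Pa])."""
          R = 1.0 / (1.0 / R1 + 1.0 / R2)       # effective radius
          E_star = E / (2.0 * (1.0 - nu ** 2))  # effective modulus, like materials
          b = math.sqrt(4.0 * w * R / (math.pi * E_star))
          p_max = 2.0 * w / (math.pi * b)       # peak contact pressure
          return b, p_max

      b, p_max = hertz_line_contact(w=1e5, R1=0.02, R2=0.03, E=210e9, nu=0.3)
      print(round(b * 1e6, 1), "um;", round(p_max / 1e9, 2), "GPa")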

  4. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 4 2011-07-01 2011-07-01 false Use of other validity studies. 1607.7 Section 1607.7 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection...

  5. Efficacy Outcome Measures for Procedural Sedation Clinical Trials in Adults: An ACTTION Systematic Review.

    PubMed

    Williams, Mark R; McKeown, Andrew; Dexter, Franklin; Miner, James R; Sessler, Daniel I; Vargo, John; Turk, Dennis C; Dworkin, Robert H

    2016-01-01

    Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials, with the consequent lack of consistency among measures, making comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures support their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures in only a few categories of procedures. Typically, studies only examined 1 or 2 aspects of scale validity. The results are likely unique to the specific clinical settings they were tested in. Certain scales, for example, those requiring motor stimulation, are unsuitable to evaluate sedation for procedures where movement is prohibited (e.g., magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.

  6. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065
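
    For reference, the NDVI compared across the ground-based and satellite sensors above is a simple band ratio, (NIR - Red)/(NIR + Red). A minimal sketch with invented reflectance values:

      # NDVI = (NIR - Red) / (NIR + Red); reflectance values below are invented.
      import numpy as np

      def ndvi(nir, red):
          denom = nir + red
          safe = np.where(denom == 0, 1.0, denom)       # avoid division by zero
          return np.where(denom == 0, np.nan, (nir - red) / safe)

      nir = np.array([0.45, 0.50, 0.30])
      red = np.array([0.08, 0.10, 0.25])
      print(ndvi(nir, red))   # dense vegetation ~0.7; sparse/soil ~0.1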

  7. Validation of urban freeway models.

    DOT National Transportation Integrated Search

    2015-01-01

    This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...

  8. Navigation of guidewires and catheters in the body during intervention procedures: a review of computer-based models.

    PubMed

    Sharei, Hoda; Alderliesten, Tanja; van den Dobbelsteen, John J; Dankelman, Jenny

    2018-01-01

    Guidewires and catheters are used during minimally invasive interventional procedures to traverse the vascular system and access the desired position. Computer models are increasingly being used to predict the behavior of these instruments. This information can be used to choose the right instrument for each case and increase the success rate of the procedure. Moreover, a designer can test the performance of instruments before the manufacturing phase. A precise model of the instrument is also useful for a training simulator. Therefore, to identify the strengths and weaknesses of different approaches used to model guidewires and catheters, a literature review of the existing techniques has been performed. The literature search was carried out in Google Scholar and Web of Science and limited to English for the period 1960 to 2017. For a computer model to be used in practice, it should be sufficiently realistic and, for some applications, real time. Therefore, we compared different modeling techniques with regard to these requirements, and the purposes of these models are reviewed. Important factors that influence the interaction between the instruments and the vascular wall are discussed. Finally, different ways used to evaluate and validate the models are described. We classified the developed models based on their formulation into the finite-element method (FEM), the mass-spring model (MSM), and rigid multibody links. Despite its numerical stability, FEM requires a very high computational effort. On the other hand, MSM is faster but carries a risk of numerical instability. The rigid multibody links method has a simple structure and is easy to implement; however, as the length of the instrument increases, the model becomes slower. Regarding the realism of the simulation, friction and collision were incorporated as the most influential forces applied to the instrument during propagation within a vascular system. To evaluate the accuracy, most of the studies
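
    Of the three formulations reviewed, the mass-spring model (MSM) is the simplest to sketch: point masses joined by springs and dampers, integrated here with semi-implicit Euler. The fragment below is a deliberately reduced 2D illustration with invented parameters; real guidewire models add bending stiffness and the wall-contact/friction forces discussed above.

      # Reduced 2D mass-spring chain with semi-implicit Euler integration; all
      # parameters are invented, and real guidewire models add bending stiffness
      # and wall-contact/friction forces.
      import numpy as np

      n, k, c, m, dt = 20, 500.0, 2.0, 0.01, 1e-4  # nodes, stiffness, damping, mass, step
      rest = 0.005                                  # rest length between nodes [m]
      pos = np.column_stack([np.arange(n) * rest, np.zeros(n)])
      vel = np.zeros_like(pos)

      for _ in range(5000):
          force = np.zeros_like(pos)
          for i in range(n - 1):                    # spring + damper between neighbors
              d = pos[i + 1] - pos[i]
              length = np.linalg.norm(d)
              f = k * (length - rest) * (d / length) + c * (vel[i + 1] - vel[i])
              force[i] += f
              force[i + 1] -= f
          force[:, 1] -= m * 9.81                   # gravity pulls the wire down
          vel[1:] += (force[1:] / m) * dt           # node 0 clamped (insertion point)
          pos[1:] += vel[1:] * dt

      print(pos[-1])                                # tip position after settling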

  9. Validating Diagnostic and Screening Procedures for Pre-Motor Parkinson’s Disease

    DTIC Science & Technology

    2013-04-01

    [SF-298 report-documentation residue; no abstract recoverable. Grant number: W81XWH-11-1-0310. Principal investigator: Dr. J William Langston. Contracting organization: Parkinson's... The views expressed are those of the author(s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.]

  10. Validating Diagnostic and Screening Procedures for Pre-Motor Parkinson’s Disease

    DTIC Science & Technology

    2014-04-01

    [SF-298 report-documentation residue; no abstract recoverable. Grant number: W81XWH-11-1-0310. Principal investigator: Dr. J William Langston. Contracting organization: Parkinson's... The views expressed are those of the author(s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.]

  11. A Case Study of Policies and Procedures to Address Cyberbullying at a Technology-Based Middle School

    ERIC Educational Resources Information Center

    Tate, Bettina Polite

    2017-01-01

    This qualitative case study explored the policies and procedures used to effectively address cyberbullying at a technology-based middle school. The purpose of the study was to gain an in-depth understanding of policies and procedures used to address cyberbullying at a technology-based middle school in the southern United States. The study sought…

  12. Empirical validation of an agent-based model of wood markets in Switzerland

    PubMed Central

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  13. Development and initial validation of a cognitive-based work-nonwork conflict scale.

    PubMed

    Ezzedeen, Souha R; Swiercz, Paul M

    2007-06-01

    Current research related to work and life outside work specifies three types of work-nonwork conflict: time, strain, and behavior-based. Overlooked in these models is a cognitive-based type of conflict whereby individuals experience work-nonwork conflict from cognitive preoccupation with work. Four studies on six different groups (N=549) were undertaken to develop and validate an initial measure of this construct. Structural equation modeling confirmed a two-factor, nine-item scale. Hypotheses regarding cognitive-based conflict's relationship with life satisfaction, work involvement, work-nonwork conflict, and work hours were supported. The relationship with knowledge work was partially supported in that only the cognitive dimension of cognitive-based conflict was related to extent of knowledge work. Hypotheses regarding cognitive-based conflict's relationship with family demands were rejected in that the cognitive dimension correlated positively rather than negatively with number of dependent children and perceived family demands. The study provides encouraging preliminary evidence of scale validity.

  14. Procedure for extraction of disparate data from maps into computerized data bases

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1979-01-01

    A procedure is presented for extracting disparate sources of data from geographic maps and for the conversion of these data into a suitable format for processing on a computer-oriented information system. Several graphic digitizing considerations are included and related to the NASA Earth Resources Laboratory's Digitizer System. Current operating procedures for the Digitizer System are given in a simplified and logical manner. The report serves as a guide to those organizations interested in converting map-based data by using a comparable map digitizing system.

  15. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.

  16. Validation of current procedural terminology codes for rotavirus vaccination among infants in two commercially insured US populations.

    PubMed

    Hoffman, Veena; Everage, Nicholas J; Quinlan, Scott C; Skerry, Kathleen; Esposito, Daina; Praet, Nicolas; Rosillon, Dominique; Holick, Crystal N; Dore, David D

    2016-12-01

    We validated procedure codes used in health insurance claims for reimbursement of rotavirus vaccination by comparing claims for monovalent live-attenuated human rotavirus vaccine (RV1) and live, oral pentavalent rotavirus vaccine (RV5) to medical records. Using administrative data from two commercially insured United States populations, we randomly sampled vaccination claims for RV1 and RV5 from a cohort of infants aged less than 1 year from an ongoing post-licensure safety study of rotavirus vaccines. The codes for RV1 and RV5 found in claims were confirmed through medical record review. The positive predictive value (PPV) of the Current Procedural Terminology codes for RV1 and RV5 was calculated as the number of medical record-confirmed vaccinations divided by the number of medical records obtained. Medical record review confirmed 92 of 104 RV1 vaccination claims (PPV: 88.5%; 95% CI: 80.7-93.9%) and 98 of 113 RV5 vaccination claims (PPV: 86.7%; 95% CI: 79.1-92.4%). Among the 217 medical records abstracted, only three (1.4%) of vaccinations were misclassified in claims-all were RV5 misclassified as RV1. The medical records corresponding to 9 RV1 and 15 RV5 claims contained insufficient information to classify the type of rotavirus vaccine. Misclassification of rotavirus vaccines is infrequent within claims. The PPVs reported here are conservative estimates as those with insufficient information in the medical records were assumed to be incorrectly coded in the claims. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Validity of plant fiber length measurement : a review of fiber length measurement based on kenaf as a model

    Treesearch

    James S. Han; Theodore Mianowski; Yi-yu Lin

    1999-01-01

    The efficacy of fiber length measurement techniques such as digitizing, the Kajaani procedure, and NIH Image is compared in order to determine the optimal tool. Kenaf bast fibers, aspen fibers, and red pine fibers were collected from different anatomical parts, and the fiber lengths were compared using various analytical tools. A statistical analysis on the validity of the...

  18. A modified lead-matrix separation procedure shown for lead isotope analysis in Trojan silver artefacts as an example.

    PubMed

    Vogl, Jochen; Paz, Boaz; Koenig, Maren; Pritzkow, Wolfgang

    2013-03-01

    A modified Pb-matrix separation procedure using NH4HCO3 solution as eluent has been developed and validated for determination of Pb isotope amount ratios by thermal ionization mass spectrometry. The procedure is based on chromatographic separation using the Pb·Spec resin and an in-house-prepared NH4HCO3 solution serving as eluent. The advantages of this eluent are low Pb blanks (<40 pg mL⁻¹) and the fact that NH4HCO3 can be easily removed by a heating step (>60 °C). Pb recovery is >95% for water samples. For archaeological silver samples, however, the Pb recovery is reduced to approximately 50%, but this causes no bias in the determination of Pb isotope amount ratios. The validated procedure was used to determine lead isotope amount ratios in Trojan silver artefacts with expanded uncertainties (k = 2) <0.09%.

  19. CAG12 - A CSCM based procedure for flow of an equilibrium chemically reacting gas

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.; Lombard, C. K.

    1985-01-01

    The Conservative Supra Characteristic Method (CSCM), an implicit upwind Navier-Stokes algorithm, is extended to the numerical simulation of flows in chemical equilibrium. The resulting computer code, known as Chemistry and Gasdynamics Implicit - Version 2 (CAG12), is described. First-order accurate results are presented for inviscid and viscous Mach 20 flows of air past a hemisphere-cylinder. The solution procedure captures the bow shock in a chemically reacting gas, a technique that is needed for simulating high-altitude, rarefied flows. In an initial effort to validate the code, the inviscid results are compared with published gasdynamic and chemistry solutions, and satisfactory agreement is obtained.

  20. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
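
    Given patient-level identifier sets, the reported operating characteristics reduce to simple set arithmetic. The sketch below is an illustration with hypothetical identifiers chosen to land near the published 97% sensitivity and 90% PPV; it is not the authors' code.

        def algorithm_performance(flagged: set, reference: set):
            """Sensitivity and PPV of a code-based algorithm versus a
            chart-review reference standard of living donor nephrectomies."""
            true_pos = len(flagged & reference)
            return true_pos / len(reference), true_pos / len(flagged)

        # Hypothetical identifiers, for illustration only.
        reference = {f"donor{i}" for i in range(1199)}                 # chart-verified
        flagged = {f"donor{i}" for i in range(1163)} | {f"fp{i}" for i in range(129)}
        sens, ppv = algorithm_performance(flagged, reference)
        print(f"sensitivity = {sens:.0%}, PPV = {ppv:.0%}")            # 97% and 90%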

  1. Human performance measurement: Validation procedures applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

    As telescience systems become more and more complex, autonomous, and opaque to their operators, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues are addressed as they relate to total system validation. The assumption is made that human interaction with the automated system will be required well into the Space Station Freedom era. Candidate human performance measurement-validation techniques are discussed for selected ground-to-space-to-ground and space-to-space situations. Most of these measures may be used in conjunction with an information throughput model presented elsewhere (Haines, 1990). Teleoperations, teleanalysis, teleplanning, teledesign, and teledocumentation are considered, as are selected illustrative examples of space-related telescience activities.

  2. Differentiation of ileostomy from colostomy procedures: assessing the accuracy of current procedural terminology codes and the utility of natural language processing.

    PubMed

    Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K

    2013-08-01

    Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal, and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75
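
    To give a flavor of how text-based classification can separate the two stoma types, here is a deliberately simple keyword-rule sketch; the Automated Retrieval Console used in the study is a trained NLP classifier, not hand-written patterns like these.

        import re

        ILEO = re.compile(r"\bileostom\w*", re.IGNORECASE)
        COLO = re.compile(r"\bcolostom\w*", re.IGNORECASE)

        def classify_operative_report(text: str) -> str:
            """Crude rule: route ambiguous reports to manual review."""
            ileo, colo = bool(ILEO.search(text)), bool(COLO.search(text))
            if ileo and not colo:
                return "ileostomy"
            if colo and not ileo:
                return "colostomy"
            return "manual review"  # both or neither term mentioned

        print(classify_operative_report("A loop ileostomy was matured."))  # ileostomy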

  3. Validity and reliability of instruments aimed at measuring Evidence-Based Practice in Physical Therapy: a systematic review of the literature.

    PubMed

    Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest

    2014-12-01

    Our goal is to compile and analyse the characteristics, especially validity and reliability, of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. A systematic review was conducted with data from exclusively quantitative-type studies, synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and of journals with summarized evidence, followed by a residual, directed search of the bibliographical references of the main articles found in the primary search procedure. The studies included were assigned to members of the research team, who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, in all cases they were found to be limited as regards the 'constructs' included, and all were lacking in the comprehensiveness of the validation processes of the psychometric tests used. It seems that developing a rigorous assessment instrument for EBP in physical therapy continues to be a challenge. © 2014 John Wiley & Sons, Ltd.

  4. Total laparoscopic reversal of Hartmann's procedure.

    PubMed

    Masoni, Luigi; Mari, Francesco Saverio; Nigri, Giuseppe; Favi, Francesco; Pindozzi, Fioralba; Dall'Oglio, Anna; Pancaldi, Alessandra; Brescia, Antonio

    2013-01-01

    Hartmann's procedure is still performed in those cases in which colorectal anastomosis might be unsafe. Reversal of Hartmann's procedure (HR) is considered a major surgical procedure with high morbidity (55 to 60%) and mortality (0 to 4%) rates. To decrease these rates, a laparoscopic approach to Hartmann's reversal has been adopted with success. We report our totally laparoscopic Hartmann's reversal technique. Between 2004 and 2010 we performed 27 HRs with a totally laparoscopic approach. The efficacy and safety of this technique were demonstrated by evaluating the operative data, postoperative complications, and patient outcomes. There were no open conversions or major intraoperative complications. Anastomotic leakage occurred in one patient, requiring an ileostomy; one patient needed a blood transfusion and one had a nosocomial pneumonia. The mean postoperative hospitalization was 5.7 days. Laparoscopic HR is a feasible and safe procedure and can be considered a valid alternative to open HR.

  5. Initial Validation of Ballistic Shock Transducers

    DTIC Science & Technology

    2017-06-05

    The objective of this Test Operations Procedure (TOP) is to describe various methods and instrumentation used in the initial validation of accelerometers to be used in both Ballistic... [Front-matter and table-of-contents residue omitted; the excerpt also references Military Standard 810G Change Notice 1, Method 522.2, Section 4.4.2, and a glossary.]

  6. A methodological, task-based approach to Procedure-Specific Simulations training.

    PubMed

    Setty, Yaki; Salzman, Oren

    2016-12-01

    Procedure-Specific Simulations (PSS) are realistic 3D simulations that provide a platform to practice complete surgical procedures in a virtual-reality environment. While PSS have the potential to improve surgeons' proficiency, there are no existing standards or guidelines for PSS development in a structured manner. We employ a unique platform inspired by game design to develop three-dimensional virtual-reality simulations of urethrovesical anastomosis during radical prostatectomy. 3D visualization is supported by stereo vision, providing a fully realistic view of the simulation. The software can be executed on any robotic surgery platform; specifically, we tested the simulation under a Windows environment on the RobotiX Mentor. Using the urethrovesical anastomosis simulation as a representative example, we present a task-based methodological approach to PSS training. The methodology provides tasks in increasing levels of difficulty, from a novice level of basic anatomy identification to an expert level that permits testing new surgical approaches. The modular methodology presented here can be easily extended to support more complex tasks. We foresee this methodology as a tool used to integrate PSS as a complementary training process for surgical procedures.

  7. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades, large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
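
    The tensile-load application rests on a standard vibration relation: for a taut cable, the n-th natural frequency is f_n = (n/2L)·sqrt(T/m), so a measured fundamental frequency yields the load. A minimal sketch follows; it ignores bending stiffness and sag, which more refined tie-rod models account for, and the example values are hypothetical.

        def cable_tension(f1_hz: float, length_m: float, mass_per_m: float) -> float:
            """Taut-string estimate of tensile load from the fundamental
            frequency: T = 4 * m * L**2 * f1**2 (in N). First approximation
            only; bending stiffness and sag are neglected."""
            return 4.0 * mass_per_m * length_m**2 * f1_hz**2

        # Hypothetical tie-rod: 6 m span, 12 kg/m, measured f1 = 9.5 Hz.
        print(f"T = {cable_tension(9.5, 6.0, 12.0) / 1e3:.0f} kN")  # about 156 kN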

  8. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal and high dimensional, with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods, as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie-Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
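
    The Xie-Beni index singled out by the study is compact enough to state in code. Below is a minimal NumPy sketch; in the MIV framework it would be computed per imputed dataset and the results synthesized, a pooling step not shown here. Scanning candidate cluster counts and keeping the count that minimizes the index is the usual selection rule.

        import numpy as np

        def xie_beni(X, centers, U, m=2.0):
            """Xie-Beni index = compactness / separation; lower is better.
            X: (n, d) data; centers: (c, d); U: (c, n) fuzzy memberships
            whose columns sum to 1; m: fuzzifier."""
            d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (c, n)
            compactness = (U**m * d2).sum()
            separation = min(((centers[i] - centers[j]) ** 2).sum()
                             for i in range(len(centers))
                             for j in range(len(centers)) if i != j)
            return compactness / (len(X) * separation)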

  9. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal and high dimensional, with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods, as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie-Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  10. Using Qualitative Methods to Improve Questionnaires for Spanish Speakers: Assessing Face Validity of a Food Behavior Checklist

    PubMed Central

    BANNA, JINAN C.; VERA BECERRA, LUZ E.; KAISER, LUCIA L.; TOWNSEND, MARILYN S.

    2015-01-01

    Development of outcome measures relevant to health nutrition behaviors requires a rigorous process of testing and revision. Whereas researchers often report performance of quantitative data collection to assess questionnaire validity and reliability, qualitative testing procedures are often overlooked. This report outlines a procedure for assessing face validity of a Spanish-language dietary assessment tool. Reviewing the literature produced no rigorously validated Spanish-language food behavior assessment tools for the US Department of Agriculture’s food assistance and education programs. In response to this need, this study evaluated the face validity of a Spanish-language food behavior checklist adapted from a 16-item English version of a food behavior checklist shown to be valid and reliable for limited-resource English speakers. The English version was translated using rigorous methods involving initial translation by one party and creation of five possible versions. Photos were modified based on client input and new photos were taken as necessary. A sample of low-income, Spanish-speaking women completed cognitive interviews (n=20). Spanish translation experts (n=7) fluent in both languages and familiar with both cultures made minor modifications but essentially approved client preferences. The resulting checklist generated a readability score of 93, indicating low reading difficulty. The Spanish-language checklist has adequate face validity in the target population and is ready for further validation using convergent measures. At the conclusion of testing, this instrument may be used to evaluate nutrition education interventions in California. These qualitative procedures provide a framework for designing evaluation tools for low-literate audiences participating in the US Department of Agriculture food assistance and education programs. PMID:20102831

  11. Using qualitative methods to improve questionnaires for Spanish speakers: assessing face validity of a food behavior checklist.

    PubMed

    Banna, Jinan C; Vera Becerra, Luz E; Kaiser, Lucia L; Townsend, Marilyn S

    2010-01-01

    Development of outcome measures relevant to health nutrition behaviors requires a rigorous process of testing and revision. Whereas researchers often report performance of quantitative data collection to assess questionnaire validity and reliability, qualitative testing procedures are often overlooked. This report outlines a procedure for assessing face validity of a Spanish-language dietary assessment tool. Reviewing the literature produced no rigorously validated Spanish-language food behavior assessment tools for the US Department of Agriculture's food assistance and education programs. In response to this need, this study evaluated the face validity of a Spanish-language food behavior checklist adapted from a 16-item English version of a food behavior checklist shown to be valid and reliable for limited-resource English speakers. The English version was translated using rigorous methods involving initial translation by one party and creation of five possible versions. Photos were modified based on client input and new photos were taken as necessary. A sample of low-income, Spanish-speaking women completed cognitive interviews (n=20). Spanish translation experts (n=7) fluent in both languages and familiar with both cultures made minor modifications but essentially approved client preferences. The resulting checklist generated a readability score of 93, indicating low reading difficulty. The Spanish-language checklist has adequate face validity in the target population and is ready for further validation using convergent measures. At the conclusion of testing, this instrument may be used to evaluate nutrition education interventions in California. These qualitative procedures provide a framework for designing evaluation tools for low-literate audiences participating in the US Department of Agriculture food assistance and education programs. Copyright 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.

  12. Validation of Procedures for Monitoring Crewmember Immune Function

    NASA Technical Reports Server (NTRS)

    Pierson, Duane; Crucian, Brian; Mehta, Satish; Stowe, Raymond; Uchakin, Peter; Quiriarte, Heather; Sams, Clarence

    2010-01-01

    The objective of this Supplemental Medical Objective (SMO) is to determine the status of the immune system, physiological stress, and latent viral reactivation (a clinical outcome that can be measured) during both short- and long-duration spaceflight. In addition, this study will develop and validate an immune monitoring strategy consistent with operational flight requirements and constraints. Pre-mission, in-flight and post-flight blood and saliva samples will be obtained from participating crewmembers. Assays included peripheral immunophenotype, T cell function, cytokine profiles, viral-specific immunity, latent viral reactivation (EBV, CMV, VZV), and stress hormone measurements. To date, 18 short-duration crewmembers (a phase now completed) and 8 long-duration crewmembers have completed the study. The long-duration phase of this study is ongoing. For this presentation, the final data set for the short-duration subjects will be discussed.

  13. Validation and transferability study of a method based on near-infrared hyperspectral imaging for the detection and quantification of ergot bodies in cereals.

    PubMed

    Vermeulen, Ph; Fernández Pierna, J A; van Egmond, H P; Zegers, J; Dardenne, P; Baeten, V

    2013-09-01

    In recent years, near-infrared (NIR) hyperspectral imaging has proved its suitability for quality and safety control in the cereal sector by allowing spectroscopic images to be collected at single-kernel level, which is of great interest to cereal control laboratories. Contaminants in cereals include, inter alia, impurities such as straw, grains from other crops, and insects, as well as undesirable substances such as ergot (sclerotium of Claviceps purpurea). For the cereal sector, the presence of ergot creates a high toxicity risk for animals and humans because of its alkaloid content. A study was undertaken in which a complete procedure for detecting ergot bodies in cereals was developed, based on their NIR spectral characteristics. These were used to build relevant decision rules based on chemometric tools and on the morphological information obtained from the NIR images. The study sought to transfer this procedure from a pilot online NIR hyperspectral imaging system at laboratory level to a NIR hyperspectral imaging system at industrial level and to validate the latter. All the analyses performed showed that the results obtained using both NIR hyperspectral imaging cameras were quite stable and repeatable. In addition, a correlation higher than 0.94 was obtained between the values predicted by NIR hyperspectral imaging and those supplied by the stereo-microscopic method, the reference method. The validation of the transferred protocol on blind samples showed that the method could identify and quantify ergot contamination, demonstrating the transferability of the method. These results were obtained on samples with an ergot concentration of 0.02%, below the EC limit of 0.05% set for cereals (intervention grains) destined for humans.
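
    One recoverable ingredient of such decision rules is spectral matching at the pixel level. The sketch below flags pixels whose NIR spectrum correlates strongly with a reference ergot spectrum; it is an illustrative stand-in, since the study's actual rules combine chemometric models with morphological information from the images, and the threshold value is assumed.

        import numpy as np

        def ergot_mask(cube, ref, r_min=0.98):
            """Correlate each pixel spectrum with a reference ergot spectrum.
            cube: (rows, cols, bands) NIR image; ref: (bands,) spectrum;
            returns a boolean (rows, cols) mask of candidate ergot pixels."""
            flat = cube.reshape(-1, cube.shape[-1]).astype(float)
            flat -= flat.mean(axis=1, keepdims=True)       # center each pixel spectrum
            ref = ref - ref.mean()
            r = (flat @ ref) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
            return (r >= r_min).reshape(cube.shape[:2])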

  14. Relations among conceptual knowledge, procedural knowledge, and procedural flexibility in two samples differing in prior knowledge.

    PubMed

    Schneider, Michael; Rittle-Johnson, Bethany; Star, Jon R

    2011-11-01

    Competence in many domains rests on children developing conceptual and procedural knowledge, as well as procedural flexibility. However, research on the developmental relations between these different types of knowledge has yielded unclear results, in part because little attention has been paid to the validity of the measures or to the effects of prior knowledge on the relations. To overcome these problems, we modeled the three constructs in the domain of equation solving as latent factors and tested (a) whether the predictive relations between conceptual and procedural knowledge were bidirectional, (b) whether these interrelations were moderated by prior knowledge, and (c) how both constructs contributed to procedural flexibility. We analyzed data from 2 measurement points each from two samples (Ns = 228 and 304) of middle school students who differed in prior knowledge. Conceptual and procedural knowledge had stable bidirectional relations that were not moderated by prior knowledge. Both kinds of knowledge contributed independently to procedural flexibility. The results demonstrate how changes in complex knowledge structures contribute to competence development.

  15. Development and validation of the Body and Appearance Self-Conscious Emotions Scale (BASES).

    PubMed

    Castonguay, Andrée L; Sabiston, Catherine M; Crocker, Peter R E; Mack, Diane E

    2014-03-01

    The purpose of these studies was to develop a psychometrically sound measure of shame, guilt, authentic pride, and hubristic pride for use in body and appearance contexts. In Study 1, 41 potential items were developed and assessed for item quality and comprehension. In Study 2, a panel of experts (N=8; M=11, SD=6.5 years of experience) reviewed the scale and items for evidence of content validity. Participants in Study 3 (n=135 males, n=300 females) completed the BASES and various body image, personality, and emotion scales. A separate sample (n=155; 35.5% male) in Study 3 completed the BASES twice using a two-week time interval. The BASES subscale scores demonstrated evidence for internal consistency, item-total correlations, concurrent, convergent, incremental, and discriminant validity, and 2-week test-retest reliability. The 4-factor solution was a good fit in confirmatory factor analysis, reflecting body-related shame, guilt, authentic and hubristic pride subscales of the BASES. The development and validation of the BASES may help advance body image and self-conscious emotion research by providing a foundation to examine the unique antecedents and outcomes of these specific emotional experiences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A procedure for scaling sensory attributes based on multidimensional measurements: application to sensory sharpness of kitchen knives

    NASA Astrophysics Data System (ADS)

    Takatsuji, Toshiyuki; Tanaka, Ken-ichi

    1996-06-01

    A procedure is derived by which sensory attributes can be scaled as a function of various physical and/or chemical properties of the object to be tested. This procedure consists of four successive steps: (i) design of the experiment, (ii) fabrication of specimens according to the design parameters, (iii) assessment of a sensory attribute using sensory evaluation and (iv) derivation of the relationship between the parameters and the sensory attribute. In these steps, an experimental design using orthogonal arrays, analysis of variance and regression analyses are used strategically. When a specimen with the design parameters cannot be physically fabricated, an alternative specimen having parameters closest to the design is selected from a group of specimens which can be physically made. The influence of the deviation of actual parameters from the desired ones is discussed. A method of confirming the validity of the regression equation is also investigated. The procedure is applied to scale the sensory sharpness of kitchen knives as a function of the edge angle and the roughness of the cutting edge.

  17. 41 CFR 60-3.7 - Use of other validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Section 60-3.7, Public Contracts and Property Management, Other Provisions Relating to...: Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...

  18. 41 CFR 60-3.7 - Use of other validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Section 60-3.7, Public Contracts and Property Management, Other Provisions Relating to...: Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...

  19. SUPPORTING THE INDUSTRY BY DEVELOPING A DESIGN GUIDANCE FOR COMPUTER-BASED PROCEDURES FOR FIELD WORKERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; LeBlanc, Katya

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools, and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of the existing guidance is intended for control room CBP systems and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented at a high level, which leaves the designer to interpret what is meant by the guidance and how to implement it. The authors therefore developed design guidance specifically tailored to instructions that are carried out in the field.

  20. Development of a tool to support holistic generic assessment of clinical procedure skills.

    PubMed

    McKinley, Robert K; Strand, Janice; Gray, Tracey; Schuwirth, Lambert; Alun-Jones, Tom; Miller, Helen

    2008-06-01

    The challenges of maintaining comprehensive banks of valid checklists make context-specific checklists for assessment of clinical procedural skills problematic. This paper reports the development of a tool which supports generic holistic assessment of clinical procedural skills. We carried out a literature review, focus groups and non-participant observation of assessments with interviews of participants, participant evaluation of a pilot objective structured clinical examination (OSCE), a national modified Delphi study with prior definitions of consensus, and an OSCE. Participants were volunteers from a large acute teaching trust, a teaching primary care trust and a national sample of National Health Service staff. Results: In total, 86 students, trainees and staff took part in the focus groups, observation of assessments and pilot OSCE; 252 took part in the Delphi study; and 46 candidates and 50 assessors took part in the final OSCE. We developed a prototype tool with 5 broad categories amongst which were distributed 38 component competencies. There was >70% agreement (our prior definition of consensus) at the first round of the Delphi study for inclusion of all categories and themes, and no consensus for inclusion of additional categories or themes. Generalisability was 0.76. An OSCE based on the instrument has a predicted reliability of 0.79 with 12 stations and 1 assessor per station, or 10 stations and 2 assessors per station. This clinical procedural skills assessment tool enables reliable assessment and has content and face validity for the assessment of clinical procedural skills. We have designated it the Leicester Clinical Procedure Assessment Tool (LCAT).

  1. Human factors research on performance-based navigation instrument procedures for NextGEN

    DOT National Transportation Integrated Search

    2012-10-14

    Area navigation (RNAV) and required navigation performance (RNP) are key components of performance-based navigation (PBN). Instrument procedures that use RNAV and RNP can have more flexible and precise paths than conventional routes that are defined ...

  2. Derivation and validation of a diagnostic score based on case-mix groups to predict 30-day death or urgent readmission.

    PubMed

    van Walraven, Carl; Wong, Jenna; Forster, Alan J

    2012-01-01

    Between 5% and 10% of patients die or are urgently readmitted within 30 days of discharge from hospital. Readmission risk indexes have either excluded acute diagnoses or modelled them as multiple distinct variables. In this study, we derived and validated a score summarizing the influence of acute hospital diagnoses and procedures on death or urgent readmission within 30 days. From population-based hospital abstracts in Ontario, we randomly sampled 200 000 discharges between April 2003 and March 2009 and determined who had been readmitted urgently or died within 30 days of discharge. We used generalized estimating equation modelling, with a sample of 100 000 patients, to measure the adjusted association of various case-mix groups (CMGs; homogeneous groups of acute care inpatients with similar clinical and resource-utilization characteristics) with 30-day death or urgent readmission. This final model was transformed into a scoring system that was validated in the remaining 100 000 patients. Patients in the derivation set belonged to 1 of 506 CMGs and had a 6.8% risk of 30-day death or urgent readmission. Forty-seven CMG codes (more than half of which were directly related to chronic diseases) were independently associated with this outcome, which led to a CMG score that ranged from -6 to 7 points. The CMG score was significantly associated with 30-day death or urgent readmission (unadjusted odds ratio for a 1-point increase in CMG score 1.52, 95% confidence interval [CI] 1.49-1.56). Alone, the CMG score was only moderately discriminative (C statistic 0.650, 95% CI 0.644-0.656). However, when the CMG score was added to a validated risk index for death or readmission, the C statistic increased to 0.759 (95% CI 0.753-0.765). The CMG score was well calibrated for 30-day death or readmission. In this study, we developed a scoring system for acute hospital diagnoses and procedures that could be used as part of a risk-adjustment methodology for analyses of postdischarge
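
    The discrimination statistic reported here, the C statistic, equals the Mann-Whitney probability that a randomly chosen patient with the outcome scores higher than one without it. A self-contained sketch follows, on toy data rather than the study's cohort.

        import numpy as np
        from scipy.stats import rankdata

        def c_statistic(score, outcome):
            """ROC area via the rank formula; ties count as 1/2."""
            ranks = rankdata(score)                    # average ranks handle ties
            n1 = int(np.sum(outcome))                  # patients with the outcome
            n0 = len(outcome) - n1
            u1 = ranks[np.asarray(outcome) == 1].sum() - n1 * (n1 + 1) / 2
            return u1 / (n1 * n0)

        score = np.array([3, -1, 5, 0, 2, 7, -4, 1])   # e.g., CMG-score points
        outcome = np.array([1, 0, 1, 0, 0, 1, 0, 0])   # 30-day death/readmission
        print(c_statistic(score, outcome))             # 1.0 on this toy data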

  3. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product designs.

  4. Validation of the Continuum of Care Conceptual Model for Athletic Therapy

    PubMed Central

    Lafave, Mark R.; Butterwick, Dale; Eubank, Breda

    2015-01-01

    Utilization of conceptual models in field-based emergency care currently borrows from existing standards of medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury spanning to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the AT profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached a priori 80% consensus on three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline. PMID:26464897

  5. Validity of a computerized population registry of dementia based on clinical databases.

    PubMed

    Mar, J; Arrospide, A; Soto-Gordoa, M; Machón, M; Iruin, Á; Martinez-Lage, P; Gabilondo, A; Moreno-Izco, F; Gabilondo, A; Arriola, L

    2018-05-08

    The handling of information through digital media allows innovative approaches for identifying cases of dementia through computerized searches within clinical databases that include systems for coding diagnoses. The aim of this study was to analyze the validity of a dementia registry in Gipuzkoa based on the administrative and clinical databases existing in the Basque Health Service. This is a descriptive study based on the evaluation of available data sources. First, through review of medical records, the diagnostic validity was evaluated in 2 samples of cases identified and not identified as dementia. The sensitivity, specificity and positive and negative predictive values of the diagnosis of dementia were measured. Subsequently, the cases of dementia alive on December 31, 2016 were searched in the entire Gipuzkoa population to collect sociodemographic and clinical variables. The validation samples included 986 cases and 327 non-cases. The calculated sensitivity was 80.2% and the specificity was 99.9%. The negative predictive value was 99.4% and the positive predictive value was 95.1%. There were 10,551 cases in Gipuzkoa, representing 65% of the cases predicted according to the literature. Antipsychotic medication was taken by 40% of the cases, and 25% were institutionalized. A registry of dementias based on clinical and administrative databases is valid and feasible. Its main contribution is to show the dimension of dementia in the health system. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  6. 42 CFR 423.514 - Validation of Part D reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    42 CFR, Public Health. ...Procedures and Contracts with Part D plan sponsors, § 423.514 Validation of Part D reporting requirements. (a)... request. (g) Data validation. Each Part D sponsor must subject information collected under paragraph (a)...

  7. 42 CFR 422.516 - Validation of Part C reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    42 CFR, Public Health. ...Procedures and Contracts for Medicare Advantage Organizations, § 422.516 Validation of Part C reporting requirements. ...502(f)(1) available to its enrollees upon reasonable request. (g) Data validation. Each Part C sponsor...

  8. 42 CFR 422.516 - Validation of Part C reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    42 CFR, Public Health. ...Procedures and Contracts for Medicare Advantage Organizations, § 422.516 Validation of Part C reporting requirements. ...502(f)(1) available to its enrollees upon reasonable request. (g) Data validation. Each Part C sponsor...

  9. 42 CFR 422.516 - Validation of Part C reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    42 CFR, Public Health. ...Procedures and Contracts for Medicare Advantage Organizations, § 422.516 Validation of Part C reporting requirements. ...502(f)(1) available to its enrollees upon reasonable request. (g) Data validation. Each Part C sponsor...

  10. Validation of a Plasma-Based Comprehensive Cancer Genotyping Assay Utilizing Orthogonal Tissue- and Plasma-Based Methodologies.

    PubMed

    Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli

    2018-04-24

    Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. ©2018

  11. Face, Content, and Construct Validations of Endoscopic Needle Injection Simulator for Transurethral Bulking Agent in Treatment of Stress Urinary Incontinence.

    PubMed

    Farhan, Bilal; Soltani, Tandis; Do, Rebecca; Perez, Claudia; Choi, Hanul; Ghoniem, Gamal

    2018-05-02

    Endoscopic injection of urethral bulking agents is an office procedure that is used to treat stress urinary incontinence secondary to internal sphincteric deficiency. Validation studies are an important part of simulator evaluation and are considered an important step in establishing the effectiveness of simulation-based training. The endoscopic needle injection (ENI) simulator has not been formally validated, although it has been used widely at the University of California, Irvine. We aimed to assess the face, content, and construct validity of the UC Irvine ENI simulator. Dissected female porcine bladders were mounted in a modified Hysteroscopy Diagnostic Trainer. Using routine endoscopic equipment for this procedure with video monitoring, 6 urologists (expert group) and 6 urology trainees (novice group) completed urethral bulking agent injections on a total of 12 bladders using the ENI simulator. Face and content validity were assessed using a structured quantitative survey rating the realism. Construct validity was assessed by comparing the performance, procedure time, and occlusive (anatomical and functional) evaluations between the experts and novices. Trainees also completed a postprocedure feedback survey. Effective injections were evaluated by measuring the retrograde urethral opening pressure, visual cystoscopic coaptation, and postprocedure gross anatomic examination. All 12 participants felt the simulator was a good training tool and should be used as an essential part of urology training (face validity). The ENI simulator showed good face and content validity, with average scores of 3.9/5 for the experts and 3.8/5 for the novices. Content validity evaluation showed that most aspects of the simulator were adequately realistic (mean Likert scores 3.8-3.9/5); however, the bladder does not bleed and is sometimes thin. Experts significantly outperformed novices (p < .001) across all measures of performance, therefore establishing construct

  12. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM: It is difficult for optometrists and general practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD: The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator risk assessment. RESULTS: Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION: Analysis of the first 400 patients validates the web-based screening tool as a good method of screening for the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097

  13. Validation of the Simbionix PROcedure Rehearsal Studio sizing module: A comparison of software for endovascular aneurysm repair sizing and planning.

    PubMed

    Velu, Juliëtte F; Groot Jebbink, Erik; de Vries, Jean-Paul P M; Slump, Cornelis H; Geelkerken, Robert H

    2017-02-01

    An important determinant of successful endovascular aortic aneurysm repair is proper sizing of the dimensions of the aortic-iliac vessels. The goal of the present study was to determine the concurrent validity (a method for comparing test scores) for EVAR sizing and planning of the recently introduced Simbionix PROcedure Rehearsal Studio (PRORS). Seven vascular specialists analyzed anonymized computed tomography angiography scans of 70 patients with an infrarenal aneurysm of the abdominal aorta, using three different sizing software packages: Simbionix PRORS (Simbionix USA Corp., Cleveland, OH, USA), 3mensio (Pie Medical Imaging BV, Maastricht, The Netherlands), and TeraRecon (Aquarius, Foster City, CA, USA). The following measurements were included in the protocol: diameter 1 mm below the most distal main renal artery, diameter 15 mm below the lowest renal artery, maximum aneurysm diameter, and length from the most distal renal artery to the left iliac artery bifurcation. Averaged over the locations, the intraclass correlation coefficient (ICC) is 0.83 for Simbionix versus 3mensio, 0.81 for Simbionix versus TeraRecon, and 0.86 for 3mensio versus TeraRecon. It can be concluded that the Simbionix sizing software is as precise as two other validated and commercially available software packages.
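
    For readers wanting to reproduce the agreement analysis, the two-way random-effects, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss can be computed directly from an ANOVA decomposition; whether the authors used this exact ICC form is an assumption here, and the example data are hypothetical.

        import numpy as np

        def icc_2_1(X):
            """ICC(2,1) for an (n subjects x k raters/packages) matrix."""
            n, k = X.shape
            grand = X.mean()
            ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
            ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
            ss_e = ((X - grand) ** 2).sum() - ms_r * (n - 1) - ms_c * (k - 1)
            ms_e = ss_e / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        # Hypothetical: one diameter measurement (mm) per patient from two packages.
        X = np.array([[51.2, 50.8], [63.0, 62.1], [47.5, 48.9], [55.4, 55.0]])
        print(round(icc_2_1(X), 2))   # 0.99 for this toy matrix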

  14. Performing Verification and Validation in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  15. Validation of voxel-based morphometry (VBM) based on MRI

    NASA Astrophysics Data System (ADS)

    Yang, Xueyu; Chen, Kewei; Guo, Xiaojuan; Yao, Li

    2007-03-01

    Voxel-based morphometry (VBM) is an automated and objective image analysis technique for detecting differences in the regional concentration or volume of brain tissue composition based on structural magnetic resonance (MR) images. VBM has been used widely to evaluate brain morphometric differences between different populations, but until now there has been no evaluation system for its validation. In this study, a quantitative and objective evaluation system was established in order to assess VBM performance. We recruited twenty normal volunteers (10 males and 10 females, age range 20-26 years, mean age 22.6 years). Firstly, several focal lesions (hippocampus, frontal lobe, anterior cingulate, back of hippocampus, back of anterior cingulate) were simulated in selected brain regions using real MRI data. Secondly, optimized VBM was performed to detect structural differences between groups. Thirdly, one-way ANOVA and post-hoc tests were used to assess the accuracy and sensitivity of the VBM analysis. The results revealed that VBM was a good detection tool in the majority of brain regions, even in regions that are controversial in VBM studies, such as the hippocampus. Generally speaking, the more severe the focal lesion, the better the VBM performance; the size of the focal lesion, however, had little effect on the VBM analysis.

  16. Development and empirical validation of symmetric component measures of multidimensional constructs: customer and competitor orientation.

    PubMed

    Sørensen, Hans Eibe; Slater, Stanley F

    2008-08-01

    Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
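
    The two reliability quantities discussed, average variance extracted (AVE) and composite reliability, follow directly from standardized factor loadings. A minimal sketch is below; the loadings are hypothetical and the error variances are taken as 1 - λ², a common simplifying assumption.

        import numpy as np

        def ave_and_cr(loadings):
            """AVE = mean squared standardized loading; composite reliability
            CR = (sum λ)² / ((sum λ)² + sum θ), with error variance θ = 1 - λ²."""
            lam = np.asarray(loadings, dtype=float)
            theta = 1.0 - lam**2
            ave = float((lam**2).mean())
            cr = float(lam.sum()**2 / (lam.sum()**2 + theta.sum()))
            return ave, cr

        # Hypothetical standardized loadings for a customer-orientation component.
        print(ave_and_cr([0.82, 0.76, 0.71, 0.68]))   # AVE ~ 0.55, CR ~ 0.83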

  17. A hydrochemical data base for the Hanford Site, Washington

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Early, T.O.; Mitchell, M.D.; Spice, G.D.

    1986-05-01

    This data package contains a revision of the Site Hydrochemical Data Base for water samples associated with the Basalt Waste Isolation Project (BWIP). In addition to the detailed chemical analyses, it includes a summary description of the data base format, detailed descriptions of the verification procedures used to check data entries, and detailed descriptions of the validation procedures used to evaluate data quality. 32 refs., 21 figs., 3 tabs.
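
    The report details its own verification and validation checks. As one concrete illustration of the kind of quality screen commonly applied to water analyses, the standard charge-balance error is sketched below; this is an assumed example, not necessarily among the BWIP procedures.

        def charge_balance_error(cations_meq: float, anions_meq: float) -> float:
            """Charge-balance error (%) for a water analysis; values far from 0
            (commonly beyond +/-5%) flag the analysis for review."""
            return 100.0 * (cations_meq - anions_meq) / (cations_meq + anions_meq)

        # Sums of major cations and anions in meq/L (illustrative values).
        print(f"CBE = {charge_balance_error(8.4, 8.9):+.1f}%")   # -2.9%, acceptable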

  18. Development of RAD-Score: A Tool to Assess the Procedural Competence of Diagnostic Radiology Residents.

    PubMed

    Isupov, Inga; McInnes, Matthew D F; Hamstra, Stan J; Doherty, Geoffrey; Gupta, Ashish; Peddle, Susan; Jibri, Zaid; Rakhra, Kawan; Hibbert, Rebecca M

    2017-04-01

    The purpose of this study is to develop a tool to assess the procedural competence of radiology trainees, with sources of evidence gathered from five categories to support the construct validity of the tool: content, response process, internal structure, relations to other variables, and consequences. A pilot form for assessing procedural competence among radiology residents, known as the RAD-Score tool, was developed by evaluating published literature and using a modified Delphi procedure involving a group of local content experts. The pilot version of the tool was tested by seven radiology department faculty members who evaluated procedures performed by 25 residents at one institution between October 2014 and June 2015. Residents were evaluated while performing multiple procedures in both clinical and simulation settings. The main outcome measure was the percentage of residents who were considered ready to perform procedures independently, with testing conducted to determine differences between levels of training. A total of 105 forms (for 52 procedures performed in a clinical setting and 53 procedures performed in a simulation setting) were collected for a variety of procedures (eight vascular or interventional, 42 body, 12 musculoskeletal, 23 chest, and 20 breast procedures). A statistically significant difference was noted in the percentage of trainees who were rated as being ready to perform a procedure independently (in postgraduate year [PGY] 2, 12% of residents; in PGY3, 61%; in PGY4, 85%; and in PGY5, 88%; p < 0.05); this difference persisted in the clinical and simulation settings. User feedback and psychometric analysis were used to create a final version of the form. This prospective study describes the successful development of a tool for assessing the procedural competence of radiology trainees with high levels of construct validity in multiple domains. Implementation of the tool in the radiology residency curriculum is planned and can play an

  19. Development and Validation of a Christian-Based Grief Recovery Scale

    ERIC Educational Resources Information Center

    Jen Der Pan, Peter; Deng, Liang-Yu F.; Tsai, S. L.; Chen, Ho-Yuan J.; Yuan, Sheng-Shiou Jenny

    2014-01-01

    The purpose of this study was to develop and validate a Christian-based Grief Recovery Scale (CGRS), which was used to measure Christians' recovery from grief after a significant loss. Taiwanese Christian participants were recruited from churches and a comprehensive university in northern Taiwan. They were affected by both the Christian faith and…

  20. The Validity of Interpersonal Skills Assessment via Situational Judgment Tests for Predicting Academic Success and Job Performance

    ERIC Educational Resources Information Center

    Lievens, Filip; Sackett, Paul R.

    2012-01-01

    This study provides conceptual and empirical arguments for why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural…