Sample records for process validation general

  1. 76 FR 4360 - Guidance for Industry on Process Validation: General Principles and Practices; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... elements of process validation for the manufacture of human and animal drug and biological products... process validation for the manufacture of human and animal drug and biological products, including APIs. This guidance describes process validation activities in three stages: In Stage 1, Process Design, the...

  2. Identifying critical success factors for designing selection processes into postgraduate specialty training: the case of UK general practice.

    PubMed

    Plint, Simon; Patterson, Fiona

    2010-06-01

    The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, and is perceived to be fair by candidates and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine markable short listing assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control rather than the imposition of change without perceived legitimate authority.

  3. 37 CFR 1.419 - Display of currently valid control number under the Paperwork Reduction Act.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Display of currently valid... UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES International Processing Provisions General Information § 1.419 Display of currently valid control...

  4. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
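
    Yang's full Bayesian framework is not reproduced in the abstract; purely as an illustration of the idea, the sketch below uses a toy Beta-Binomial model (the prior, Stage 1 counts, target pass rate, and assurance level are all hypothetical) to find the smallest PPQ batch count that achieves a stated posterior assurance:

```python
import random

def prob_p_exceeds(a, b, p_target, draws=50_000, seed=0):
    """Monte Carlo estimate of P(p >= p_target) when p ~ Beta(a, b)."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(a, b) >= p_target for _ in range(draws))
    return hits / draws

def ppq_batches_needed(prior_a, prior_b, s1_pass, s1_fail,
                       p_target=0.90, assurance=0.95, max_n=50):
    """Smallest number n of PPQ batches such that, if all n conform, the
    posterior belief that the batch pass rate exceeds p_target reaches the
    assurance level. Stage 1 (PD) outcomes update a Beta prior."""
    for n in range(max_n + 1):
        a = prior_a + s1_pass + n      # all n PPQ batches assumed to conform
        b = prior_b + s1_fail
        if prob_p_exceeds(a, b, p_target) >= assurance:
            return n
    return None
```

    With a flat Beta(1, 1) prior and 15 conforming Stage 1 batches, the required PPQ count comes out in the low teens rather than the traditional three, which mirrors the paper's point that the number should follow from process knowledge and risk, not convention.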

  5. Validity of GRE General Test Scores and TOEFL Scores for Graduate Admission to a Technical University in Western Europe

    ERIC Educational Resources Information Center

    Zimmermann, Judith; von Davier, Alina A.; Buhmann, Joachim M.; Heinimann, Hans R.

    2018-01-01

    Graduate admission has become a critical process in tertiary education, whereby selecting valid admissions instruments is key. This study assessed the validity of Graduate Record Examination (GRE) General Test scores for admission to Master's programmes at a technical university in Europe. We investigated the indicative value of GRE scores for the…

  6. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process, are described. Given assumed parameters of a particular project, users of this program can generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful for comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
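
    The simulator itself is not detailed in the abstract; as a rough, hypothetical sketch of the kind of status timeline such a program generates (the weekly update scheme and all rates are invented for illustration):

```python
import random

def simulate_timeline(weeks, inject_per_week, detect_p, repair_p, seed=2):
    """Toy status timeline in the spirit of a reliability process simulator:
    each week some anomalies are injected, testing detects a fraction of the
    latent ones, and repair clears a fraction of the detected backlog.
    Returns one (injected, latent, awaiting_repair) tuple per week."""
    rng = random.Random(seed)
    latent, backlog, timeline = 0, 0, []
    for _ in range(weeks):
        # injection: ~Binomial(2k, 0.5) with mean inject_per_week
        injected = sum(rng.random() < 0.5 for _ in range(2 * inject_per_week))
        latent += injected
        found = sum(rng.random() < detect_p for _ in range(latent))
        latent -= found
        backlog += found
        fixed = sum(rng.random() < repair_p for _ in range(backlog))
        backlog -= fixed
        timeline.append((injected, latent, backlog))
    return timeline
```

    Comparing such simulated timelines against a project's actual anomaly counts is one way to sanity-check the assumed input parameters, as the abstract suggests.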

  7. Validation Methods Research for Fault-Tolerant Avionics and Control Systems: Working Group Meeting, 2

    NASA Technical Reports Server (NTRS)

    Gault, J. W. (Editor); Trivedi, K. S. (Editor); Clary, J. B. (Editor)

    1980-01-01

    The validation process comprises the activities required to ensure the agreement of system realization with system specification. A preliminary validation methodology for fault-tolerant systems is documented. A general framework for a validation methodology is presented, along with a set of specific tasks intended for the validation of two specimen systems, SIFT and FTMP. Two major areas of research are identified: first, the activities required to support the ongoing development of the validation process itself, and second, the activities required to support the design, development, and understanding of fault-tolerant systems.

  8. An assessment of space shuttle flight software development processes

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.

  9. Volpe Aircraft Noise Certification DGPS Validation/Audit General Information, Data Submittal Guidelines, and Process Details; Letter Report V324-FB48B3-LR5

    DOT National Transportation Integrated Search

    2018-01-09

    As required by Federal Aviation Administration Order 8110.4C, Type Certification Process, the Volpe Center Acoustics Facility (Volpe), in support of the Federal Aviation Administration Office of Environment and Energy (AEE), has completed valid...

  10. Knowledge Activation, Integration, and Validation during Narrative Text Comprehension

    ERIC Educational Resources Information Center

    Cook, Anne E.; O'Brien, Edward J.

    2014-01-01

    Previous text comprehension studies using the contradiction paradigm primarily tested assumptions of the activation mechanism involved in reading. However, the nature of the contradiction in such studies relied on validation of information in readers' general world knowledge. We directly tested this validation process by varying the strength of…

  11. Microbiological Validation of the IVGEN System

    NASA Technical Reports Server (NTRS)

    Porter, David A.

    2013-01-01

    The principal purpose of this report is to describe a validation process that can be performed in part on the ground prior to launch, and in space for the IVGEN system. The general approach taken is derived from standard pharmaceutical industry validation schemes modified to fit the special requirements of in-space usage.

  12. A family-specific use of the Measure of Processes of Care for Service Providers (MPOC-SP).

    PubMed

    Siebes, R C; Nijhuis, B J G; Boonstra, A M; Ketelaar, M; Wijnroks, L; Reinders-Messelink, H A; Postema, K; Vermeer, A

    2008-03-01

    To examine the validity and utility of the Dutch Measure of Processes of Care for Service Providers (MPOC-SP) as a family-specific measure. A validation study. Five paediatric rehabilitation settings in the Netherlands. The MPOC-SP was utilized in a general (reflecting on services provided for all clients and clients' families) and family-specific way (filled out in reference to a particular child and his or her family). Professionals providing rehabilitation and educational services to children with cerebral palsy. For construct validity, Pearson's product-moment correlation coefficients (r) between the scales were calculated. The ability of service providers to discriminate between general and family-specific ratings was examined by exploration of absolute difference scores. One hundred and sixteen service professionals filled out 240 family-specific MPOC-SPs. In addition, a subgroup of 81 professionals filled out a general MPOC-SP. For each professional, family-specific and general scores were paired, resulting in 151 general-family-specific MPOC-SP pairs. The construct validity analyses confirmed the scale structure: 21 items (77.8%) loaded highest in the original MPOC-SP factors, and all items correlated best and significantly with their own scale score (r = 0.565 to 0.897; P < 0.001). Intercorrelations between the scales ranged from r = 0.159 to r = 0.522. In total, 94.4% of the mean absolute difference scores between general and family-specific scale scores were larger than the expected difference. Service providers were able to discriminate between general and family-specific MPOC-SP item ratings. The family-specific MPOC-SP is a valid measure that can be used for individual evaluation of family-centred services and can be the impetus for family-related quality improvement.
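
    The item-to-scale correlation check described above can be reproduced with a plain Pearson coefficient; a minimal sketch with made-up ratings (the data below are illustrative, not from the study):

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Construct-validity check: an item should correlate more strongly with
# its own scale total than with another scale's total (hypothetical data).
item        = [3, 4, 5, 2, 4, 5, 3]
own_scale   = [12, 15, 19, 9, 16, 18, 11]
other_scale = [8, 9, 8, 10, 9, 8, 9]
```

    Running `pearson_r(item, own_scale)` versus `pearson_r(item, other_scale)` on each item, scale by scale, is the mechanical core of the analysis the abstract reports.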

  13. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferable way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and flag erroneous data, remove them and replace them with interpolated values. In general, the first step in detecting erroneous, anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation scores generation and scores interpretation. This paper presents the overall framework for a data quality improvement system, suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the scores interpretation, needs to be further investigated on the developed system.
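
    The paper's specific validation methods are not given in the abstract; as a hedged illustration of the "validation scores generation" step, here is a toy scorer combining a range check and a spike check (the checks and thresholds are hypothetical, not the authors' methods):

```python
def range_score(value, lo, hi):
    """1.0 if inside the physically plausible range, else 0.0."""
    return 1.0 if lo <= value <= hi else 0.0

def spike_score(prev, value, max_step):
    """Penalize jumps larger than max_step between consecutive samples."""
    return 1.0 if prev is None or abs(value - prev) <= max_step else 0.0

def validate_series(series, lo, hi, max_step):
    """Return a per-sample score in [0, 1]; low scores flag suspect data
    for later interpretation (manual or automatic)."""
    scores, prev = [], None
    for v in series:
        s = (range_score(v, lo, hi) + spike_score(prev, v, max_step)) / 2
        scores.append(s)
        prev = v   # note: suspect values still become the next comparison point
    return scores
```

    For example, `validate_series([1.0, 1.1, 9.0, 1.2], lo=0, hi=5, max_step=1)` scores the out-of-range spike at 9.0 as 0.0 and its neighbour as 0.5, leaving interpretation of those scores to the final stage, as the paper's three-part scheme describes.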

  14. Training and Validation of Standardized Patients for Unannounced Assessment of Physicians' Management of Depression

    ERIC Educational Resources Information Center

    Shirazi, Mandana; Sadeghi, Majid; Emami, A.; Kashani, A. Sabouri; Parikh, Sagar; Alaeddini, F.; Arbabi, Mohammad; Wahlstrom, Rolf

    2011-01-01

    Objective: Standardized patients (SPs) have been developed to measure practitioner performance in actual practice settings, but results have not been fully validated for psychiatric disorders. This study describes the process of creating reliable and valid SPs for unannounced assessment of general-practitioners' management of depression disorders…

  15. The Social Meaning in Life Events Scale (SMILES): A preliminary psychometric evaluation in a bereaved sample.

    PubMed

    Bellet, Benjamin W; Holland, Jason M; Neimeyer, Robert A

    2018-06-05

    A mourner's success in making meaning of a loss has proven key in predicting a wide array of bereavement outcomes. However, much of this meaning-making process takes place in an interpersonal framework that is hypothesized to either aid or obstruct this process. To date, a psychometrically validated measure of the degree to which a mourner successfully makes meaning of a loss in a social context has yet to be developed. The present study examines the factor structure, reliability, and validity of a new measure called the Social Meaning in Life Events Scale (SMILES) in a sample of bereaved college students (N = 590). The SMILES displayed a two-factor structure, with one factor assessing the extent to which a mourner's efforts at making meaning were invalidated (Social Invalidation subscale), and the other assessing the extent to which a mourner's meaning-making process was validated (Social Validation subscale). The subscales displayed good reliability and construct validity in reference to several outcome variables of interest (complicated grief, general health, and post-loss growth), as well as related but different variables (social support and meaning made). The subscales also demonstrated group differences according to two demographic variables associated with complications in the mourning process (age and mode of loss), as well as incremental validity in predicting adverse bereavement outcomes over and above general social support. Clinical and research implications involving the use of this new measure are discussed.

  16. Management of the General Process of Parenteral Nutrition Using mHealth Technologies: Evaluation and Validation Study

    PubMed Central

    2018-01-01

    Background: Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. Objective: The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. Methods: We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. Results: The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. Conclusions: The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. PMID:29615389

  17. On the inherent competition between valid and spurious inductive inferences in Boolean data

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunct normal form, and respectively a sparse generalized algebraic normal form of the variables from the observation data, and we evaluate numerically their performance.
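
    The paper's two greedy algorithms are not given in the abstract; the sketch below is a simplified greedy DNF synthesiser in the same spirit (limited to one- and two-literal AND terms, and rejecting any term that also fires on a negative row as "spurious" with respect to the observed data):

```python
from itertools import combinations

def greedy_dnf(X, y, max_terms=5):
    """Greedily build a sparse DNF for binary response y over Boolean rows X:
    repeatedly pick the conjunction (of at most two literals) that covers the
    most still-uncovered positive rows while covering no negative row."""
    n_vars = len(X[0])
    literals = [(i, v) for i in range(n_vars) for v in (0, 1)]
    candidates = [(lit,) for lit in literals] + list(combinations(literals, 2))

    def covers(term, row):
        return all(row[i] == v for i, v in term)

    uncovered = {k for k, yk in enumerate(y) if yk == 1}
    negatives = [row for row, yk in zip(X, y) if yk == 0]
    terms = []
    while uncovered and len(terms) < max_terms:
        best, gain = None, 0
        for t in candidates:
            if any(covers(t, r) for r in negatives):
                continue  # term also fires on a 0-row: spurious on this data
            g = sum(covers(t, X[k]) for k in uncovered)
            if g > gain:
                best, gain = t, g
        if best is None:
            break
        terms.append(best)
        uncovered = {k for k in uncovered if not covers(best, X[k])}
    return terms
```

    With incomplete observations, many candidate terms survive the negative-row filter by chance; that surviving-by-chance population is exactly the spurious-inference competition the paper measures against a random Boolean null model.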

  18. Evaluation of the Effect of the Volume Throughput and Maximum Flux of Low-Surface-Tension Fluids on Bacterial Penetration of 0.2 Micron-Rated Filters during Process-Specific Filter Validation Testing.

    PubMed

    Folmsbee, Martha

    2015-01-01

    Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm(2)), bacterial load rate (CFU/min/cm(2)), volume throughput (mL/cm(2)), and maximum filter flux (mL/min/cm(2)) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. 
In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, another three possible drivers of failure were also identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm(2), only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm(2), only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in filter bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm(2) or flux range of 7-18 mL/min/cm(2), and maintaining the differential pressure below 30 psid, could significantly decrease the risk of validation filter failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to the general trends described here. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids. © PDA, Inc. 2015.

  19. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  20. An invariance property of generalized Pearson random walks in bounded geometries

    NASA Astrophysics Data System (ADS)

    Mazzolo, Alain

    2009-03-01

    Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, limited to Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the random walk characteristic and depends only on the ratio of the volume's domain over its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form for the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
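
    For a sphere of radius R, the Cauchy formula cited above gives a mean chord length of 4V/S = 4(4/3 πR³)/(4πR²) = 4R/3, independent of the walk's characteristics. A short Monte Carlo under isotropic uniform incidence (entry points uniform on the surface, inward directions cosine-weighted) reproduces this; the sampling scheme below is a standard construction, not taken from the paper:

```python
import math
import random

def mean_chord_sphere_mc(radius, samples=200_000, seed=1):
    """Monte Carlo mean chord length of a sphere under isotropic uniform
    incidence. For a cosine-weighted inward direction, cos(theta) = sqrt(U)
    with U uniform on [0, 1], and the chord through the sphere has length
    2 * radius * cos(theta)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        mu = math.sqrt(rng.random())      # cosine-weighted inward angle
        total += 2.0 * radius * mu
    return total / samples

# Cauchy's formula predicts <l> = 4V/S = 4R/3 for any convex body,
# here the sphere: 4 * (4/3 pi R^3) / (4 pi R^2) = 4R/3.
```

    The estimate converges to 4R/3 regardless of any scattering process inside the body, which is the invariance the paper generalizes beyond exponentially distributed paths.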

  1. Applying Independent Verification and Validation to Automatic Test Equipment

    NASA Technical Reports Server (NTRS)

    Calhoun, Cynthia C.

    1997-01-01

    This paper provides a general overview of applying Independent Verification and Validation (IV&V) to Automatic Test Equipment (ATE). The overview is not inclusive of all IV&V activities that can occur, or of all development and maintenance items that can be validated and verified during the IV&V process. A sampling of possible IV&V activities that can occur within each phase of the ATE life cycle is described.

  2. Management of the General Process of Parenteral Nutrition Using mHealth Technologies: Evaluation and Validation Study.

    PubMed

    Cervera Peris, Mercedes; Alonso Rorís, Víctor Manuel; Santos Gago, Juan Manuel; Álvarez Sabucedo, Luis; Wanden-Berghe, Carmina; Sanz-Valero, Javier

    2018-04-03

    Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. ©Mercedes Cervera Peris, Víctor Manuel Alonso Rorís, Juan Manuel Santos Gago, Luis Álvarez Sabucedo, Carmina Wanden-Berghe, Javier Sanz-Valero. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 03.04.2018.

  3. National Pipeline Mapping System (NPMS) : repository standards

    DOT National Transportation Integrated Search

    1997-07-01

    This draft document contains 7 sections. They are as follows: 1. General Topics, 2. Data Formats, 3. Metadata, 4. Attribute Data, 5. Data Flow, 6. Descriptive Process, and 7. Validation and Processing of Submitted Data. These standards were created w...

  4. Instrument Development and Validation of the Infant and Toddler Assessment for Quality Improvement

    ERIC Educational Resources Information Center

    Perlman, Michal; Brunsek, Ashley; Hepditch, Anne; Gray, Karen; Falenchuck, Olesya

    2017-01-01

    Research Findings: There is a growing need for accurate and efficient measures of classroom quality in early childhood education and care (ECEC) settings. Observational measures are costly, as their administration generally takes 3-5 hr per classroom. This article outlines the process of development and preliminary concurrent validity testing of…

  5. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; a process capability and quality dashboard (PCQd); and an enhanced control strategy. The US FDA guidance on Process Validation: General Principles and Practices (January 2011) encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions provide sufficient information to make a scientific and risk-based decision on product robustness.
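
    Of the Stage 3A elements listed, process capability has a standard quantitative form; a minimal sketch of the Cpk index follows (the specification limits and measurements in the usage check are invented for illustration, not values from the paper):

```python
# Cpk: distance from the process mean to the nearest specification
# limit, in units of three sample standard deviations.
def cpk(data, lsl, usl):
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    return min(usl - mean, mean - lsl) / (3 * sd)
```

    A Cpk of 1.33 or more is a common rule-of-thumb threshold for a capable process.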

  6. Participatory design of a preliminary safety checklist for general practice

    PubMed Central

    Bowie, Paul; Ferguson, Julie; MacLeod, Marion; Kennedy, Susan; de Wet, Carl; McNab, Duncan; Kelly, Moya; McKay, John; Atkinson, Sarah

    2015-01-01

    Background The use of checklists to minimise errors is well established in high reliability, safety-critical industries. In health care there is growing interest in checklists to standardise checking processes and ensure task completion, and so provide further systemic defences against error and patient harm. However, in UK general practice there is limited experience of safety checklist use. Aim To identify workplace hazards that impact on safety, health and wellbeing, and performance, and codesign a standardised checklist process. Design and setting Application of mixed methods to identify system hazards in Scottish general practices and develop a safety checklist based on human factors design principles. Method A multiprofessional ‘expert’ group (n = 7) and experienced front-line GPs, nurses, and practice managers (n = 18) identified system hazards and developed and validated a preliminary checklist using a combination of literature review, documentation review, consensus building workshops using a mini-Delphi process, and completion of content validity index exercise. Results A prototype safety checklist was developed and validated consisting of six safety domains (for example, medicines management), 22 sub-categories (for example, emergency drug supplies) and 78 related items (for example, stock balancing, secure drug storage, and cold chain temperature recording). Conclusion Hazards in the general practice work system were prioritised that can potentially impact on the safety, health and wellbeing of patients, GP team members, and practice performance, and a necessary safety checklist prototype was designed. However, checklist efficacy in improving safety processes and outcomes is dependent on user commitment, and support from leaders and promotional champions. Although further usability development and testing is necessary, the concept should be of interest in the UK and internationally. PMID:25918338
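
    The content validity index exercise mentioned in the abstract has a conventional item-level form (I-CVI): the share of expert panellists rating an item as relevant (3 or 4 on a 4-point scale). A sketch with invented ratings:

```python
# Item-level content validity index: proportion of experts rating the
# item 3 or 4 on a 4-point relevance scale.
def i_cvi(ratings):
    return sum(1 for r in ratings if r >= 3) / len(ratings)
```

    With panels of six or more experts, items scoring below roughly 0.78 are commonly flagged for revision.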

  7. A Study of General Education Astronomy Students' Understandings of Cosmology. Part I. Development and Validation of Four Conceptual Cosmology Surveys

    ERIC Educational Resources Information Center

    Wallace, Colin S.; Prather, Edward E.; Duncan, Douglas K.

    2011-01-01

    This is the first in a series of five articles describing a national study of general education astronomy students' conceptual and reasoning difficulties with cosmology. In this paper, we describe the process by which we designed four new surveys to assess general education astronomy students' conceptual cosmology knowledge. These surveys focused…

  8. Development of Standards for Nondestructive Evaluation of COPVs Used in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Waller, Jess M.; Saulsberry, Regor L.

    2012-01-01

    Composite Overwrapped Pressure Vessels (COPVs) are currently accepted by NASA based on design and qualification requirements and are generally not verified by NDE, for the following reasons: (1) manufacturers and end users generally do not have experience with, or validated quantitative methods for, detecting flaws and defects of concern; (1-a) if detected, the flaws are not adequately quantified and it is unclear how they may contribute to degradation in mechanical response; (1-b) carbon-epoxy COPVs are also extremely sensitive to impact damage, and impacts may be below the visible detection threshold; (2) if damage is detected, this generally results in rejection, since the effect on mechanical response is generally not known; (3) NDE response has generally not been fully characterized, probability of detection (POD) has not been established, and processes have not been validated for evaluation of vessel condition as manufactured and delivered.

  9. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
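
    The influence-curve approach described in the abstract can be illustrated for a single (non-cross-validated) sample; the cross-validated estimator in the paper averages fold-specific influence curves, but the per-observation form is the same. This is a hedged sketch, not the authors' implementation, and it counts tied scores as misorderings rather than giving them half credit:

```python
import numpy as np

def auc_with_ic_se(scores, labels):
    """Empirical AUC and an influence-curve-based standard error.
    labels: 1 = positive, 0 = negative."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    pos, neg = s[y == 1], s[y == 0]
    n, p1, p0 = len(s), len(pos) / len(s), len(neg) / len(s)
    wins = pos[:, None] > neg[None, :]      # all pairwise comparisons
    auc = wins.mean()
    ic = np.empty(n)                        # influence curve values
    ic[y == 1] = (wins.mean(axis=1) - auc) / p1
    ic[y == 0] = (wins.mean(axis=0) - auc) / p0
    se = np.sqrt((ic ** 2).mean() / n)      # no bootstrap resampling needed
    return auc, se
```

    A Wald-style 95% interval is then auc ± 1.96·se, at a cost linear in the number of pairwise comparisons rather than hundreds of bootstrap refits.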

  11. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  12. Developing a Data Set and Processing Methodology for Fluid/Structure Interaction Code Validation

    DTIC Science & Technology

    2007-06-01

    structural stability and fatigue in test article components and, in general, in facility support structures and rotating machinery blading. Both T&E... blade analysis and simulations. To ensure the accuracy of the U of CO technology, validation using flight-test data and test data from a wind tunnel...

  13. Guidance on validation and qualification of processes and operations involving radiopharmaceuticals.

    PubMed

    Todde, S; Peitl, P Kolenc; Elsinga, P; Koziorowski, J; Ferrari, V; Ocak, E M; Hjelstuen, O; Patt, M; Mindt, T L; Behe, M

    2017-01-01

    Validation and qualification activities are nowadays an integral part of the day-to-day routine work in a radiopharmacy. This document is meant as an Appendix of Part B of the EANM "Guidelines on Good Radiopharmacy Practice (GRPP)" issued by the Radiopharmacy Committee of the EANM, covering the qualification and validation aspects related to the small-scale "in house" preparation of radiopharmaceuticals. The aim is to provide more detailed and practice-oriented guidance to those who are involved in the small-scale preparation of radiopharmaceuticals which are not intended for commercial purposes or distribution. The present guideline covers the validation and qualification activities following the well-known "validation chain", which begins with drafting the general Validation Master Plan document, includes all the required documentation (e.g. User Requirement Specification, qualification protocols, etc.), and leads to the qualification of the equipment used in the preparation and quality control of radiopharmaceuticals, up to the final step of process validation. Specific guidance addressed to small-scale hospital/academia radiopharmacies is provided here. Additional information, including practical examples, is also available.

  14. Due Process in Dual Process: Model-Recovery Simulations of Decision-Bound Strategy Analysis in Category Learning

    ERIC Educational Resources Information Center

    Edmunds, Charlotte E. R.; Milton, Fraser; Wills, Andy J.

    2018-01-01

    Behavioral evidence for the COVIS dual-process model of category learning has been widely reported in over a hundred publications (Ashby & Valentin, 2016). It is generally accepted that the validity of such evidence depends on the accurate identification of individual participants' categorization strategies, a task that usually falls to…

  15. FDA 2011 process validation guidance: lifecycle compliance model.

    PubMed

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th-century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  16. The Interview and Personnel Selection: Is the Process Valid and Reliable?

    ERIC Educational Resources Information Center

    Niece, Richard

    1983-01-01

    Reviews recent literature concerning the job interview. Concludes that such interviews are generally ineffective and proposes that school administrators devise techniques for improving their interviewing systems. (FL)

  17. Validation of learning assessments: A primer.

    PubMed

    Peeters, Michael J; Martin, Beth A

    2017-09-01

    The Accreditation Council for Pharmacy Education's Standards 2016 has placed greater emphasis on validating educational assessments. In this paper, we describe validity, reliability, and validation principles, drawing attention to the conceptual change that highlights one validity with multiple evidence sources; to this end, we recommend abandoning historical (confusing) terminology associated with the term validity. Further, we describe and apply Kane's framework (scoring, generalization, extrapolation, and implications) for the process of validation, with its inferences and conclusions from varied uses of assessment instruments by different colleges and schools of pharmacy. We then offer five practical recommendations that can improve reporting of validation evidence in pharmacy education literature. We describe application of these recommendations, including examples of validation evidence in the context of pharmacy education. After reading this article, the reader should be able to understand the current concept of validation, and use a framework as they validate and communicate their own institution's learning assessments. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Critical review of the validity of patient satisfaction questionnaires pertaining to oral health care.

    PubMed

    Nair, Rahul; Ishaque, Sana; Spencer, Andrew John; Luzzi, Liana; Do, Loc Giang

    2018-03-30

    This review assessed the validation processes reported for oral healthcare satisfaction scales intended to measure general oral health care, not restricted to specific subspecialties or interventions. After preliminary searches, PUBMED and EMBASE were searched using a broad search strategy, followed by a snowball strategy using the references of the publications included from the database searches. Titles and abstracts were screened for inclusion, followed by full-text screening of these publications. English-language publications on multi-item questionnaires reporting on a scale measuring patient satisfaction with oral health care were included. Publications were excluded when they did not report any psychometric validation, or when the scales addressed specific treatments or subspecialties in oral health care. Fourteen instruments were identified from as many publications reporting their initial validation, while five more publications reported further testing of the validity of these instruments. The number of items (range: 8-42) and dimensions reported (range: 2-13) often differed between the assessed measurement instruments. There was also a lack of methodologies to incorporate patients' subjective perspectives. Along with limited reporting of the psychometric properties of instruments, cross-cultural adaptations were limited to translation processes. The extent of validity and reliability of the included instruments was largely unassessed, and appropriate instruments for populations beyond general adult populations were not available. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Incompressible inelasticity as an essential ingredient for the validity of the kinematic decomposition F =FeFi

    NASA Astrophysics Data System (ADS)

    Reina, Celia; Conti, Sergio

    2017-10-01

    The multiplicative decomposition of the total deformation F = FeFi between an elastic (Fe) and an inelastic component (Fi) is standard in the modeling of many irreversible processes such as plasticity, growth, thermoelasticity, viscoelasticity or phase transformations. The heuristic argument for this kinematic assumption is based on the chain rule in the compatible scenario (Curl Fi = 0), where the individual deformation tensors are gradients of deformation mappings, i.e. F = Dφ = D(φe ∘ φi) = ((Dφe) ∘ φi)(Dφi) = FeFi. Yet, the conditions for its validity in the general incompatible case (Curl Fi ≠ 0) have so far remained uncertain. We show in this paper that det Fi = 1 and Curl Fi bounded are necessary and sufficient conditions for the validity of F = FeFi for a wide range of inelastic processes. In particular, in the context of crystal plasticity, we demonstrate via rigorous homogenization from discrete dislocations to the continuum level in two dimensions that the volume-preserving property of the mechanics of dislocation glide, combined with a finite dislocation density, is sufficient to deliver F = FeFp at the continuum scale. We then generalize this result to general two-dimensional inelastic processes that may be described at a lower-dimensional scale via a multiplicative decomposition while exhibiting a finite density of incompatibilities. The necessity of the conditions det Fi = 1 and Curl Fi bounded for such systems is demonstrated via suitable counterexamples.
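
    The abstract's compatible-case heuristic and the paper's two conditions can be restated cleanly (this is a typeset restatement of the abstract's own formulas, not an additional result):

```latex
% Compatible case (Curl F_i = 0): both mappings exist, so the chain
% rule produces the multiplicative split
F = D\varphi = D(\varphi_e \circ \varphi_i)
  = \bigl((D\varphi_e) \circ \varphi_i\bigr)\, D\varphi_i = F_e F_i .
% Incompatible case (Curl F_i \neq 0): validity of F = F_e F_i is
% shown to hold exactly under
\det F_i = 1 \quad \text{and} \quad \operatorname{Curl} F_i \ \text{bounded}.
```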

  20. Validating emotional attention regulation as a component of emotional intelligence: A Stroop approach to individual differences in tuning in to and out of nonverbal cues.

    PubMed

    Elfenbein, Hillary Anger; Jang, Daisung; Sharma, Sudeep; Sanchez-Burks, Jeffrey

    2017-03-01

    Emotional intelligence (EI) has captivated researchers and the public alike, but it has been challenging to establish its components as objective abilities. Self-report scales lack divergent validity from personality traits, and few ability tests have objectively correct answers. We adapt the Stroop task to introduce a new facet of EI called emotional attention regulation (EAR), which involves focusing emotion-related attention for the sake of information processing rather than for the sake of regulating one's own internal state. EAR includes 2 distinct components. First, tuning in to nonverbal cues involves identifying nonverbal cues while ignoring alternate content, that is, emotion recognition under conditions of distraction by competing stimuli. Second, tuning out of nonverbal cues involves ignoring nonverbal cues while identifying alternate content, that is, the ability to interrupt emotion recognition when needed to focus attention elsewhere. An auditory test of valence included positive and negative words spoken in positive and negative vocal tones. A visual test of approach-avoidance included green- and red-colored facial expressions depicting happiness and anger. The error rates for incongruent trials met the key criteria for establishing the validity of an EI test, in that the measure demonstrated test-retest reliability, convergent validity with other EI measures, divergent validity from factors such as general processing speed and mostly personality, and predictive validity in this case for well-being. By demonstrating that facets of EI can be validly theorized and empirically assessed, results also speak to the validity of EI more generally. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. The speed of information processing of 9- to 13-year-old intellectually gifted children.

    PubMed

    Duan, Xiaoju; Dan, Zhou; Shi, Jiannong

    2013-02-01

    In general, intellectually gifted children perform better than non-gifted children across many domains. The present validation study investigated the speed with which intellectually gifted children process information. 184 children, ages 9 to 13 years old (91 gifted, M age = 10.9 yr., SD = 1.8; 93 non-gifted, M age = 11.0 yr., SD = 1.7), were tested individually on three information processing tasks: an inspection time task, a choice reaction time task, and an abstract matching task. Intellectually gifted children outperformed their non-gifted peers on all three tasks, obtaining shorter reaction times with greater accuracy. The findings supported the validity of information processing speed in identifying intellectually gifted children.

  2. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

    PubMed

    Moore, Amy Lawson; Miller, Terissa M

    2018-01-01

    The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
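
    Among the coefficients reported, split-half reliability has a simple closed form: correlate the two half-test scores, then apply the Spearman-Brown step-up correction. A generic sketch (the half-test scores in the usage check are invented, not study data):

```python
# Split-half reliability: Pearson correlation of the two half-test
# scores, stepped up to full-test length via Spearman-Brown.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

def split_half_reliability(half_a, half_b):
    r = pearson_r(half_a, half_b)
    return 2 * r / (1 + r)   # Spearman-Brown correction
```

    The correction compensates for the fact that each half is only half as long as the full test, which would otherwise understate reliability.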

  3. Volpe Aircraft Noise Certification Software & Methodology Validation/Audit General Information, Data Submittal Guidelines, and Process Details; Letter Report V324-FB48B3-LR2

    DOT National Transportation Integrated Search

    2017-08-18

    As required by Federal Aviation Administration (FAA) Order 8110.4C: Type Certification Process (most recently revised as Change 5, 20 December, 2011), the Volpe Center Acoustics Facility (Volpe), in support of the FAA Office of Environmen...

  4. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. 
The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  5. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  6. Longitudinal Validation of General and Specific Structural Features of Personality Pathology

    PubMed Central

    Wright, Aidan G.C.; Hopwood, Christopher J.; Skodol, Andrew E.; Morey, Leslie C.

    2016-01-01

    Theorists have long argued that personality disorder (PD) is best understood in terms of general impairments shared across the disorders as well as more specific instantiations of pathology. A model based on this theoretical structure was proposed as part of the DSM-5 revision process. However, only recently has this structure been subjected to formal quantitative evaluation, with little in the way of validation efforts via external correlates or prospective longitudinal prediction. We used the Collaborative Longitudinal Study of Personality Disorders dataset to: (1) estimate structural models that parse general from specific variance in personality disorder features, (2) examine patterns of growth in general and specific features over the course of 10 years, and (3) establish concurrent and dynamic longitudinal associations in PD features and a host of external validators including basic personality traits and psychosocial functioning scales. We found that general PD exhibited much lower absolute stability and was most strongly related to broad markers of psychosocial functioning, concurrently and longitudinally, whereas specific features had much higher mean stability and exhibited more circumscribed associations with functioning. However, both general and specific factors showed recognizable associations with normative and pathological traits. These results can inform efforts to refine the conceptualization and diagnosis of personality pathology. PMID:27819472

  7. The individual therapy process questionnaire: development and validation of a revised measure to evaluate general change mechanisms in psychotherapy.

    PubMed

    Mander, Johannes

    2015-01-01

    There is a dearth of measures specifically designed to assess empirically validated mechanisms of therapeutic change. To fill in this research gap, the aim of the current study was to develop a measure that covers a large variety of empirically validated mechanisms of change with corresponding versions for the patient and therapist. To develop an instrument that is based on several important change process frameworks, we combined two established change mechanisms instruments: the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP) and the Scale of the Therapeutic Alliance-Revised (STA-R). In our study, 457 psychosomatic inpatients completed the SACiP and the STA-R and diverse outcome measures in early, middle and late stages of psychotherapy. Data analyses were conducted using factor analyses and multilevel modelling. The psychometric properties of the resulting Individual Therapy Process Questionnaire were generally good to excellent, as demonstrated by (a) exploratory factor analyses on both patient and therapist ratings, (b) CFA on later measuring times, (c) high internal consistencies and (d) significant outcome predictive effects. The parallel forms of the ITPQ deliver opportunities to compare the patient and therapist perspectives for a broader range of facets of change mechanisms than was hitherto possible. Consequently, the measure can be applied in future research to more specifically analyse different change mechanism profiles in session-to-session development and outcome prediction. Key Practitioner Message This article describes the development of an instrument that measures general mechanisms of change in psychotherapy from both the patient and therapist perspectives. Post-session item ratings from both the patient and therapist can be used as feedback to optimize therapeutic processes. We provide a detailed discussion of measures developed to evaluate therapeutic change mechanisms. 
Copyright © 2014 John Wiley & Sons, Ltd.

  8. Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwald, Martin

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with a view to extrapolating needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.

  9. The Validity of Functional Near-Infrared Spectroscopy Recordings of Visuospatial Working Memory Processes in Humans.

    PubMed

    Witmer, Joëlle S; Aeschlimann, Eva A; Metz, Andreas J; Troche, Stefan J; Rammsayer, Thomas H

    2018-04-05

    Functional near infrared spectroscopy (fNIRS) is increasingly used for investigating cognitive processes. To provide converging evidence for the validity of fNIRS recordings in cognitive neuroscience, we investigated functional activation in the frontal cortex in 43 participants during the processing of a visuospatial working memory (WM) task and a sensory duration discrimination (DD) task functionally unrelated to WM. To distinguish WM-related processes from a general effect of increased task demand, we applied an adaptive approach, which ensured that subjective task demand was virtually identical for all individuals and across both tasks. Our specified region of interest covered Brodmann Area 8 of the left hemisphere, known for its important role in the execution of WM processes. Functional activation, as indicated by an increase of oxygenated and a decrease of deoxygenated hemoglobin, was shown for the WM task, but not in the DD task. The overall pattern of results indicated that hemodynamic responses recorded by fNIRS are sensitive to specific visuospatial WM capacity-related processes and do not reflect a general effect of increased task demand. In addition, the finding that no such functional activation could be shown for participants with far above-average mental ability suggested different cognitive processes adopted by this latter group.

  10. The Validity of Functional Near-Infrared Spectroscopy Recordings of Visuospatial Working Memory Processes in Humans

    PubMed Central

    Witmer, Joëlle S.; Aeschlimann, Eva A.; Metz, Andreas J.; Rammsayer, Thomas H.

    2018-01-01

    Functional near infrared spectroscopy (fNIRS) is increasingly used for investigating cognitive processes. To provide converging evidence for the validity of fNIRS recordings in cognitive neuroscience, we investigated functional activation in the frontal cortex in 43 participants during the processing of a visuospatial working memory (WM) task and a sensory duration discrimination (DD) task functionally unrelated to WM. To distinguish WM-related processes from a general effect of increased task demand, we applied an adaptive approach, which ensured that subjective task demand was virtually identical for all individuals and across both tasks. Our specified region of interest covered Brodmann Area 8 of the left hemisphere, known for its important role in the execution of WM processes. Functional activation, as indicated by an increase of oxygenated and a decrease of deoxygenated hemoglobin, was shown for the WM task, but not in the DD task. The overall pattern of results indicated that hemodynamic responses recorded by fNIRS are sensitive to specific visuospatial WM capacity-related processes and do not reflect a general effect of increased task demand. In addition, the finding that no such functional activation could be shown for participants with far above-average mental ability suggested different cognitive processes adopted by this latter group. PMID:29621179

  11. Verifying Stability of Dynamic Soft-Computing Systems

    NASA Technical Reports Server (NTRS)

    Wen, Wu; Napolitano, Marcello; Callahan, John

    1997-01-01

    Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness for building intelligent systems that are flexible and robust. Although recent research has shown that certain classes of neuro-fuzzy controllers can be proven bounded and stable, these proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach to system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundation, as well as building practical tools, for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classical nonlinear control theory and recent results on its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root-locus plots have helped conventional control design and validation.

  12. Airport Facility Queuing Model Validation

    DOT National Transportation Integrated Search

    1977-05-01

    Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...

  13. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
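    The hierarchy of assumptions distinguished in this abstract, classical PTRW with independent increments versus correlated PTRW with simple linear correlation, can be illustrated with a minimal sketch. This is not the authors' copula-based higher-order model; it is a hypothetical order-1, multi-Gaussian increment generator (the function name `correlated_ptrw` is made up for illustration), i.e., the simplest correlated regime in the hierarchy:

```python
import numpy as np

def correlated_ptrw(n_steps, rho, sigma=1.0, rng=None):
    """Particle increments as a stationary AR(1) (order-1 Markov, Gaussian)
    process: each increment keeps linear correlation `rho` with the previous
    one. rho = 0 recovers classical PTRW with independent increments.
    """
    rng = np.random.default_rng(rng)
    dx = np.empty(n_steps)
    dx[0] = sigma * rng.standard_normal()           # stationary start
    noise = sigma * np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_steps - 1)
    for t in range(1, n_steps):
        dx[t] = rho * dx[t - 1] + noise[t - 1]
    return np.cumsum(dx), dx                        # positions, increments

pos, dx = correlated_ptrw(100_000, rho=0.6, rng=0)
lag1 = np.corrcoef(dx[:-1], dx[1:])[0, 1]           # close to 0.6
```

    Setting `rho=0` gives the classical PTRW/CTRW assumption; the non-Gaussian dependence beyond linear correlation reported in the abstract is exactly what such a multi-Gaussian order-1 model cannot capture.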

  14. Aesthetic dermatology and emotional well-being questionnaire.

    PubMed

    Martínez-González, M Covadonga; Martínez-González, Raquel-Amaya; Guerra-Tapia, Aurora

    2014-12-01

    In recent years, esthetic dermatology has developed greatly as a subspecialty of dermatology. It is important to know to what extent the general population regards this branch of the medical-surgical specialty as being of interest and contributing to emotional well-being. To analyze the technical features of a questionnaire designed to reflect the general population's perception of esthetic dermatology and its contribution to emotional well-being. Production and psychometric analysis of a self-administered questionnaire on esthetic dermatology and emotional well-being (DEBIE). The questionnaire comprises 57 items and was applied to a sample of 770 people from the general population. The drawing-up process of the questionnaire is described to provide content validity. Item analysis was carried out together with exploratory and confirmatory factor analysis to assess the structure and construct validity of the tool. Internal consistency (reliability) and concurrent validity were also verified. The DEBIE questionnaire (Spanish acronym for Aesthetic Dermatology and Emotional Well-being) revolves around six factors explaining 53.91% of the variance; it shows a high level of internal consistency (Cronbach's α = 0.90) and reasonable criterion validity. The DEBIE questionnaire brings together adequate psychometric properties and can be applied to assess the perception that the general population has of esthetic dermatology and its contribution to their emotional well-being. © 2014 Wiley Periodicals, Inc.
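    The internal consistency figure reported above (Cronbach's α of 0.90) is computed directly from the item-score matrix. A minimal sketch of the statistic itself, using simulated data (the function name and numbers are illustrative, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical data: six items driven by one common factor plus item
# noise, which yields high internal consistency.
rng = np.random.default_rng(42)
base = rng.standard_normal((200, 1))
items = base + 0.3 * rng.standard_normal((200, 6))
alpha = cronbach_alpha(items)   # high, close to 1
```

    The statistic rises toward 1 as the items covary more strongly relative to their individual variances, which is why a coherent six-factor instrument can reach values like 0.90.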

  15. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
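    For readers unfamiliar with GCV, the criterion itself is simple; the paper's contribution is making its evaluation cheap for large problems via Lanczos and Gauss-quadrature approximations. Below is a hypothetical small-scale sketch (names are illustrative, not from the paper) that evaluates the exact GCV score for a ridge-type regularization parameter and picks the minimizer:

```python
import numpy as np

def gcv_score(X, y, lam):
    """Generalized cross-validation score for ridge-regularized least squares:
    GCV(lam) = n * ||(I - A(lam)) y||^2 / trace(I - A(lam))^2,
    where A(lam) = X (X^T X + lam I)^(-1) X^T is the influence matrix.
    """
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * float(resid @ resid) / (n - np.trace(A)) ** 2

# Pick the regularization parameter minimizing GCV on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = rng.standard_normal(10)
y = X @ beta + 0.5 * rng.standard_normal(50)
lams = [10.0 ** k for k in range(-4, 3)]
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
```

    The exact score requires the trace of an n-by-n influence matrix, which is what becomes prohibitive for images; the Lanczos/Gauss-quadrature techniques in the paper approximate exactly that trace term.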

  16. Developing, validating and consolidating the doctor-patient relationship: the patients' views of a dynamic process.

    PubMed Central

    Gore, J; Ogden, J

    1998-01-01

    BACKGROUND: Previous research has examined the doctor-patient relationship in terms of its therapeutic effect, the need to consider the patients' models of their illness, and the patients' expectations of their doctor. However, to date, no research has examined the patients' views of the doctor-patient relationship. AIM: To examine patients' views of the process of creating a relationship with their general practitioner (GP). METHOD: A qualitative design was used involving in-depth interviews with 27 frequently attending patients from four urban general practices. They were chosen to provide a heterogeneous group in terms of age, sex, and ethnicity. RESULTS: The responders described creating the relationship in terms of three stages: development, validation, and consolidation. The development stage involved overcoming initial reservations, actively searching for a doctor that met the patient's needs, or knowing from the start that the doctor was the right one for them. The validation stage involved evaluating the nature of the relationship by searching for evidence of caring, comparing their doctor with others, storing key events for illustration of the value of the relationship, recruiting the views of others to support their own perspectives, and the willingness to make tradeoffs. The consolidation stage involved testing and setting boundaries concerned with knowledge, power, and a personal relationship. CONCLUSION: Creating a relationship with a GP is a dynamic process involving an active patient who searches out a GP who matches their own representation of the 'ideal', selects and retains information to validate their choice, and locates mutually acceptable boundaries. PMID:9800396

  17. 42 CFR 493.1773 - Standard: Basic inspection requirements for all laboratories issued a CLIA certificate and CLIA...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... issued a certificate of accreditation, must permit CMS or a CMS agent to conduct validation and complaint inspections. (b) General requirements. As part of the inspection process, CMS or a CMS agent may require the... testing process (preanalytic, analytic, and postanalytic). (4) Permit CMS or a CMS agent access to all...

  18. 42 CFR 493.1773 - Standard: Basic inspection requirements for all laboratories issued a CLIA certificate and CLIA...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... issued a certificate of accreditation, must permit CMS or a CMS agent to conduct validation and complaint inspections. (b) General requirements. As part of the inspection process, CMS or a CMS agent may require the... testing process (preanalytic, analytic, and postanalytic). (4) Permit CMS or a CMS agent access to all...

  19. 42 CFR 493.1773 - Standard: Basic inspection requirements for all laboratories issued a CLIA certificate and CLIA...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... issued a certificate of accreditation, must permit CMS or a CMS agent to conduct validation and complaint inspections. (b) General requirements. As part of the inspection process, CMS or a CMS agent may require the... testing process (preanalytic, analytic, and postanalytic). (4) Permit CMS or a CMS agent access to all...

  20. 42 CFR 493.1773 - Standard: Basic inspection requirements for all laboratories issued a CLIA certificate and CLIA...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... issued a certificate of accreditation, must permit CMS or a CMS agent to conduct validation and complaint inspections. (b) General requirements. As part of the inspection process, CMS or a CMS agent may require the... testing process (preanalytic, analytic, and postanalytic). (4) Permit CMS or a CMS agent access to all...

  1. 42 CFR 493.1773 - Standard: Basic inspection requirements for all laboratories issued a CLIA certificate and CLIA...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... issued a certificate of accreditation, must permit CMS or a CMS agent to conduct validation and complaint inspections. (b) General requirements. As part of the inspection process, CMS or a CMS agent may require the... testing process (preanalytic, analytic, and postanalytic). (4) Permit CMS or a CMS agent access to all...

  2. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE PAGES

    Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...

    2017-04-01

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  3. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacelli, Giorgio; Coe, Ryan; Patterson, David

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  4. Reliability and validity of a short version of the general functioning subscale of the McMaster Family Assessment Device.

    PubMed

    Boterhoven de Haan, Katrina L; Hafekost, Jennifer; Lawrence, David; Sawyer, Michael G; Zubrick, Stephen R

    2015-03-01

    The General Functioning 12-item subscale (GF12) of the McMaster Family Assessment Device (FAD) has been validated as a single-index measure to assess family functioning. This study reports on the reliability and validity of using only the six positive items from the General Functioning subscale (GF6+). Existing data from two Western Australian studies, the Raine Study (RS) and the Western Australian Child Health Survey (WACHS), were used to analyze the psychometric properties of the GF6+ subscale. The results demonstrated that the GF6+ subscale had virtually equivalent psychometric properties and identified almost all of the same families with healthy or unhealthy levels of functioning as the full GF12 subscale. In consideration of the constraints faced by large-scale population-based surveys, the findings of this study support the use of the GF6+ subscale from the FAD as a quick and effective tool to assess the overall functioning of families. © 2014 Family Process Institute.

  5. 21 CFR 1311.100 - General.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... pharmacy may process electronic prescriptions for controlled substances only if all of the following conditions are met: (1) The pharmacy uses a pharmacy application that meets all of the applicable... pharmacy, specified in part 1306 of this chapter, to ensure the validity of a controlled substance...

  6. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  7. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, and more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
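    The Xie and Beni index singled out in this abstract scores a fuzzy partition by compactness relative to separation, with lower values better. A minimal sketch of the index itself (the helper name is hypothetical; the paper's MI-based auto-search framework is not reproduced here):

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    """Xie-Beni validity index for a fuzzy c-means partition:
    XB = sum_{i,j} u_ij^m * ||x_j - v_i||^2 / (n * min_{i!=k} ||v_i - v_k||^2).
    Lower values indicate compact, well-separated clusters."""
    n = X.shape[0]
    # Compactness: membership-weighted squared distances to centers, (c, n).
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    compact = float(((U ** m) * d2).sum())
    # Separation: smallest squared distance between distinct centers.
    c = centers.shape[0]
    sep = min(((centers[i] - centers[k]) ** 2).sum()
              for i in range(c) for k in range(c) if i != k)
    return compact / (n * sep)

# Two tight, well-separated blobs with crisp memberships -> small index.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
U = np.zeros((2, 40))
U[0, :20] = 1.0
U[1, 20:] = 1.0
xb = xie_beni(X, centers, U)
```

    In a cluster-number search, the candidate count of clusters minimizing this index is chosen; the MIV framework described above synthesizes it with other indices across multiply imputed datasets.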

  8. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate MI-based Xie and Beni index for fuzzy-clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  9. Establishing usability heuristics for heuristics evaluation in a specific domain: Is there a consensus?

    PubMed

    Hermawati, Setia; Lawson, Glyn

    2016-09-01

    Heuristics evaluation is frequently employed to evaluate usability. While general heuristics are suitable for evaluating most user interfaces, there is still a need to establish heuristics for specific domains to ensure that their specific usability issues are identified. This paper presents a comprehensive review of 70 studies related to usability heuristics for specific domains. The aim of this paper is to review the processes that were applied to establish heuristics in specific domains and to identify gaps in order to provide recommendations for future research and areas of improvement. The most urgent issue found is the deficiency of validation effort following the proposition of heuristics, and the lack of robustness and rigour of the validation methods adopted. Whether domain-specific heuristics perform better or worse than general ones is inconclusive, owing to the lack of validation quality and of clarity on how to assess the effectiveness of heuristics for specific domains. The lack of validation quality also hampers efforts to improve existing heuristics for specific domains, as their weaknesses are not addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Invention and Gain Analysis.

    ERIC Educational Resources Information Center

    Weber, Robert J.; Dixon, Stacey

    1989-01-01

    Gain analysis is applied to the invention of the sewing needle as well as different sewing implements and modes of sewing. The analysis includes a two-subject experiment. To validate the generality of gain heuristics and underlying switching processes, the invention of the assembly line is also analyzed. (TJH)

  11. Verification and Validation of the General Mission Analysis Tool (GMAT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Qureshi, Rizwan H.; Cooley, D. Steven; Parker, Joel J. K.; Grubb, Thomas G.

    2014-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the General Mission Analysis Tool (GMAT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort produced approximately 13,000 test scripts that are run as part of the nightly build-test process. In addition, we created approximately 3000 automated GUI tests that are run every two weeks. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results in most areas, and detailed test results for key areas. The final product of the V&V effort presented in this paper was GMAT version R2013a, the first Gold release of the software, with completely updated documentation and greatly improved quality. Release R2013a was the staging release for flight qualification performed at Goddard Space Flight Center (GSFC), ultimately resulting in GMAT version R2013b.

  12. Diurnal cycle of precipitation at Dakar in the model LMDZ

    NASA Astrophysics Data System (ADS)

    Sane, Y.; Bonazzola, M.; Hourdin, F.; Diongue-Niang, A.

    2009-04-01

    Most diurnal cycles of precipitation are not well represented in general circulation models (GCMs). This is a concern for climate modeling because of the key role of clouds in the radiative and water budgets. The diurnal phasing of deep convection is a challenge, the peak of deep convection generally being simulated too early in the day (Guichard et al., 2004). Thus a "thermal plume model", a mass-flux scheme combined with a classical diffusive approach, originally developed to represent turbulent transport in the dry convective boundary layer, is extended to the representation of cloud processes. The modified parametrization was validated in a 1D configuration against results of large-eddy simulations (Rio, 2008). It is here validated in a 3D configuration against in situ precipitation measurements from the AMMA campaign. An analysis of the diurnal cycle of precipitation as measured by the pluviometer network in the Dakar area is performed. The improvement of the diurnal cycle of convection in the GCM is demonstrated, and the processes involved are analysed.

  13. Validation of the Chinese expanded Euthanasia Attitude Scale.

    PubMed

    Chong, Alice Ming-Lin; Fok, Shiu-Yeu

    2013-01-01

    This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be used to examine attitudes toward general, active, passive, and non-voluntary euthanasia. The scale considers the role effect in decision-making about euthanasia requests and facilitates cross-cultural comparison of attitudes toward euthanasia. The new Chinese scale is more robust than its Western predecessors conceptually and measurement-wise.

  14. Validity and reliability of the Traditional Chinese version of the Multidimensional Fatigue Inventory in general population.

    PubMed

    Chuang, Li-Ling; Chuang, Yu-Fen; Hsu, Miao-Ju; Huang, Ying-Zu; Wong, Alice M K; Chang, Ya-Ju

    2018-01-01

    Fatigue is a common symptom in the general population and has a substantial effect on individuals' quality of life. The Multidimensional Fatigue Inventory (MFI) has been widely used to quantify the impact of fatigue, but no Traditional Chinese translation has yet been validated. The goal of this study was to translate the MFI from English into Traditional Chinese ('the MFI-TC') and subsequently to examine its validity and reliability. The study recruited a convenience sample of 123 people from various age groups in Taiwan. The MFI was examined using a two-step process: (1) translation and back-translation of the instrument; and (2) examination of construct validity, convergent validity, internal consistency, test-retest reliability, and measurement error. The validity and reliability of the MFI-TC were assessed by factor analysis, Spearman rho correlation coefficient, Cronbach's alpha coefficient, intraclass correlation coefficient (ICC), minimal detectable change (MDC), and Bland-Altman analysis. All participants completed the Short-Form-36 Health Survey Taiwan Form (SF-36-T) and the Chinese version of the Pittsburgh Sleep Quality Index (PSQI) concurrently to test the convergent validity of the MFI-TC. Test-retest reliability was assessed by readministration of the MFI-TC after a 1-week interval. Factor analysis confirmed the four dimensions of fatigue: general/physical fatigue, reduced activity, reduced motivation, and mental fatigue. A four-factor model was extracted, combining general fatigue and physical fatigue as one factor. The results demonstrated moderate convergent validity when correlating fatigue (MFI-TC) with quality of life (SF-36-T) and sleep disturbances (PSQI) (Spearman's rho = 0.68 and 0.47, respectively). Cronbach's alpha for the MFI-TC total scale and subscales ranged from 0.73 (mental fatigue subscale) to 0.92 (MFI-TC total scale). 
ICCs ranged from 0.85 (reduced motivation) to 0.94 (MFI-TC total scale), and the MDC ranged from 2.33 points (mental fatigue) to 9.5 points (MFI-TC total scale). The Bland-Altman analyses showed no significant systematic bias between the repeated assessments. The results support the use of the Traditional Chinese version of the MFI as a comprehensive instrument for measuring specific aspects of fatigue. Clinicians and researchers should consider interpreting general fatigue and physical fatigue as one subscale when measuring fatigue in Traditional Chinese-speaking populations.
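    The MDC figures quoted above follow from the standard relationship between test-retest reliability and measurement error: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A small sketch of that computation (the SD value below is hypothetical, not taken from the study):

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM, the smallest score change that
    exceeds measurement noise between two assessments."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Illustrative only: a subscale with SD = 10 points and ICC = 0.94
# would need a change of about 6.8 points to exceed measurement error.
change_needed = mdc95(10.0, 0.94)
```

    Subscales with lower reliability (e.g. ICC = 0.85) yield proportionally larger MDC values, which is why the less reliable subscales above show wider detectable-change thresholds.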

  15. Validity and reliability of the Traditional Chinese version of the Multidimensional Fatigue Inventory in general population

    PubMed Central

    Chuang, Li-Ling; Chuang, Yu-Fen; Hsu, Miao-Ju; Huang, Ying-Zu; Wong, Alice M. K.

    2018-01-01

    Background Fatigue is a common symptom in the general population and has a substantial effect on individuals’ quality of life. The Multidimensional Fatigue Inventory (MFI) has been widely used to quantify the impact of fatigue, but no Traditional Chinese translation has yet been validated. The goal of this study was to translate the MFI from English into Traditional Chinese (‘the MFI-TC’) and subsequently to examine its validity and reliability. Methods The study recruited a convenience sample of 123 people from various age groups in Taiwan. The MFI was examined using a two-step process: (1) translation and back-translation of the instrument; and (2) examination of construct validity, convergent validity, internal consistency, test-retest reliability, and measurement error. The validity and reliability of the MFI-TC were assessed by factor analysis, Spearman rho correlation coefficient, Cronbach’s alpha coefficient, intraclass correlation coefficient (ICC), minimal detectable change (MDC), and Bland-Altman analysis. All participants completed the Short-Form-36 Health Survey Taiwan Form (SF-36-T) and the Chinese version of the Pittsburgh Sleep Quality Index (PSQI) concurrently to test the convergent validity of the MFI-TC. Test-retest reliability was assessed by readministration of the MFI-TC after a 1-week interval. Results Factor analysis confirmed the four dimensions of fatigue: general/physical fatigue, reduced activity, reduced motivation, and mental fatigue. A four-factor model was extracted, combining general fatigue and physical fatigue as one factor. The results demonstrated moderate convergent validity when correlating fatigue (MFI-TC) with quality of life (SF-36-T) and sleep disturbances (PSQI) (Spearman's rho = 0.68 and 0.47, respectively). Cronbach’s alpha for the MFI-TC total scale and subscales ranged from 0.73 (mental fatigue subscale) to 0.92 (MFI-TC total scale). 
ICCs ranged from 0.85 (reduced motivation) to 0.94 (MFI-TC total scale), and the MDC ranged from 2.33 points (mental fatigue) to 9.5 points (MFI-TC total scale). The Bland-Altman analyses showed no significant systematic bias between the repeated assessments. Conclusions The results support the use of the Traditional Chinese version of the MFI as a comprehensive instrument for measuring specific aspects of fatigue. Clinicians and researchers should consider interpreting general fatigue and physical fatigue as one subscale when measuring fatigue in Traditional Chinese-speaking populations. PMID:29746466
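    The MDC values reported in this record follow from the conventional relationship MDC95 = 1.96 · √2 · SEM, where SEM = SD · √(1 − ICC). A minimal sketch of that arithmetic (the SD values below are hypothetical, not taken from the study):

    ```python
    import math

    def sem(sd: float, icc: float) -> float:
        """Standard error of measurement from the test-retest ICC."""
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sd: float, icc: float) -> float:
        """Minimal detectable change at 95% confidence."""
        return 1.96 * math.sqrt(2.0) * sem(sd, icc)

    # Hypothetical subscale SD of 3.4 paired with the reported
    # reduced-motivation ICC of 0.85:
    print(round(mdc95(3.4, 0.85), 2))  # → 3.65
    ```

    Reversing the computation this way makes clear why the total scale, with its larger SD, yields a larger MDC in raw points despite its higher ICC.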

  16. Bilingual advantages in executive functioning: problems in convergent validity, discriminant validity, and the identification of the theoretical constructs

    PubMed Central

    Paap, Kenneth R.; Sawi, Oliver

    2014-01-01

A sample of 58 bilingual and 62 monolingual university students completed four tasks commonly used to test for bilingual advantages in executive functioning (EF): antisaccade, attentional network test, Simon, and color-shape switching. Across the four tasks, 13 different indices were derived that are assumed to reflect individual differences in inhibitory control, monitoring, or switching. The effects of bilingualism on the 13 measures were explored by directly comparing the means of the two language groups and through regression analyses using a continuous measure of bilingualism and multiple demographic characteristics as predictors. Across the 13 different measures and two types of data analysis, there were very few significant results, and those that did occur supported a monolingual advantage. An equally important goal was to assess convergent validity through cross-task correlations of indices assumed to measure the same component of executive functioning. Most of the correlations using difference-score measures were non-significant, and many were near zero. Although modestly higher levels of convergent validity are sometimes reported, a review of the existing literature suggests that bilingual advantages (or disadvantages) may reflect task-specific differences that are unlikely to generalize to important general differences in EF. Finally, as cautioned by Salthouse, assumed measures of executive functioning may also be threatened by a lack of discriminant validity that separates individual or group differences in EF from those in general fluid intelligence or simple processing speed. PMID:25249988

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr; Li, Juan, E-mail: juanli@sdu.edu.cn; Ma, Jin, E-mail: jinma@usc.edu

In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process as well as its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  18. The forensic validity of visual analytics

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.

    2008-01-01

The wider use of visualization and visual analytics in wide-ranging fields has led to the need for visual analytics capabilities to be legally admissible, especially when applied to digital forensics. This brings the need to consider legal implications when performing visual analytics, an issue not traditionally examined in visualization and visual analytics research. While digital data is generally admissible under the Federal Rules of Evidence [10][21], a comprehensive validation of the digital evidence is considered prudent. A comprehensive validation requires validation of the digital data under rules for authentication, hearsay, the best evidence rule, and privilege. Additional issues with digital data arise around admissibility and the validity of what information was examined, to what extent, and whether the analysis process was sufficiently covered by a search warrant. For instance, a search warrant generally covers very narrow requirements as to what law enforcement is allowed to examine and acquire during an investigation. When searching a hard drive for child pornography, how admissible is evidence of an unrelated crime, e.g. drug dealing? This is further complicated by the concept of "in plain view": when analyzing a hard drive, what would be considered "in plain view"? The purpose of this paper is to discuss digital forensics and its related issues as they apply to visual analytics, identify how visual analytics techniques fit into the digital forensics analysis process, examine how visual analytics techniques can improve the legal admissibility of digital data, and identify what research is needed to further improve this process. 
The goal of this paper is to open up consideration of legal ramifications among the visualization community; the author is not a lawyer and the discussions are not meant to be inclusive of all differences in laws between states and countries.

  19. The linguistic validation of Russian version of Dutch four-dimensional symptoms questionnaire (4DSQ) for assessing distress, depression, anxiety and somatization in patients with borderline psychosomatic disorders.

    PubMed

    Arnautov, V S; Reyhart, D V; Smulevich, A B; Yakhno, N N; Terluin, B; Zakharova, E K; Andryushchenko, A V; Parfenov, V A; Zamergrad, M V; Romanov, D V

    2015-12-12

The four-dimensional symptom questionnaire (4DSQ) is an originally Dutch self-report questionnaire that was developed in primary care to distinguish non-specific general distress from depression, anxiety and somatization. In order to produce an appropriate translated Russian version, a process of linguistic validation was initiated. This process was carried out according to the "Linguistic Validation Manual for Health Outcome Assessments" developed by the MAPI Institute. The aim was to produce a Russian version of the 4DSQ that is conceptually and linguistically equivalent to the original questionnaire. The original Dutch version of the 4DSQ was translated into Russian by one translator. The validated English version of the 4DSQ was translated into Russian by another translator without mutual consultation. A consensus version was created based on the two translated versions. A back translation into Dutch was then performed, some changes were made to the consensus Russian version, and a second target version was developed based on these results. The second target version was sent to an appropriate group of reviewers and updated based on their comments. Afterwards, this version was tested in patients during a cognitive interview. The study protocol was approved by the Independent Interdisciplinary Ethics Committee on Ethical Review for Clinical Studies, in compliance with the Helsinki Declaration, ICH-GCP guidelines and local regulations. Enrolled patients provided written informed consent. After the process of forward and backward translation, consultant and developer comments, and clinician and cognitive review, the final version of the Russian 4DSQ was developed for the assessment of distress, depression, anxiety and somatization. 
As a result of the translation procedures and cognitive interviews, the Russian 4DSQ linguistically corresponds to the original Dutch 4DSQ and can proceed to psychometric validation for further use in general practice.

  20. A Shared Memory Algorithm and Proof for the Generalized Alternative Construct in CSP (Communicating Sequential Processes)

    DTIC Science & Technology

    1987-06-01

shared variables. This will be discussed later. One procedure merits special attention. CheckAndCommit(m, gi): INTEGER is called by process Pi (i denotes the local process) to check that "valid" communications can take place between Pi, using guard gi, and Pm (m denotes the remote process). If so, P... local guard gi. By matching we mean gj contains an I/O operation with Pi. By compatible we mean gi and gj do not both contain input (output) commands

  1. Validity of GRE General Test scores and TOEFL scores for graduate admission to a technical university in Western Europe

    NASA Astrophysics Data System (ADS)

    Zimmermann, Judith; von Davier, Alina A.; Buhmann, Joachim M.; Heinimann, Hans R.

    2018-01-01

    Graduate admission has become a critical process in tertiary education, whereby selecting valid admissions instruments is key. This study assessed the validity of Graduate Record Examination (GRE) General Test scores for admission to Master's programmes at a technical university in Europe. We investigated the indicative value of GRE scores for the Master's programme grade point average (GGPA) with and without the addition of the undergraduate GPA (UGPA) and the TOEFL score, and of GRE scores for study completion and Master's thesis performance. GRE scores explained 20% of the variation in the GGPA, while additional 7% were explained by the TOEFL score and 3% by the UGPA. Contrary to common belief, the GRE quantitative reasoning score showed only little explanatory power. GRE scores were also weakly related to study progress but not to thesis performance. Nevertheless, GRE and TOEFL scores were found to be sensible admissions instruments. Rigorous methodology was used to obtain highly reliable results.
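    The incremental variance-explained comparison described in this record can be reproduced in miniature: fit an OLS model on one predictor set, add the next set, and difference the R² values. A sketch on purely synthetic data (scores, coefficients and sample size below are made up and carry no relation to the study's results):

    ```python
    import numpy as np

    def r_squared(X: np.ndarray, y: np.ndarray) -> float:
        """R^2 of an ordinary least-squares fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(42)
    n = 300
    gre = rng.normal(size=(n, 2))     # synthetic verbal/quantitative scores
    toefl = rng.normal(size=(n, 1))   # synthetic TOEFL score
    ggpa = 0.5 * gre[:, 0] + 0.3 * toefl[:, 0] + rng.normal(size=n)

    r2_gre = r_squared(gre, ggpa)
    r2_full = r_squared(np.hstack([gre, toefl]), ggpa)
    print(f"GRE alone: {r2_gre:.2f}; TOEFL adds: {r2_full - r2_gre:.2f}")
    ```

    Because adding predictors can never lower OLS R², the increment is the meaningful quantity, which is why the study reports the TOEFL's 7% on top of the GRE's 20%.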

  2. Origin of the violation of the Gottfried sum rule

    NASA Astrophysics Data System (ADS)

    Hwang, W.-Y. P.; Speth, J.

    1992-08-01

    Using generalized Sullivan processes to generate sea-quark distributions of a nucleon at Q2=4 GeV2, we find that the recent finding by the New Muon Collaboration on the violation of the Gottfried sum rule can be understood quantitatively, including the shape of Fp2(x)-Fn2(x) as a function of x. The agreement may be seen as a clear evidence toward the validity of a recent suggestion of Hwang, Speth, and Brown that the sea distributions of a hadron, at low and moderate Q2 (at least up to a few GeV2), may be attributed primarily to generalized Sullivan processes.

  3. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
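    The recurrence statistics described above can be sketched nonparametrically: flag returns whose magnitude exceeds a sample quantile and record the gaps between flags. A toy version (the heavy-tailed series and the 95% threshold are illustrative choices, not the paper's exact definition):

    ```python
    import numpy as np

    def excursion_waiting_times(returns: np.ndarray, q: float = 0.95) -> np.ndarray:
        """Gaps (in observations) between 'large returns', defined
        nonparametrically as |r| at or above the q-th sample quantile of |r|."""
        threshold = np.quantile(np.abs(returns), q)
        hits = np.flatnonzero(np.abs(returns) >= threshold)
        return np.diff(hits)

    rng = np.random.default_rng(1)
    r = rng.standard_t(df=3, size=10_000)  # heavy-tailed toy return series
    waits = excursion_waiting_times(r)
    print(waits.mean())  # near 1/(1-q) = 20 for i.i.d. data
    ```

    For i.i.d. data the waiting-time distribution is geometric; the paper's point is that empirical waiting times deviate from this in a way that GARCH and time-changed random-walk models fail to capture.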

  4. INTERPRETING PHYSICAL AND BEHAVIORAL HEALTH SCORES FROM NEW WORK DISABILITY INSTRUMENTS

    PubMed Central

    Marfeo, Elizabeth E.; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K.; McDonough, Christine M.; Brandt, Diane E.; Bogusz, Kara; Jette, Alan M.

    2015-01-01

    Objective To develop a system to guide interpretation of scores generated from 2 new instruments measuring work-related physical and behavioral health functioning (Work Disability – Physical Function (WD-PF) and WD – Behavioral Function (WD-BH)). Design Cross-sectional, secondary data from 3 independent samples to develop and validate the functional levels for physical and behavioral health functioning. Subjects Physical group: 999 general adult subjects, 1,017 disability applicants and 497 work-disabled subjects. Behavioral health group: 1,000 general adult subjects, 1,015 disability applicants and 476 work-disabled subjects. Methods Three-phase analytic approach including item mapping, a modified-Delphi technique, and known-groups validation analysis were used to develop and validate cut-points for functional levels within each of the WD-PF and WD-BH instrument’s scales. Results Four and 5 functional levels were developed for each of the scales in the WD-PF and WD-BH instruments. Distribution of the comparative samples was in the expected direction: the general adult samples consistently demonstrated scores at higher functional levels compared with the claimant and work-disabled samples. Conclusion Using an item-response theory-based methodology paired with a qualitative process appears to be a feasible and valid approach for translating the WD-BH and WD-PF scores into meaningful levels useful for interpreting a person’s work-related physical and behavioral health functioning. PMID:25729901

  5. The Effects of Differing Response Criteria on the Assessment of Writing Competence.

    ERIC Educational Resources Information Center

    Winters, Lynn

    The purpose of this study was to investigate the relative validities of four essay scoring systems, reflecting alternative conceptualizations of the writing process, for identifying "competent" writers. Each rater was trained in two of the four scoring systems: General Impression Scoring (GI), Diederich Expository Scale (DES), CSE…

  6. Predicting Organizational Performance: Application of Neurocomputing as an Alternative to Statistical Regression

    DTIC Science & Technology

    1989-09-01

separate network architectures would otherwise have to be performed for each of the nearly 70 cross-validation regressions. Fixing the composition...presentation. The generalized delta rule says the weight of each connection should be changed by an amount proportional to the product of the processing

  7. Validating and Optimizing the Effects of Model Progression in Simulation-Based Inquiry Learning

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Lazonder, Ard W.; de Jong, Ton; Anjewierden, Anjo; Bollen, Lars

    2012-01-01

    Model progression denotes the organization of the inquiry learning process in successive phases of increasing complexity. This study investigated the effectiveness of model progression in general, and explored the added value of either broadening or narrowing students' possibilities to change model progression phases. Results showed that…

  8. Internal Labor Markets: An Empirical Investigation.

    ERIC Educational Resources Information Center

    Mahoney, Thomas A.; Milkovich, George T.

Methods of internal labor market analysis for three organizational areas are presented, along with some evidence about the validity and utility of conceptual descriptions of such markets. The general concept of an internal labor market refers to the process of pricing and allocation of manpower resources within an employing organization and rests…

  9. A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers

    USDA-ARS?s Scientific Manuscript database

Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern about the accuracy of the technique arose due to selective ...

  10. The effect of feature-based attention on flanker interference processing: An fMRI-constrained source analysis.

    PubMed

    Siemann, Julia; Herrmann, Manfred; Galashan, Daniela

    2018-01-25

    The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).

  11. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  12. Cognitive Bias in the Verification and Validation of Space Flight Systems

    NASA Technical Reports Server (NTRS)

    Larson, Steve

    2012-01-01

    Cognitive bias is generally recognized as playing a significant role in virtually all domains of human decision making. Insight into this role is informally built into many of the system engineering practices employed in the aerospace industry. The review process, for example, typically has features that help to counteract the effect of bias. This paper presents a discussion of how commonly recognized biases may affect the verification and validation process. Verifying and validating a system is arguably more challenging than development, both technically and cognitively. Whereas there may be a relatively limited number of options available for the design of a particular aspect of a system, there is a virtually unlimited number of potential verification scenarios that may be explored. The probability of any particular scenario occurring in operations is typically very difficult to estimate, which increases reliance on judgment that may be affected by bias. Implementing a verification activity often presents technical challenges that, if they can be overcome at all, often result in a departure from actual flight conditions (e.g., 1-g testing, simulation, time compression, artificial fault injection) that may raise additional questions about the meaningfulness of the results, and create opportunities for the introduction of additional biases. In addition to mitigating the biases it can introduce directly, the verification and validation process must also overcome the cumulative effect of biases introduced during all previous stages of development. A variety of cognitive biases will be described, with research results for illustration. 
A handful of case studies will be presented that show how cognitive bias may have affected the verification and validation process on recent JPL flight projects, identify areas of strength and weakness, and identify potential changes or additions to commonly used techniques that could provide a more robust verification and validation of future systems.

  13. [Factor Analysis: Principles to Evaluate Measurement Tools for Mental Health].

    PubMed

    Campo-Arias, Adalberto; Herazo, Edwin; Oviedo, Heidi Celina

    2012-09-01

The validation of a measurement tool in mental health is a complex process that usually starts by estimating reliability and later addresses validity. Factor analysis is a way to determine the number of dimensions, domains or factors of a measuring tool, generally related to the construct validity of the scale. The analysis may be exploratory or confirmatory, and helps in the selection of the items with the best performance. For an acceptable factor analysis, it is necessary to follow certain steps and recommendations, conduct certain statistical tests, and rely on a proper sample of participants. Copyright © 2012 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
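    The reliability step that this record says precedes factor analysis is typically Cronbach's alpha, computable directly from an item-score matrix as α = k/(k−1) · (1 − Σvar(items)/var(total)). A minimal sketch on synthetic data (the latent-factor simulation is illustrative only):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                       # one common factor
    scores = latent + rng.normal(scale=0.8, size=(200, 4))   # 4 correlated items
    print(round(cronbach_alpha(scores), 2))
    ```

    Items driven by a single common factor, as simulated here, yield a high alpha; uncorrelated items would drive it toward zero, which is why alpha is usually checked before interpreting factor structure.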

  14. Development and Validation of the Elder Learning Barriers Scale Among Older Chinese Adults.

    PubMed

    Wang, Renfeng; De Donder, Liesbeth; De Backer, Free; He, Tao; Van Regenmortel, Sofie; Li, Shihua; Lombaerts, Koen

    2017-12-01

    This study describes the development and validation of the Elder Learning Barriers (ELB) scale, which seeks to identify the obstacles that affect the level of educational participation of older adults. The process of item pool design and scale development is presented, as well as the testing and scale refinement procedure. The data were collected from a sample of 579 older Chinese adults (aged over 55) in the Xi'an region of China. After randomly splitting the sample for cross-validation purposes, the construct validity of the ELB scale was confirmed containing five dimensions: dispositional, informational, physical, situational, and institutional barriers. Furthermore, developmental differences in factor structure have been examined among older age groups. The results indicated that the scale demonstrated good reliability and validity. We conclude in general that the ELB scale appears to be a valuable instrument for examining the learning barriers that older Chinese citizens experience for participating in organized educational activities.

  15. Validation of the ENVISAT atmospheric chemistry instruments

    NASA Astrophysics Data System (ADS)

    Snoeij, P.; Koopman, R.; Attema, E.; Zehner, C.; Wursteisen, P.; Dehn, A.; de Laurentius, M.; Frerick, J.; Mantovani, R.; Saavedra de Miguel, L.

Three atmospheric-chemistry sensors form part of the ENVISAT payload that was placed into orbit in March 2002. This paper presents the ENVISAT mission status and data policy, reviews the end-to-end performance of the GOMOS, MIPAS and SCIAMACHY observation systems, and discusses the validation aspects of these instruments. In particular, for each instrument, the review addresses mission planning, in-orbit performance, calibration, data processor algorithms and configuration, reprocessing strategy, and product quality control assessment. An important part of the quality assessment is the geophysical validation. At the ACVT validation workshop held in Frascati, Italy, from 3-7 May 2004, scientists and engineers presented analyses of the exhaustive series of tests that have been run on each of the ENVISAT atmospheric chemistry sensors since the spacecraft was launched in March 2002. On the basis of the workshop results it was decided that most of the data products provided by the ENVISAT atmospheric chemistry instruments are ready for operational delivery. Although the main validation phase for the atmospheric instruments of ENVISAT will be completed soon, validation will continue throughout the lifetime of the ENVISAT mission. The long-term validation phase will: provide assurance of data quality and accuracy for applications such as climate change research; investigate the fully representative range of geophysical conditions; investigate the fully representative range of seasonal cycles; perform long-term monitoring for instrumental drifts and other artefacts; and validate new products. This paper will also discuss the general status of the validation activities for GOMOS, MIPAS and SCIAMACHY. The main and long-term geophysical validation programme will be presented. The flight and ground-segment planning, configuration and performance characterization will be discussed. 
The evolution of each of the observation systems has been distinct during the mission history: the GOMOS instrument operation has undergone an important change, and its processing chain has been the subject of two upgrades. For MIPAS, an intervention on one of the on-board subsystems has proven necessary, and an important data-processing improvement cycle has been completed through reconfiguration of the processing chain. SCIAMACHY operations have required only minor interventions, and the presentation will focus on the processing chain evolution.

  16. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. 
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
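    As a deliberately oversimplified illustration of what such a parameterization looks like once discretized, sub-grid vertical mixing is commonly represented as a diffusivity acting on a column of grid points. A minimal sketch with no-flux boundaries (the diffusivity, grid spacing and profile are all hypothetical):

    ```python
    import numpy as np

    def diffuse_column(T: np.ndarray, kappa: float, dz: float, dt: float) -> np.ndarray:
        """One explicit time step of dT/dt = d/dz(kappa dT/dz) on a vertical
        column with insulating (no-flux) boundaries: a crude stand-in for the
        sub-grid mixing parameterizations discussed above."""
        flux = -kappa * np.diff(T) / dz  # diffusive flux between adjacent levels
        dTdt = np.zeros_like(T)
        dTdt[:-1] -= flux / dz           # each level loses the flux it exports
        dTdt[1:] += flux / dz            # the neighbouring level gains it
        return T + dt * dTdt

    column = np.array([1.0, 0.6, 0.2, 0.0])  # hypothetical temperature profile
    print(diffuse_column(column, kappa=1.0, dz=1.0, dt=0.1))
    ```

    The scheme conserves the column total by construction; the hard part, which the text emphasizes, is choosing kappa, since the real mixing processes it stands in for are neither constant nor well constrained.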

  17. 29 CFR 1607.5 - General standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false General standards for validity studies. 1607.5 Section 1607... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users may rely upon criterion-related validity studies, content validity studies or construct validity...

  18. The consultation and relational empathy (CARE) measure: development and preliminary validation and reliability of an empathy-based consultation process measure.

    PubMed

    Mercer, Stewart W; Maxwell, Margaret; Heaney, David; Watt, Graham Cm

    2004-12-01

    Empathy is a key aspect of the clinical encounter but there is a lack of patient-assessed measures suitable for general clinical settings. Our aim was to develop a consultation process measure based on a broad definition of empathy, which is meaningful to patients irrespective of their socio-economic background. Qualitative and quantitative approaches were used to develop and validate the new measure, which we have called the consultation and relational empathy (CARE) measure. Concurrent validity was assessed by correlational analysis against other validated measures in a series of three pilot studies in general practice (in areas of high or low socio-economic deprivation). Face and content validity was investigated by 43 interviews with patients from both types of areas, and by feedback from GPs and expert researchers in the field. The initial version of the new measure (pilot 1; high deprivation practice) correlated strongly (r = 0.85) with the Reynolds empathy measure (RES) and the Barrett-Lennard empathy subscale (BLESS) (r = 0.63), but had a highly skewed distribution (skew -1.879, kurtosis 3.563). Statistical analysis, and feedback from the 20 patients interviewed, the GPs and the expert researchers, led to a number of modifications. The revised, second version of the CARE measure, tested in an area of low deprivation (pilot 2) also correlated strongly with the established empathy measures (r = 0.84 versus RES and r = 0.77 versus BLESS) but had a less skewed distribution (skew -0.634, kurtosis -0.067). Internal reliability of the revised version was high (Cronbach's alpha 0.92). Patient feedback at interview (n = 13) led to only minor modification. The final version of the CARE measure, tested in pilot 3 (high deprivation practice) confirmed the validation with the other empathy measures (r = 0.85 versus RES and r = 0.84 versus BLESS) and the face validity (feedback from 10 patients). 
These preliminary results support the validity and reliability of the CARE measure as a tool for measuring patients' perceptions of relational empathy in the consultation.
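The distribution and correlation statistics reported above (Pearson's r, skew, kurtosis) are standard quantities; as a rough illustration with made-up scores rather than the CARE data, they can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def skewness(x):
    """Sample skewness (simple biased form): E[(x - mean)^3] / sd^3."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return sum((v - m) ** 3 for v in x) / (n * sd ** 3)

def excess_kurtosis(x):
    """Excess kurtosis: E[(x - mean)^4] / sd^4 - 3 (0 for a normal distribution)."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return sum((v - m) ** 4 for v in x) / (n * sd ** 4) - 3
```

A strongly negative skew, as in pilot 1, indicates scores piled up at the top of the scale; statistical packages often apply small-sample corrections, so exact values may differ slightly from these simple formulas.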

  19. Initial development and preliminary validation of a new negative symptom measure: the Clinical Assessment Interview for Negative Symptoms (CAINS).

    PubMed

    Forbes, Courtney; Blanchard, Jack J; Bennett, Melanie; Horan, William P; Kring, Ann; Gur, Raquel

    2010-12-01

    As part of an ongoing scale development process, this study provides an initial examination of the psychometric properties and validity of a new interview-based negative symptom instrument, the Clinical Assessment Interview for Negative Symptoms (CAINS), in outpatients with schizophrenia or schizoaffective disorder (N = 37). The scale was designed to address limitations of existing measures and to comprehensively assess five consensus-based negative symptoms: asociality, avolition, anhedonia (consummatory and anticipatory), affective flattening, and alogia. Results indicated satisfactory internal consistency reliability for the total CAINS scale score and promising inter-rater agreement, with clear areas identified as needing improvement. Convergent validity was evident in general agreement between the CAINS and alternative negative symptom measures. Further, CAINS subscales correlated significantly with relevant self-report measures of emotional experience as well as with social functioning. Discriminant validity of the CAINS was strongly supported by its small, non-significant relations with positive symptoms, general psychiatric symptoms, and depression. These preliminary data on an early beta version of the CAINS provide initial support for this new approach to assessing negative symptoms and suggest directions for further scale development. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Bayesian truthing as experimental verification of C4ISR sensors

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Romanov, Volodymyr; Wang, Wenjian; Nielsen, Thomas; Kostrzewski, Andrew

    2015-05-01

    In this paper, a general methodology for experimental verification/validation of the performance of C4ISR and other sensors is presented, based on Bayesian inference in general and binary sensors in particular. This methodology, called Bayesian Truthing, defines Performance Metrics for binary sensors in physics, optics, electronics, medicine, law enforcement, C3ISR, QC, ATR (Automatic Target Recognition), terrorism-related events, and many other areas. For Bayesian Truthing, the sensing medium itself is not what is truly important; it is how the decision process is affected.
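For a binary sensor, the Bayesian performance metrics alluded to above reduce to Bayes'-rule predictive values. A minimal sketch (these are the standard textbook formulas, not code from the paper):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive/negative predictive values of a binary sensor via Bayes' rule.

    sensitivity = P(alarm | target present)
    specificity = P(no alarm | target absent)
    prevalence  = P(target present)
    """
    # Total probability of an alarm: true positives plus false positives.
    p_alarm = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_alarm            # P(target | alarm)
    npv = specificity * (1 - prevalence) / (1 - p_alarm)  # P(no target | no alarm)
    return ppv, npv

# Even a highly accurate sensor yields many false alarms when targets are rare:
ppv, npv = predictive_values(0.99, 0.99, 0.01)  # ppv is only 0.5
```

This base-rate effect is why decision-level metrics, rather than raw sensor accuracy, matter in verification of rare-event detectors.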

  1. Evolution of female multiple mating: A quantitative model of the “sexually selected sperm” hypothesis

    PubMed Central

    Bocedi, Greta; Reid, Jane M

    2015-01-01

    Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405

  2. Assessing the Culture of Residency Using the C-Change Resident Survey: Validity Evidence in 34 U.S. Residency Programs.

    PubMed

    Pololi, Linda H; Evans, Arthur T; Civian, Janet T; Shea, Sandy; Brennan, Robert T

    2017-07-01

    A practical instrument is needed to reliably measure the clinical learning environment and professionalism for residents. To develop and present evidence of validity of an instrument to assess the culture of residency programs and the clinical learning environment. During 2014-2015, we surveyed residents using the C-Change Resident Survey to assess residents' perceptions of the culture in their programs. Residents in all years of training in 34 programs in internal medicine, pediatrics, and general surgery in 14 geographically diverse public and private academic health systems. The C-Change Resident Survey assessed residents' perceptions of 13 dimensions of the culture: Vitality, Self-Efficacy, Institutional Support, Relationships/Inclusion, Values Alignment, Ethical/Moral Distress, Respect, Mentoring, Work-Life Integration, Gender Equity, Racial/Ethnic Minority Equity, and self-assessed Competencies. We measured the internal reliability of each of the 13 dimensions and evaluated response process, content validity, and construct-related validity evidence by assessing relationships predicted by our conceptual model and prior research. We also assessed whether the measurements were sensitive to differences in specialty and across institutions. A total of 1708 residents completed the survey [internal medicine: n = 956, pediatrics: n = 411, general surgery: n = 311 (51% women; 16% underrepresented in medicine minority)], with a response rate of 70% (range across programs, 51-87%). Internal consistency of each dimension was high (Cronbach α: 0.73-0.90). The instrument was able to detect significant differences in the learning environment across programs and sites. Evidence of validity was supported by a good response process and the demonstration of several relationships predicted by our conceptual model. The C-Change Resident Survey assesses the clinical learning environment for residents, and we encourage further study of validity in different contexts.
Results could be used to facilitate and monitor improvements in the clinical learning environment and resident well-being.

  3. An Employer Needs Assessment for Vocational Education.

    ERIC Educational Resources Information Center

    Muraski, Ed J.; Whiteman, Dick

    An employer needs assessment study was performed at Porterville College (PC) in California in 1991 as part of a comprehensive educational planning process for PC and the surrounding area. A validated survey instrument was sent to a stratified random sample of 593 employers in the community, asking them to provide general information about their…

  4. A Methodology for Zumbo's Third Generation DIF Analyses and the Ecology of Item Responding

    ERIC Educational Resources Information Center

    Zumbo, Bruno D.; Liu, Yan; Wu, Amery D.; Shear, Benjamin R.; Olvera Astivia, Oscar L.; Ark, Tavinder K.

    2015-01-01

    Methods for detecting differential item functioning (DIF) and item bias are typically used in the process of item analysis when developing new measures; adapting existing measures for different populations, languages, or cultures; or more generally validating test score inferences. In 2007 in "Language Assessment Quarterly," Zumbo…

  5. U.S. Army Research Institute Program in Basic Research-FY 2010

    DTIC Science & Technology

    2010-11-01

    2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction ...73 Achievement in Complex Learning Environments as a Function of Information Processing Ability ...Development and Validation of a Situational Judgment Test to Predict Attrition Incrementally Over General Cognitive Ability and a Forced-Choice

  6. [Dimensional structure of the Brazilian version of the Scale of Satisfaction with Interpersonal Processes of General Medical Care].

    PubMed

    Nascimento, Maria Isabel do; Reichenheim, Michael Eduardo; Monteiro, Gina Torres Rego

    2011-12-01

    The objective of this study was to reassess the dimensional structure of a Brazilian version of the Scale of Satisfaction with Interpersonal Processes of General Medical Care, proposed originally as a one-dimensional instrument. Strict confirmatory factor analysis (CFA) and exploratory factor analysis modeled within a CFA framework (E/CFA) were used to identify the best model. An initial CFA rejected the one-dimensional structure, while an E/CFA suggested a two-dimensional structure. The latter was followed by a new CFA, which showed that the model without cross-loadings was the most parsimonious, with adequate fit indices (CFI = 0.982 and TLI = 0.988), except for RMSEA (0.062). Although the model achieved convergent validity, discriminant validity was questionable, with the square root of the average variance extracted for dimension 1 falling below the corresponding factor correlation. According to these results, there is not sufficient evidence to recommend the immediate use of the instrument, and further studies are needed for a more in-depth analysis of the postulated structures.

  7. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR rates. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, seems to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  8. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies.
Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  9. Experimentally validated computational modeling of organic binder burnout from green ceramic compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.

    The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.

  10. Validation of educational assessments: a primer for simulation and beyond.

    PubMed

    Cook, David A; Hatala, Rose

    2016-01-01

    Simulation plays a vital role in health professions assessment. This review provides a primer on assessment validation for educators and education researchers. We focus on simulation-based assessment of health professionals, but the principles apply broadly to other assessment approaches and topics. Validation refers to the process of collecting validity evidence to evaluate the appropriateness of the interpretations, uses, and decisions based on assessment results. Contemporary frameworks view validity as a hypothesis, and validity evidence is collected to support or refute the validity hypothesis (i.e., that the proposed interpretations and decisions are defensible). In validation, the educator or researcher defines the proposed interpretations and decisions, identifies and prioritizes the most questionable assumptions in making these interpretations and decisions (the "interpretation-use argument"), empirically tests those assumptions using existing or newly-collected evidence, and then summarizes the evidence as a coherent "validity argument." A framework proposed by Messick identifies potential evidence sources: content, response process, internal structure, relationships with other variables, and consequences. Another framework proposed by Kane identifies key inferences in generating useful interpretations: scoring, generalization, extrapolation, and implications/decision. We propose an eight-step approach to validation that applies to either framework: Define the construct and proposed interpretation, make explicit the intended decision(s), define the interpretation-use argument and prioritize needed validity evidence, identify candidate instruments and/or create/adapt a new instrument, appraise existing evidence and collect new evidence as needed, keep track of practical issues, formulate the validity argument, and make a judgment: does the evidence support the intended use? 
Rigorous validation first prioritizes and then empirically evaluates key assumptions in the interpretation and use of assessment scores. Validation science would be improved by more explicit articulation and prioritization of the interpretation-use argument, greater use of formal validation frameworks, and more evidence informing the consequences and implications of assessment.

  11. [The validation of the process and the results of an information system in primary care].

    PubMed

    Bolíbar Ribas, B; Juncosa Font, S

    1992-01-01

    The information needs of primary health care center planning and management, together with the poor situation we started from, have generated a large number of information systems which, as a general rule, have not been sufficiently evaluated. Since 1986, the Area de Gestión 7, Centro, of the ICS has operated an information system for general medicine services based on a sampling method (ANAC-2). To evaluate the quality of its information, we validated several aspects of its process and content. We analyze the problems that arose while collecting data from nine centers over six months, and compare the system's information content with that of a reference system. To evaluate concordance, we used a graphic representation of the differences between the two systems with respect to their mean value, together with the "limits of agreement". Regarding data-collection problems, two centers showed non-fulfillment of the observation calendar above 20%, and the logical divergences were unimportant. The distribution of visit types was broadly correct, although the estimate of the total number of visits deviated by more than 20% in two centers. For the activity indicators, the reference system tended to give lower average values than ANAC-2, with the exception of prescriptions per visit. For referrals and prescriptions, the use of different information sources by the two systems produced average differences of 3.3 referrals/100 visits and 0.8 prescriptions/visit, respectively. In general, the limits of agreement are wide, and they become unacceptable for laboratory tests. The study is evaluated positively, since it detects problem areas that can be modified or that require further study. The importance of validating information systems is emphasized, despite the difficulties involved.
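The "limits of agreement" used in this record are Bland and Altman's method for comparing two measurement systems. A minimal sketch with hypothetical paired values (not the study's data):

```python
import math

def limits_of_agreement(system_a, system_b):
    """Bland-Altman limits of agreement for paired measurements:
    mean difference +/- 1.96 * SD of the differences. Roughly 95% of
    differences between the two systems are expected to fall inside."""
    diffs = [a - b for a, b in zip(system_a, system_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

Wide limits, as the study found for laboratory indicators, mean the two systems cannot be used interchangeably even if their average values agree.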

  12. Sleep apps: what role do they play in clinical medicine?

    PubMed

    Lorenz, Christopher P; Williams, Adrian J

    2017-11-01

    Today's smartphones boast more computing power than the Apollo Guidance Computer. Given the ubiquity and popularity of smartphones, are we already carrying around miniaturized sleep labs in our pockets? There is still a lack of validation studies for consumer sleep technologies in general, and for apps that monitor sleep in particular. To close this gap, multidisciplinary teams are needed that focus on feasibility work at the intersection of software engineering, data science, and clinical sleep medicine. To date, no smartphone app for monitoring sleep through movement sensors has been successfully validated against polysomnography, despite the role and validity of actigraphy in sleep medicine having been well established. A missing separation of concerns, not methodology, is the key limiting factor: the two essential steps in the monitoring process, data collection and scoring, are chained together inside a black box due to the closed nature of consumer devices. This leaves researchers with little room for influence and no access to raw data. Multidisciplinary teams that wield complete control over the sleep monitoring process are sorely needed.

  13. DOD Financial Management: Challenges in Attaining Audit Readiness and Improving Business Processes and Systems

    DTIC Science & Technology

    2012-04-18

    enterprise architecture and transition plan, and improved investment control processes. This statement is primarily based on GAO’s prior work...business system investments , they had not yet performed the key step of validating assessment results. GAO has made prior recommendations to address...be prepared no later than March 1, 1997. See 31 U.S.C. § 3515. 2An agency’s general fund accounts are those accounts in the U.S. Treasury holding

  14. Development and pilot validation of a sensory reactivity scale for adults with high functioning autism spectrum conditions: Sensory Reactivity in Autism Spectrum (SR-AS).

    PubMed

    Elwin, Marie; Schröder, Agneta; Ek, Lena; Kjellin, Lars

    2016-01-01

    Unusual reactions to sensory stimuli are experienced by 90-95% of people with an autism spectrum condition (ASC). Self-reported sensory reactivity in ASC has mainly been measured with generic questionnaires developed and validated on data from the general population. Interest in sensory reactivity in ASC increased after the inclusion of hyper- and hypo-reactivity, together with unusual sensory interest, as diagnostic markers of ASC in the DSM-5. Our aim was to develop and pilot validate a self-report questionnaire designed from first-hand descriptions by the target group of adults diagnosed with high functioning ASC. Psychometric properties of the questionnaire were evaluated on a sample of participants with ASC diagnoses (N = 71) and a random sample from the general population (N = 162). The Sensory Reactivity in Autism Spectrum (SR-AS) questionnaire is intended to be used as a screening tool in diagnostic processes with adults and to support the adoption of compensating strategies and environmental adjustments. Internal consistency was high for both the SR-AS and its subscales: the total scale Cronbach's alpha was 0.96 and the subscale alphas were ≥ 0.80. Confirmatory factor analysis (CFA) showed the best fit for a four-factor model of inter-correlated factors: hyper- and hypo-reactivity, strong sensory interest, and a sensory/motor factor. The questionnaire discriminated well between ASC-diagnosed participants and participants from the general population. The SR-AS displayed good internal consistency and discriminatory power, and promising factorial validity.
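Cronbach's alpha values like those reported in several records here are computed from item-level scores. A minimal pure-Python sketch with hypothetical data (not the SR-AS items):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of item-score lists, one per questionnaire item,
    each with one score per respondent:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    def var(xs):  # sample variance with n - 1 denominator
        n = len(xs)
        m = sum(xs) / n
        return sum((x - m) ** 2 for x in xs) / (n - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))
```

Alpha approaches 1 as items covary strongly; values around 0.9 or above, as reported for the SR-AS total scale, indicate high internal consistency (and sometimes item redundancy).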

  15. Assessing teachers' positive psychological functioning at work: Development and validation of the Teacher Subjective Wellbeing Questionnaire.

    PubMed

    Renshaw, Tyler L; Long, Anna C J; Cook, Clayton R

    2015-06-01

    This study reports on the initial development and validation of the Teacher Subjective Wellbeing Questionnaire (TSWQ) with 2 samples of educators: a general sample of 185 elementary and middle school teachers, and a target sample of 21 elementary school teachers experiencing classroom management challenges. The TSWQ is an 8-item self-report instrument for assessing teachers' subjective wellbeing, which is operationalized via subscales measuring school connectedness and teaching efficacy. The conceptualization and development processes underlying the TSWQ are described, and results from a series of preliminary psychometric and exploratory analyses are reported to establish initial construct validity. Findings indicated that the TSWQ was characterized by 2 conceptually sound latent factors, that both subscales and the composite scale demonstrated strong internal consistency, and that all scales demonstrated convergent validity with self-reported school supports and divergent validity with self-reported stress and emotional burnout. Furthermore, results indicated that TSWQ scores did not differ according to teachers' school level (i.e., elementary vs. middle), but that they did differ according to unique school environment (e.g., 1 middle school vs. another middle school) and teacher stressors (i.e., general teachers vs. teachers experiencing classroom management challenges). Results also indicated that, for teachers experiencing classroom challenges, the TSWQ had strong short-term predictive validity for psychological distress, accounting for approximately half of the variance in teacher stress and emotional burnout. Implications for theory, research, and the practice of school psychology are discussed. (c) 2015 APA, all rights reserved.

  16. The FIRE Project

    NASA Technical Reports Server (NTRS)

    Mcdougal, D.

    1986-01-01

    The International Satellite Cloud Climatology Project's (ISCCP) First ISCCP Regional Experiment (FIRE) project is a program to validate the cloud parameters derived by the ISCCP. The 4- to 5-year program will concentrate on clouds in the continental United States, particularly cirrus and marine stratocumulus clouds. As part of the validation process, FIRE will acquire satellite, aircraft, balloon, and surface data. These data (except for the satellite data) will be amalgamated into one common data set. Plans are to generate a standardized format structure for use in the PCDS. Data collection will begin in April 1986, but will not be available to the general scientific community until 1987 or 1988. Additional pertinent data sets already reside in the PCDS. Other qualifications of the PCDS for use in this validation program were enumerated.

  17. Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures.

    PubMed

    Strand, Julia F; Brown, Violet A; Merchant, Madeleine B; Brown, Hunter E; Smith, Julia

    2018-06-19

    Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.

  18. Educational Milestone Development in the First 7 Specialties to Enter the Next Accreditation System

    PubMed Central

    Swing, Susan R.; Beeson, Michael S.; Carraccio, Carol; Coburn, Michael; Iobst, William; Selden, Nathan R.; Stern, Peter J.; Vydareny, Kay

    2013-01-01

    Background The Accreditation Council for Graduate Medical Education (ACGME) Outcome Project introduced 6 general competencies relevant to medical practice but fell short of its goal to create a robust assessment system that would allow program accreditation based on outcomes. In response, the ACGME, the specialty boards, and other stakeholders collaborated to develop educational milestones, observable steps in residents' professional development that describe progress from entry to graduation and beyond. Objectives We summarize the development of the milestones, focusing on the 7 specialties moving to the next accreditation system in July 2013, and offer evidence of their validity. Methods Specialty workgroups with broad representation used a 5-level developmental framework and incorporated information from literature reviews, specialty curricula, dialogue with constituents, and pilot testing. Results The workgroups produced richly diverse sets of milestones that reflect the community's consideration of the attributes of competence relevant to practice in each specialty. Both the development process and the milestones themselves establish a validity argument when contemporary views of validity for complex performance assessment are used. Conclusions Initial evidence for validity emerges from the development processes and the resulting milestones. Further advancing a validity argument will require research on the use of milestone data in resident assessment and program accreditation. PMID:24404235

  19. Selection of Surrogate Bacteria for Use in Food Safety Challenge Studies: A Review.

    PubMed

    Hu, Mengyi; Gurtler, Joshua B

    2017-09-01

    Nonpathogenic surrogate bacteria are prevalently used in a variety of food challenge studies in place of foodborne pathogens such as Listeria monocytogenes, Salmonella, Escherichia coli O157:H7, and Clostridium botulinum because of safety and sanitary concerns. Surrogate bacteria should have growth characteristics and/or inactivation kinetics similar to those of target pathogens under given conditions in challenge studies. It is of great importance to carefully select and validate potential surrogate bacteria when verifying microbial inactivation processes. A validated surrogate responds similar to the targeted pathogen when tested for inactivation kinetics, growth parameters, or survivability under given conditions in agreement with appropriate statistical analyses. However, a considerable number of food studies involving putative surrogate bacteria lack convincing validation sources or adequate validation processes. Most of the validation information for surrogates in these studies is anecdotal and has been collected from previous publications but may not be sufficient for given conditions in the study at hand. This review is limited to an overview of select studies and discussion of the general criteria and approaches for selecting potential surrogate bacteria under given conditions. The review also includes a list of documented bacterial pathogen surrogates and their corresponding food products and treatments to provide guidance for future studies.

  20. Cultural adaptation of the Tuberculosis-related stigma scale to Brazil.

    PubMed

    Crispim, Juliane de Almeida; Touso, Michelle Mosna; Yamamura, Mellina; Popolin, Marcela Paschoal; Garcia, Maria Concebida da Cunha; Santos, Cláudia Benedita Dos; Palha, Pedro Fredemir; Arcêncio, Ricardo Alexandre

    2016-06-01

    The process of stigmatization associated with TB has been undervalued in national research, even though this social aspect is important in the control of the disease, especially in marginalized populations. This paper presents the stages of the cultural adaptation to Brazil of the Tuberculosis-related stigma scale for TB patients. It is a methodological study in which the items of the scale were translated and back-translated, with semantic validation involving 15 individuals from the target population. After translation, the reconciled back-translated version was compared with the original version by the project coordinator in Southern Thailand, who approved the final version in Brazilian Portuguese. The semantic validation conducted with TB patients showed that, in general, the scale was well accepted and easily understood by the participants.

  1. Confirmatory factor analysis of the Feeding Emotions Scale. A measure of parent emotions in the context of feeding.

    PubMed

    Frankel, Leslie; Fisher, Jennifer O; Power, Thomas G; Chen, Tzu-An; Cross, Matthew B; Hughes, Sheryl O

    2015-08-01

    Assessing parent affect is important because studies examining the parent-child dyad have shown that parent affect has a profound impact on parent-child interactions and related outcomes. Although some measures exist that assess general affect during daily life, to date only a few tools assess parent affect in the context of feeding. The aim of this study was to develop an instrument to measure parent affect specific to the feeding context and to determine its validity and reliability. A brief instrument consisting of 20 items was developed that specifically asks how parents feel during the feeding process; it draws on the structure of a well-validated general affect measure. A total of 296 Hispanic and Black Head Start parents of preschoolers completed the Feeding Emotions Scale along with other parent-report measures as part of a larger study designed to better understand feeding interactions during the dinner meal. Confirmatory factor analysis supported a two-factor model with independent subscales of positive affect and negative affect (Cronbach's alphas of 0.85 and 0.84, respectively). Concurrent and convergent construct validity was evaluated by correlating the subscales of the Feeding Emotions Scale with positive and negative emotionality from the Differential Emotions Scale, a measure of general adult emotions. Concurrent and convergent criterion validity was evaluated by testing mean differences in affect across parent feeding styles using ANOVA. A significant difference was found across maternal weight status for positive feeding affect. The resulting validated measure can be used to assess parent affect in studies of feeding to better understand how interactions during feeding may impact the development of child eating behaviors and possibly weight status. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Kinematics, influence functions and field quantities for disturbance propagation from moving disturbance sources

    NASA Technical Reports Server (NTRS)

    Das, A.

    1984-01-01

    A unified method is presented for deriving the influence functions of moving singularities, which determine the field quantities in aerodynamics and aeroacoustics. The moving singularities comprise volume and surface distributions having arbitrary orientations in space and relative to the trajectory. The result is one generally valid formula for the influence functions, which reveals some universal relationships and remarkable properties of the disturbance fields. The derivations are fully consistent with the physical processes in the propagation field, so the treatment yields new descriptions of some standard concepts. The treatment is uniformly valid for subsonic and supersonic Mach numbers.

  3. An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography

    PubMed Central

    Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang

    2014-01-01

    As an advanced process detection technology, electrical impedance tomography (EIT) has attracted wide attention and study in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion for evaluating different preprocessing schemes. In this paper, an EIT data preprocessing method based on root transformation of all measured data is proposed and evaluated by two indexes constructed from the root-transformed EIT measurements. By finding the optima of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. For a theoretical model, the optimal rooting times of the two indexes lie in [0.23, 0.33] and [0.22, 0.35], respectively. The factors that affect the correctness of the proposed method are also analyzed. Measured-data preprocessing is necessary and helpful for any imaging process, so the proposed method can be widely applied. Experimental results validate the two proposed indexes. PMID:25165735
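    One reading of the method described above is that the "rooting" preprocessing amounts to raising each measured value to a fractional power r. The sketch below illustrates that interpretation with hypothetical voltages; the exponent 0.3 is chosen from the reported optimal range, and none of the numbers come from the paper.

```python
# Sketch of a "rooting" (fractional-power) preprocessing step for EIT
# boundary-voltage data, as one interpretation of the method above.
# Voltages and exponent are illustrative only.

def root_transform(measured, r):
    """Apply the power (root) transform v -> v**r element-wise."""
    return [v ** r for v in measured]

voltages = [0.8, 0.12, 0.035, 0.5, 0.009]   # hypothetical measurements
preprocessed = root_transform(voltages, 0.3)

# For values in (0, 1), small values are lifted proportionally more
# than large ones, compressing the dynamic range of the data.
```

The compression of the dynamic range is the point: after the transform, the ratio between the strongest and weakest measurements is much smaller than before.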

  4. Design and Evaluation Process of a Personal and Motive-Based Competencies Questionnaire in Spanish-Speaking Contexts.

    PubMed

    Batista-Foguet, Joan; Sipahi-Dantas, Alaide; Guillén, Laura; Martínez Arias, Rosario; Serlavós, Ricard

    2016-03-22

    Most questionnaires used for managerial purposes have been developed in Anglo-Saxon countries and then adapted for other cultures, a process that remains controversial. This paper addresses the need for more culturally sensitive assessment instruments in the field of human resources while also addressing the methodological issues that scientists and practitioners face in questionnaire development. First, we present the development process of a Personal and Motive-based competencies questionnaire targeted at Spanish-speaking countries. Second, we address the validation process by guiding the reader through testing the questionnaire's construct validity. We performed two studies: a first study with 274 experts and practitioners of competency development and a definitive study with 482 members of the general public. Our results support a model of nineteen competencies grouped into four higher-order factors. To ensure valid construct comparisons, we tested factorial invariance across gender and work experience. Subsequent analyses found that women self-rate significantly higher than men on only two of the nineteen competencies, empathy (p < .001) and service orientation (p < .05). The effect of work experience was significant for twelve competencies (p < .001), with less experienced workers self-rating higher than experienced workers. Finally, we derive theoretical and practical implications.

  5. Validation of a home food inventory among low-income Spanish- and Somali-speaking families.

    PubMed

    Hearst, Mary O; Fulkerson, Jayne A; Parke, Michelle; Martin, Lauren

    2013-07-01

    To refine and validate an existing home food inventory (HFI) for low-income Somali- and Spanish-speaking families. Formative assessment was conducted using two focus groups, followed by revisions of the HFI, translation of written materials and instrument validation in participants’ homes. Twin Cities Metropolitan Area, Minnesota, USA. Thirty low-income families with children of pre-school age (fifteen Spanish-speaking; fifteen Somali-speaking) completed the HFI simultaneously with, but independently of, a trained staff member. Analysis consisted of calculation of both item-specific and average food group kappa coefficients, specificity, sensitivity and Spearman’s correlation between participants’ and staff scores as a means of assessing criterion validity of individual items, food categories and the obesogenic score. The formative assessment revealed the need for few changes/additions for food items typically found in Spanish-speaking households. Somali-speaking participants requested few additions, but many deletions, including frozen processed food items, non-perishable produce and many sweets as they were not typical food items kept in the home. Generally, all validity indices were within an acceptable range, with the exception of values associated with items such as ‘whole wheat bread’ (k = 0.16). The obesogenic score (presence of high-fat, high-energy foods) had high criterion validity with k = 0.57, sensitivity = 91.8%, specificity = 70.6% and Spearman correlation = 0.78. The revised HFI is a valid assessment tool for use among Spanish and Somali households. This instrument refinement and validation process can be replicated with other population groups.
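    The item-level agreement statistics used in this validation (kappa, sensitivity, specificity) can be computed directly from paired binary reports. A minimal sketch follows, using made-up data rather than the study's, with the staff inventory treated as the criterion.

```python
# Sketch of the item-level agreement statistics used to validate the HFI:
# Cohen's kappa, sensitivity, and specificity for one binary food item,
# comparing a participant's report against a trained staff member's.
# The data below are illustrative, not from the study.

def agreement_stats(participant, staff):
    """participant, staff: lists of 0/1 (item absent/present) per household."""
    n = len(participant)
    tp = sum(1 for p, s in zip(participant, staff) if p == 1 and s == 1)
    tn = sum(1 for p, s in zip(participant, staff) if p == 0 and s == 0)
    fp = sum(1 for p, s in zip(participant, staff) if p == 1 and s == 0)
    fn = sum(1 for p, s in zip(participant, staff) if p == 0 and s == 1)
    po = (tp + tn) / n                      # observed agreement
    # expected agreement under chance, from the marginal frequencies
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    sensitivity = tp / (tp + fn)            # staff report is the criterion
    specificity = tn / (tn + fp)
    return kappa, sensitivity, specificity

participant = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
staff       = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
k, sens, spec = agreement_stats(participant, staff)
```

Kappa corrects the raw agreement for what two raters would agree on by chance alone, which is why an item with very unbalanced prevalence (such as the 'whole wheat bread' item above) can show low kappa despite high raw agreement.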

  6. Factors in the Admissions Process Influencing Persistence in a Master's of Science Program in Marine Science

    ERIC Educational Resources Information Center

    Dore, Melissa L.

    2017-01-01

    This applied dissertation was conducted to provide the graduate program in marine sciences a valid predictor for success in the admissions scoring systems that include the general Graduate Record Exam. The dependent variable was persistence: successfully graduating from the marine sciences master's programs. This dissertation evaluated other…

  7. Inside The Zone of Proximal Development: Validating A Multifactor Model Of Learning Potential With Gifted Students And Their Peers

    ERIC Educational Resources Information Center

    Kanevsky, Lannie; Geake, John

    2004-01-01

    Kanevsky (1995b) proposed a model of learning potential based on Vygotsky's notions of "good learning" and the zone of proximal development. This study investigated the contributions of general knowledge, information processing efficiency, and metacognition to differences in the learning potential of 5 gifted and nongifted students.…

  8. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics; thus it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrates YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applies an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, on the basis of prediction (classification) accuracy and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, including FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance depends strongly on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  9. Interaction of Theory and Practice to Assess External Validity.

    PubMed

    Leviton, Laura C; Trujillo, Mathew D

    2016-01-18

    Variations in local context bedevil the assessment of external validity: the ability to generalize about effects of treatments. For evaluation, the challenges of assessing external validity are intimately tied to the translation and spread of evidence-based interventions. This makes external validity a question for decision makers, who need to determine whether to endorse, fund, or adopt interventions that were found to be effective and how to ensure high quality once they spread. To present the rationale for using theory to assess external validity and the value of more systematic interaction of theory and practice. We review advances in external validity, program theory, practitioner expertise, and local adaptation. Examples are provided for program theory, its adaptation to diverse contexts, and generalizing to contexts that have not yet been studied. The often critical role of practitioner experience is illustrated in these examples. Work is described that the Robert Wood Johnson Foundation is supporting to study treatment variation and context more systematically. Researchers and developers generally see a limited range of contexts in which the intervention is implemented. Individual practitioners see a different and often a wider range of contexts, albeit not a systematic sample. Organized and taken together, however, practitioner experiences can inform external validity by challenging the developers and researchers to consider a wider range of contexts. Researchers have developed a variety of ways to adapt interventions in light of such challenges. In systematic programs of inquiry, as opposed to individual studies, the problems of context can be better addressed. Evaluators have advocated an interaction of theory and practice for many years, but the process can be made more systematic and useful. 
Systematic interaction can set priorities for assessment of external validity by examining the prevalence and importance of context features and treatment variations. Practitioner interaction with researchers and developers can assist in sharpening program theory, reducing uncertainty about treatment variations that are consistent or inconsistent with the theory, inductively ruling out the ones that are harmful or irrelevant, and helping set priorities for more rigorous study of context and treatment variation. © The Author(s) 2016.

  10. A framework for the direct evaluation of large deviations in non-Markovian processes

    NASA Astrophysics Data System (ADS)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and to efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.

  11. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method but is valid for the simulation of Poisson processes in general. The analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
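    The abstract does not reproduce the error expression itself, but the generic Poisson-statistics result underlying it is that the relative standard error of a rate estimated from N observed events scales as 1/sqrt(N). A toy simulation with an assumed unit event rate (none of the numbers come from the paper) illustrates the scaling:

```python
# Toy illustration of the generic Poisson-statistics result that the
# relative standard error of a rate estimated from N observed events
# scales as 1/sqrt(N).  Only the scaling is reproduced here, not the
# paper's full error expression for ion conductivity.
import math
import random

random.seed(2)

def estimate_rate(true_rate, t_total):
    """Count Poisson events of rate `true_rate` over a window `t_total`
    by summing exponential waiting times, then estimate the rate."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(true_rate)
        if t > t_total:
            break
        n += 1
    return n / t_total

def rel_spread(t_total, repeats=2000, true_rate=1.0):
    """Empirical relative standard deviation of the rate estimator."""
    ests = [estimate_rate(true_rate, t_total) for _ in range(repeats)]
    mean = sum(ests) / repeats
    var = sum((e - mean) ** 2 for e in ests) / repeats
    return math.sqrt(var) / mean

s_short, s_long = rel_spread(100.0), rel_spread(400.0)
# Quadrupling the observation window (~4x the events) should roughly
# halve the relative error, consistent with 1/sqrt(N).
```

This is why simulations that observe only a handful of diffusion events carry large statistical error bars on the derived conductivity.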

  12. Are the binary typology models of alcoholism valid in polydrug abusers?

    PubMed

    Pombo, Samuel; da Costa, Nuno F; Figueira, Maria L

    2015-01-01

    To evaluate the dichotomy of type I/II and type A/B alcoholism typologies in opiate-dependent patients with a comorbid alcohol dependence problem (ODP-AP). The validity assessment process comprised the information regarding the history of alcohol use (internal validity), cognitive-behavioral variables regarding substance use (external validity), and indicators of treatment during 6-month follow-up (predictive validity). ODP-AP subjects classified as type II/B presented an early and much more severe drinking problem and a worse clinical prognosis when considering opiate treatment variables as compared with ODP-AP subjects defined as type I/A. Furthermore, type II/B patients endorse more general positive beliefs and expectancies related to the effect of alcohol and tend to drink heavily across several intra- and interpersonal situations as compared with type I/A patients. These findings confirm two different forms of alcohol dependence, recognized as a low-severity/vulnerability subgroup and a high-severity/vulnerability subgroup, in an opiate-dependent population with a lifetime diagnosis of alcohol dependence.

  13. Cross-cultural adaptation, validation and reliability of the brazilian version of the Richmond Compulsive Buying Scale.

    PubMed

    Leite, Priscilla; Rangé, Bernard; Kukar-Kiney, Monika; Ridgway, Nancy; Monroe, Kent; Ribas Junior, Rodolfo; Landeira Fernandez, J; Nardi, Antonio Egidio; Silva, Adriana

    2013-03-01

    To present the process of transcultural adaptation of the Richmond Compulsive Buying Scale to Brazilian Portuguese. For the semantic adaptation step, the scale was translated to Portuguese and then back-translated to English by two professional translators and one psychologist, without any communication between them. The scale was then applied to 20 participants from the general population for language adjustments. For the construct validation step, an exploratory factor analysis was performed, using the scree plot test, principal component analysis for factor extraction, and Varimax rotation. For convergent validity, the correlation matrix was analyzed through Pearson's coefficient. The scale showed easy applicability, satisfactory internal consistency (Cronbach's alpha=.87), and a high correlation with other rating scales for compulsive buying disorder, indicating that it is suitable to be used in the assessment and diagnosis of compulsive buying disorder, as it presents psychometric validity. The Brazilian Portuguese version of the Richmond Compulsive Buying Scale has good validity and reliability.
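    Cronbach's alpha, the internal-consistency index reported above (alpha = .87), is computed from the individual item variances and the variance of the total score. A minimal sketch with made-up item scores (not the scale's actual data):

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency index
# reported for the Brazilian Richmond Compulsive Buying Scale.
# The item scores below are made up for illustration.

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item,
    aligned across the same respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three highly correlated items -> alpha close to 1.
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 5, 5],
         [1, 3, 3, 4, 4]]
alpha = cronbach_alpha(items)
```

When items move together, the total-score variance grows faster than the sum of item variances, pushing alpha towards 1; uncorrelated items push it towards 0.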

  14. [Validation of the knowledge and attitudes of health professionals in the Living Will Declaration process].

    PubMed

    Contreras-Fernández, Eugenio; Barón-López, Francisco Javier; Méndez-Martínez, Camila; Canca-Sánchez, José Carlos; Cabezón Rodríguez, Isabel; Rivas-Ruiz, Francisco

    2017-04-01

    To evaluate the validity and reliability of a questionnaire on health professionals' knowledge of, and attitudes towards, the Living Will Declaration (LWD) process. Cross-sectional study structured into 3 phases: (i) a pilot paper questionnaire to assess losses and adjustment problems; (ii) assessment of validity and internal reliability; and (iii) assessment of the stability of the pre-filtered questionnaire (test-retest). Costa del Sol (Malaga) Health Area, January 2014 to April 2015. Healthcare professionals of the Costa del Sol Primary Care District and the Costa del Sol Health Agency. There were 391 (23.6%) responses, and 100 professionals participated in the stability assessment (83 responses). The questionnaire consisted of 2 parts: (i) Knowledge (5 dimensions and 41 items), and (ii) Attitudes (2 dimensions and 17 items). In the pilot study, no item had a loss rate over 10%. In the validity and reliability phase, the questionnaire was reduced to 41 items (29 on knowledge and 12 on attitudes). In the stability phase, all items evaluated either met the requirement of a kappa higher than 0.2 or had a percentage of absolute agreement exceeding 75%. The questionnaire will identify the current status and areas for improvement in the healthcare setting, and will thus help improve the culture of the LWD process in the general population. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  15. A Latent Variables Examination of Processing Speed, Response Inhibition, and Working Memory during Typical Development

    PubMed Central

    McAuley, Tara; White, Desirée

    2010-01-01

    The present study addressed three related aims: (1) to replicate and extend previous work regarding the non-unitary nature of processing speed, response inhibition, and working memory during development; (2) to quantify the rate at which processing speed, response inhibition, and working memory develop and the extent to which the development of the latter abilities reflects general changes in processing speed; and (3) to evaluate whether commonly used tasks of processing speed, response inhibition, and working memory are valid and reliable when used with a developmentally diverse group. To address these aims, a latent variables approach was used to analyze data from 147 participants 6 to 24 years of age. Results showed that processing speed, response inhibition, and working memory were separable abilities and that the extent of this separability was stable across the age range of participants. All three constructs improved as a function of age; however, only the effect of age on working memory remained significant after processing speed was controlled. The psychometric properties of the tasks used to assess the constructs were age invariant, validating their use in studies of executive development. PMID:20888572

  16. Statistical analysis of general aviation VG-VGH data

    NASA Technical Reports Server (NTRS)

    Clay, L. E.; Dickey, R. L.; Moran, M. S.; Payauys, K. W.; Severyn, T. P.

    1974-01-01

    To represent the loads spectra of general aviation aircraft operating in the Continental United States, VG and VGH data collected since 1963 in eight operational categories were processed and analyzed. Adequacy of data sample and current operational categories, and parameter distributions required for valid data extrapolation were studied along with envelopes of equal probability of exceeding the normal load factor (n sub z) versus airspeed for gust and maneuver loads and the probability of exceeding current design maneuver, gust, and landing impact n sub z limits. The significant findings are included.

  17. Self-validating type C thermocouples to 2300 °C using high temperature fixed points

    NASA Astrophysics Data System (ADS)

    Pearce, J. V.; Elliott, C. J.; Machin, G.; Ongrai, O.

    2013-09-01

    Above 1500 °C, tungsten-rhenium (W-Re) thermocouples are the most commonly used contact thermometers because they are practical and inexpensive. In general, however, their loss of calibration is very rapid, and, because they embrittle at high temperature, it is generally not possible to remove them from the process environments in which they are used for recalibration. Even if removal for recalibration were possible, it would be of, at best, very limited use due to large inhomogeneity effects. Ideally, these thermocouples require some mechanism for monitoring their drift in situ. In this study, we describe self-validation of Type C (W5%Re/W26%Re) thermocouples by means of miniature high-temperature fixed points: crucibles containing, respectively, Co-C, Pt-C, Ru-C, and Ir-C eutectic alloys. An overview of developments in this area is presented.

  18. 41 CFR 60-3.5 - General standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... should avoid making employment decisions on the basis of measures of knowledges, skills, or abilities... General standards for validity studies. A. Acceptable types of validity studies. For the purposes of... of these guidelines, section 14 of this part. New strategies for showing the validity of selection...

  19. A new dataset validation system for the Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive (PSA) is the official archive for the Mars Express mission; it received its first data at the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS). The PI teams are responsible for analyzing and calibrating the instrument data, for producing reduced and calibrated data, and for the scientific validation of these data. ESA is responsible for long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To this end, an archive peer review is used to control the quality of the Mars Express science data archiving process; however, a full validation of the archive's content has been missing. An independent review board recently recommended that the completeness of the archive, as well as the consistency of the delivered data, be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system. This tool aims to improve the quality of the data and services provided to the scientific community through the PSA, and shall make it possible to track anomalies in datasets and to control their completeness. It shall ensure that PSA end-users (1) can rely on the results of their queries, (2) get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks dataset content against pre-defined top-level criteria representing the general characteristics of good-quality datasets. The checked content includes the data and all types of information that are essential for deriving scientific results or that interface with the PSA database.
The validation software tool is a multi-mission tool that has been designed to provide the user with the flexibility of defining and implementing various types of validation criteria, to iteratively and incrementally validate datasets, and to generate validation reports.

  20. Experimental Test of the Differential Fluctuation Theorem and a Generalized Jarzynski Equality for Arbitrary Initial States

    NASA Astrophysics Data System (ADS)

    Hoang, Thai M.; Pan, Rui; Ahn, Jonghoon; Bang, Jaehoon; Quan, H. T.; Li, Tongcang

    2018-02-01

    Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry, and physics but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem using an optically levitated nanosphere in both underdamped and overdamped regimes and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.
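    For reference, the generalized Jarzynski equality tested in this work reduces, for a process starting in thermal equilibrium, to the standard Jarzynski equality, with β = 1/(k_B T), W the work done on the system, and ΔF the free-energy difference between the equilibrium states of the initial and final control parameters:

```latex
% Standard (equilibrium-initial-state) form of the Jarzynski equality:
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
```

This is the textbook form; the arbitrary-initial-state generalization verified in the experiment is not reproduced here.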

  1. Validation of a scale for assessing attitudes towards outcomes of genetic cancer testing among primary care providers and breast specialists

    PubMed Central

    N’Diaye, Khadim; Evans, D. Gareth; Harris, Hilary; Tibben, Aad; van Asperen, Christi; Schmidtke, Joerg; Nippert, Irmgard; Mancini, Julien; Julian-Reynier, Claire

    2017-01-01

    Objective To develop a generic scale for assessing attitudes towards genetic testing and to psychometrically assess these attitudes in the context of BRCA1/2 among a sample of French general practitioners, breast specialists and gyneco-obstetricians. Study design and setting Nested within the questionnaire developed for the European InCRisC (International Cancer Risk Communication Study) project were 14 items assessing expected benefits (8 items) and drawbacks (6 items) of the process of breast/ovarian genetic cancer testing (BRCA1/2). Another item assessed agreement with the statement that, overall, the expected health benefits of BRCA1/2 testing exceeded its drawbacks, thereby justifying its prescription. The questionnaire was mailed to a sample of 1,852 French doctors. Of these, 182 breast specialists, 275 general practitioners and 294 gyneco-obstetricians completed and returned the questionnaire to the research team. Principal Component Analysis, Cronbach’s α coefficient, and Pearson’s correlation coefficients were used in the statistical analyses of collected data. Results Three dimensions emerged from the respondents’ responses, and were classified under the headings: “Anxiety, Conflict and Discrimination”, “Risk Information”, and “Prevention and Surveillance”. Cronbach’s α coefficient for the 3 dimensions was 0.79, 0.76 and 0.62, respectively, and each dimension exhibited strong correlation with the overall indicator of agreement (criterion validity). Conclusions The validation process of the 15 items regarding BRCA1/2 testing revealed satisfactory psychometric properties for the creation of a new scale entitled the Attitudes Towards Genetic Testing for BRCA1/2 (ATGT-BRCA1/2) Scale. Further testing is required to confirm the validity of this tool which could be used generically in other genetic contexts. PMID:28570656

  2. Cultural adaptation into Spanish of the generalized anxiety disorder-7 (GAD-7) scale as a screening tool

    PubMed Central

    2010-01-01

    Background Generalized anxiety disorder (GAD) is a prevalent mental health condition which is underestimated worldwide. This study carried out the cultural adaptation into Spanish of the 7-item self-administered GAD-7 scale, which is used to identify probable patients with GAD. Methods The adaptation was performed by an expert panel using a conceptual equivalence process, including forward and backward translations in duplicate. Content validity was assessed by interrater agreement. Criterion validity was explored using ROC curve analysis, and sensitivity, specificity, positive predictive value and negative predictive value were determined for different cut-off values. Concurrent validity was also explored using the HAM-A, HADS, and WHO-DAS-II scales. Results The study sample consisted of 212 subjects (106 patients with GAD) with a mean age of 50.38 years (SD = 16.76). Average completion time was 2'30''. No items of the scale were left blank. Floor and ceiling effects were negligible. No patients with GAD had to be assisted to fill in the questionnaire. The scale was shown to be one-dimensional through factor analysis (explained variance = 72%). A cut-off point of 10 showed adequate values of sensitivity (86.8%) and specificity (93.4%), with AUC being statistically significant (AUC = 0.957-0.985; p < 0.001). The scale correlated significantly with the HAM-A (0.852, p < 0.001), HADS (anxiety domain, 0.903, p < 0.001), and WHO-DAS II (0.696, p < 0.001). Limitations Elderly people, particularly the very old, may need some help to complete the scale. Conclusion After the cultural adaptation process, a Spanish version of the GAD-7 scale was obtained. The validity of its content and the relevance and adequacy of its items in the Spanish cultural context were confirmed. PMID:20089179
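    The cut-off analysis reported above (sensitivity 86.8% and specificity 93.4% at a cut-off of 10) follows the usual ROC recipe: compute sensitivity and specificity at each candidate cut-off and inspect the trade-off. A sketch with hypothetical scores, not the study's data:

```python
# Sketch of how a screening cut-off such as the GAD-7's >= 10 can be
# evaluated: compute sensitivity and specificity at each candidate
# cut-off, as in a ROC analysis.  Scores below are illustrative only.

def sens_spec_at_cutoff(scores_cases, scores_controls, cutoff):
    """A subject screens positive when score >= cutoff."""
    sens = sum(s >= cutoff for s in scores_cases) / len(scores_cases)
    spec = sum(s < cutoff for s in scores_controls) / len(scores_controls)
    return sens, spec

# Hypothetical GAD-7 totals (0-21) for diagnosed cases and controls.
cases    = [12, 15, 9, 18, 11, 14, 10, 8, 16, 13]
controls = [3, 5, 8, 2, 6, 9, 4, 11, 1, 7]

table = {c: sens_spec_at_cutoff(cases, controls, c) for c in range(5, 15)}
# Raising the cut-off trades sensitivity for specificity; a screening
# tool typically picks the cut-off that balances the two acceptably.
```

Plotting sensitivity against (1 - specificity) across all cut-offs gives the ROC curve whose area (AUC) is reported in the abstract.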

  3. CheckMyMetal: a macromolecular metal-binding validation tool

    PubMed Central

    Porebski, Przemyslaw J.

    2017-01-01

    Metals are essential in many biological processes, and metal ions are modeled in roughly 40% of the macromolecular structures in the Protein Data Bank (PDB). However, a significant fraction of these structures contain poorly modeled metal-binding sites. CheckMyMetal (CMM) is an easy-to-use metal-binding site validation server for macromolecules that is freely available at http://csgid.org/csgid/metal_sites. The CMM server can detect incorrect metal assignments as well as geometrical and other irregularities in the metal-binding sites. Guidelines for metal-site modeling and validation in macromolecules are illustrated by several practical examples grouped by the type of metal. These examples show CMM users (and crystallographers in general) problems they may encounter during the modeling of a specific metal ion. PMID:28291757

  4. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involving feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
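The N-fold cross-validation procedure mentioned above can be sketched with a trivial nearest-mean classifier standing in for the network models. This is a minimal illustration, not the toolbox's implementation; all names and data are made up.

```python
# Sketch: N-fold cross-validation of a classifier's percent error rate.
# A nearest-mean classifier on 1-D features stands in for the network models.
def nearest_mean_fit(X, y):
    means = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        means[label] = sum(pts) / len(pts)
    return means

def nearest_mean_predict(means, x):
    return min(means, key=lambda label: abs(x - means[label]))

def cross_val_error(X, y, n_folds):
    n = len(X)
    errors = 0
    for k in range(n_folds):
        test_idx = set(range(k, n, n_folds))   # every n_folds-th sample held out
        train = [(x, l) for i, (x, l) in enumerate(zip(X, y)) if i not in test_idx]
        model = nearest_mean_fit([x for x, _ in train], [l for _, l in train])
        errors += sum(1 for i in test_idx if nearest_mean_predict(model, X[i]) != y[i])
    return 100.0 * errors / n                  # percent error rate over all folds

X = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95]
y = ['ok', 'ok', 'ok', 'fault', 'fault', 'fault', 'ok', 'fault']
err = cross_val_error(X, y, n_folds=4)
```

Each sample is scored exactly once by a model that never saw it during training, which is what makes the resulting error rate an honest estimate of generalization to unseen faults.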

  5. Noncognitive constructs in graduate admissions: an integrative review of available instruments.

    PubMed

    Megginson, Lucy

    2009-01-01

    In the graduate admission process, both cognitive and noncognitive instruments evaluate a candidate's potential success in a program of study. Traditional cognitive measures include the Graduate Record Examination or graduate grade point average, while noncognitive constructs such as personality, attitude, and motivation are generally measured through letters of recommendation, interviews, or personality inventories. Little consensus exists as to what criteria constitute valid and effective measurements of graduate student potential. This integrative review of available tools to measure noncognitive constructs will assist graduate faculty in identifying valid and reliable instruments that will enhance a more holistic assessment of nursing graduate candidates. Finally, as evidence-based practice begins to penetrate academic processes and as graduate faculty realize the predictive significance of noncognitive attributes, faculty can use the information in this integrative review to guide future research.

  6. The generalized second law of thermodynamics in Hořava-Lifshitz cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamil, Mubasher; Saridakis, Emmanuel N.; Setare, M.R., E-mail: mjamil@camp.nust.edu.pk, E-mail: msaridak@phys.uoa.gr, E-mail: rezakord@ipm.ir

    2010-11-01

    We investigate the validity of the generalized second law of thermodynamics in a universe governed by Hořava-Lifshitz gravity. Under the equilibrium assumption, that is in the late-time cosmological regime, we calculate separately the entropy time-variation for the matter fluid and, using the modified entropy relation, that of the apparent horizon itself. We find that under detailed balance the generalized second law is generally valid for flat and closed geometry and it is conditionally valid for an open universe, while beyond detailed balance it is only conditionally valid for all curvatures. Furthermore, we also follow the effective approach showing that it can lead to misleading results. The non-complete validity of the generalized second law could either provide a suggestion for its different application, or act as an additional problematic feature of Hořava-Lifshitz gravity.

  7. Analytical Concept: Development of a Multinational Information Strategy

    DTIC Science & Technology

    2008-10-31

    ...America. These priority focus areas will become subject to experimentation in a number of consecutive phases of the 2008 Major Integrating Event (MIE)...factor in general, in the MNE 5 CD&E program, focused on supporting concept validation in the 2008 MIE. The Analytical Concept outlines processes and

  8. Health Facilities: New York State's Oversight of Nursing Homes and Hospitals. Report to the Honorable Bill Green, House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, New York, NY. Regional Office.

    At the request of Congressman William Green, the General Accounting Office (GAO) evaluated the validity of allegations about deficiencies in the New York State Department of Health's nursing home and hospital inspection processes for certification for participation in the Medicare and Medicaid programs. Health Care Financing Administration and…

  9. An extended protocol for usability validation of medical devices: Research design and reference model.

    PubMed

    Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten

    2017-05-01

    This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings: first, the lack of methods to closely trace interaction sequences and derive performance measures; second, a prevailing focus on cross-sectional validation studies, which ignores the issues of learnability and training. The U.S. Food and Drug Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure, 'normative path deviations', is introduced that is useful for both quantitative and qualitative usability studies, and (2) a longitudinal, completely within-subject study design is presented that assesses learnability and training effects and allows analysis of the diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.
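The abstract does not give the exact definition of 'normative path deviations', so the sketch below makes an assumption: it operationalizes the measure as the edit distance between the observed interaction sequence and the normative sequence for the task. The action names are invented for illustration.

```python
# Sketch (assumption, not the paper's definition): 'normative path deviations'
# counted as the Levenshtein distance between an observed action sequence and
# the normative sequence -- insertions, deletions, and substitutions needed.
def deviations(observed, normative):
    m, n = len(observed), len(normative)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if observed[i - 1] == normative[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete an observed action
                          d[i][j - 1] + 1,        # skip a normative action
                          d[i - 1][j - 1] + cost) # match or substitute
    return d[m][n]

normative = ["power_on", "set_rate", "confirm_rate", "start"]
observed  = ["power_on", "set_rate", "start", "confirm_rate", "start"]
dev = deviations(observed, normative)
```

A per-trial deviation count like this yields a process measure that can feed both a quantitative model (e.g. as a response variable in a mixed-effects regression) and a qualitative review of where users stray from the intended path.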

  10. Hydrological Validation of The Lpj Dynamic Global Vegetation Model - First Results and Required Actions

    NASA Astrophysics Data System (ADS)

    Haberlandt, U.; Gerten, D.; Schaphoff, S.; Lucht, W.

    Dynamic global vegetation models are developed with the main purpose of describing the spatio-temporal dynamics of vegetation at the global scale. Increasing concern about climate change impacts has put the focus of recent applications on the simulation of the global carbon cycle. Water is a prime driver of biogeochemical and biophysical processes, thus an appropriate representation of the water cycle is crucial for their proper simulation. However, these models usually lack thorough validation of the water balance they produce. Here we present a hydrological validation of the current version of the LPJ (Lund-Potsdam-Jena) model, a dynamic global vegetation model operating at daily time steps. Long-term simulated runoff and evapotranspiration are compared to literature values, results from three global hydrological models, and discharge observations from various macroscale river basins. It was found that the seasonal and spatial patterns of the LPJ-simulated average values correspond well both with the measurements and the results from the stand-alone hydrological models. However, a general underestimation of runoff occurs, which may be attributable to the low input dynamics of precipitation (equal distribution within a month), to the simulated vegetation pattern (potential vegetation without anthropogenic influence), and to some generalizations of the hydrological components in LPJ. Future research will focus on a better representation of the temporal variability of climate forcing, improved description of hydrological processes, and on the consideration of anthropogenic land use.

  11. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities.
Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.

  12. Validation of the American version of the CareGiver Oncology Quality of Life (CarGOQoL) questionnaire.

    PubMed

    Kaveney, Sarah C; Baumstarck, Karine; Minaya-Flores, Patricia; Shannon, Tarrah; Symes, Philip; Loundou, Anderson; Auquier, Pascal

    2016-05-28

    The CareGiver Oncology Quality of Life (CarGOQoL) questionnaire, a 29-item, multidimensional, self-administered questionnaire, was validated using a large French sample. We report the linguistic validation process and the psychometric validity of the English version of the CarGOQoL in the United States. The translation process consisted of 3 consecutive steps: forward-backward translation, acceptability testing, and cognitive interviews. The psychometric testing was applied to caregivers of consecutive patients with representative cancers who were recruited from the Regional Cancer Center in northwestern Pennsylvania. All individuals completed the CarGOQoL at baseline, day 30, and day 90. Internal consistency, reliability, external validity, reproducibility, and sensitivity to change were tested. The translated version was validated on a total of 87 American cancer caregivers. The dimensions of the CarGOQoL generally demonstrated high internal consistency (Cronbach's alpha > 0.70 for all but four domain scores). External validity testing revealed that the CarGOQoL index score correlated significantly with all SF-36 dimension scores except the physical composite score (Pearson's correlation: 0.28-0.70). Reproducibility was satisfactory at day 30 (intraclass correlation coefficient: 0.46-0.94) and day 90 (0.43-0.92). Four specific dimensions of the CarGOQoL showed responsiveness: psychological well-being, relationships with the health care system, social support, and finances. The American version of the CarGOQoL constitutes a useful instrument to measure QoL in caregivers of cancer patients in the United States.

  13. Validation of general job satisfaction in the Korean Labor and Income Panel Study.

    PubMed

    Park, Shin Goo; Hwang, Sang Hee

    2017-01-01

    The purpose of this study is to assess the validity and reliability of general job satisfaction (JS) in the Korean Labor and Income Panel Study (KLIPS). We used the data from the 17th wave (2014) of the nationwide KLIPS, which selected a representative panel sample of Korean households and individuals aged 15 or older residing in urban areas. We included in this study 7679 employed subjects (4529 males and 3150 females). The general JS instrument consisted of five items rated on a scale from 1 (strongly disagree) to 5 (strongly agree). The general JS reliability was assessed using the corrected item-total correlation and Cronbach's alpha coefficient. The validity of general JS was assessed using confirmatory factor analysis (CFA) and Pearson's correlation. The corrected item-total correlations ranged from 0.736 to 0.837. Therefore, no items were removed. Cronbach's alpha for general JS was 0.925, indicating excellent internal consistency. The CFA of the general JS model showed a good fit. Pearson's correlation coefficients for convergent validity showed moderate or strong correlations. The results obtained in our study confirm the validity and reliability of general JS.
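The internal-consistency statistic used in the study above, Cronbach's alpha, is straightforward to compute from a respondent-by-item matrix. The response matrix below is made up for illustration; it is not the KLIPS data.

```python
# Sketch: Cronbach's alpha for a 5-item scale (rows = respondents, columns =
# items, each rated 1-5). Illustrative data only.
def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    cols = list(zip(*rows))                # per-item responses
    def pvar(xs):                          # population variance (the n vs n-1
        m = sum(xs) / len(xs)              # factor cancels in the ratio below)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var = sum(pvar(c) for c in cols)            # sum of item variances
    total_var = pvar([sum(r) for r in rows])         # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [
    [4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
]
alpha = cronbach_alpha(responses)
```

A value above 0.9, as in the abstract's reported 0.925, is conventionally read as excellent internal consistency.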

  14. The Stroop test as a measure of performance validity in adults clinically referred for neuropsychological assessment.

    PubMed

    Erdodi, Laszlo A; Sagar, Sanya; Seke, Kristian; Zuccato, Brandon G; Schwartz, Eben S; Roth, Robert M

    2018-06-01

    This study was designed to develop performance validity indicators embedded within the Delis-Kaplan Executive Function Systems (D-KEFS) version of the Stroop task. Archival data from a mixed clinical sample of 132 patients (50% male; M Age = 43.4; M Education = 14.1) clinically referred for neuropsychological assessment were analyzed. Criterion measures included the Warrington Recognition Memory Test-Words and 2 composites based on several independent validity indicators. An age-corrected scaled score ≤6 on any of the 4 trials reliably differentiated psychometrically defined credible and noncredible response sets with high specificity (.87-.94) and variable sensitivity (.34-.71). An inverted Stroop effect was less sensitive (.14-.29), but comparably specific (.85-.90) to invalid performance. Aggregating the newly developed D-KEFS Stroop validity indicators further improved classification accuracy. Failing the validity cutoffs was unrelated to self-reported depression or anxiety. However, it was associated with elevated somatic symptom report. In addition to processing speed and executive function, the D-KEFS version of the Stroop task can function as a measure of performance validity. A multivariate approach to performance validity assessment is generally superior to univariate models. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Validation of MIL-F-9490D. General Specification for Flight Control System for Piloted Military Aircraft. Volume III. C-5A Heavy Logistics Transport Validation

    DTIC Science & Technology

    1977-04-01

    AFFDL-TR-77-7, Volume III: Validation of MIL-F-9490D, General Specification for Flight Control System for Piloted Military Aircraft. Volume III covers the C-5A heavy logistics transport validation of Military Specification MIL-F-9490D (USAF), "Flight Control Systems - Design, Installation and Test of Piloted Aircraft, General Specifications for," dated 6 June 1975.

  16. Linguistic validation of the US Spanish work productivity and activity impairment questionnaire, general health version.

    PubMed

    Gawlicki, Mary C; Reilly, Margaret C; Popielnicki, Ana; Reilly, Kate

    2006-01-01

    There are no measures of health-related absenteeism and presenteeism validated for use in the large and increasing US Spanish-speaking population. Before using a Spanish translation of an available English-language questionnaire, the linguistic validity of the Spanish version must be established to ensure its conceptual equivalence to the original and its cultural appropriateness. The objective of this study was to evaluate the linguistic validity of the US Spanish version of the Work Productivity and Activity Impairment questionnaire, General Health Version (WPAI:GH). A US Spanish translation of the US English WPAI:GH was created through an iterative process of creating harmonized forward and back translations by independent translators. Spanish-speaking and English-speaking subjects residing in the US self-administered the WPAI:GH in their primary language and were subsequently debriefed by a bilingual (Spanish-English) interviewer. US Spanish subjects (N = 31) and English subjects (N = 35), stratified equally by educational level (with and without a high school degree), participated in the study. The WPAI-GH item comprehension rate was 98.6% for Spanish and 99.6% for English. Response revision rates during debriefing were 1.6% for Spanish and 0.5% for English. Responses to hypothetical scenarios indicated that both language versions adequately differentiate sick time taken for health and non-health reasons and between absenteeism and presenteeism. Linguistic validity of the US Spanish translation of the WPAI:GH was established among a diverse US Spanish-speaking population, including those with minimal education.

  17. The Role of Sub- and Supercritical CO2 as "Processing Solvent" for the Recycling and Sample Preparation of Lithium Ion Battery Electrolytes.

    PubMed

    Nowak, Sascha; Winter, Martin

    2017-03-06

    Quantitative electrolyte extraction from lithium ion batteries (LIB) is of great interest for recycling processes. Following the generally valid EU legal guidelines for the recycling of batteries, 50 wt % of a LIB cell has to be recovered, which cannot be achieved without the electrolyte; hence, the electrolyte represents a target component for the recycling of LIBs. Additionally, fluoride or fluorinated compounds, as inevitably present in LIB electrolytes, can hamper or even damage recycling processes in industry and have to be removed from the solid LIB parts, as well. Finally, extraction is a necessary tool for LIB electrolyte aging analysis as well as for post-mortem investigations in general, because a qualitative overview can already be achieved after a few minutes of extraction for well-aged, apparently "dry" LIB cells, where the electrolyte is deeply penetrated or even gellified in the solid battery materials.

  18. External validation of the Society of Thoracic Surgeons General Thoracic Surgery Database.

    PubMed

    Magee, Mitchell J; Wright, Cameron D; McDonald, Donna; Fernandez, Felix G; Kozower, Benjamin D

    2013-11-01

    The Society of Thoracic Surgeons (STS) General Thoracic Surgery Database (GTSD) reports outstanding results for lung and esophageal cancer resection. However, a major weakness of the GTSD has been the lack of validation of this voluntary registry. The purpose of this study was to perform an external, independent audit to assess the accuracy of the data collection process and the quality of the database. An independent firm was contracted to audit 5% of sites randomly selected from the GTSD in 2011. Audits were performed remotely to maximize the number of audits performed and reduce cost. Auditors compared lobectomy cases submitted to the GTSD with the hospital operative logs to evaluate completeness of the data. In addition, 20 lobectomy records from each site were audited in detail. Agreement rates were calculated for 32 individual data elements, 7 data categories pertaining to patient status or care delivery, and an overall agreement rate for each site. Six process variables were also evaluated to assess best practice for data collection and submission. Ten sites were audited from the 222 participants. Comparison of the 559 submitted lobectomy cases with operative logs from each site identified 28 omissions, a 94.6% agreement rate (discrepancies/site range, 2 to 27). Importantly, cases not submitted had no mortality or major morbidity, indicating a lack of purposeful omission. The aggregate agreement rates for all categories were greater than 90%. The overall data accuracy was 94.9%. External audits of the GTSD validate the accuracy and completeness of the data. Careful examination of unreported cases demonstrated no purposeful omission or gaming. Although these preliminary results are quite good, it is imperative that the audit process is refined and continues to expand along with the GTSD to ensure reliability of the database. The audit results are currently being incorporated into educational and quality improvement processes to add further value. 
Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
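The audit's case-completeness check reduces to comparing the registry's submitted cases against the hospital operative log. The sketch below illustrates that comparison with invented case identifiers and counts; it is not the study's data or tooling.

```python
# Sketch: registry completeness audit -- find cases present in the operative
# log but missing from the registry submission, and the agreement rate.
# Case names and counts are illustrative only.
def completeness(submitted, operative_log):
    omissions = sorted(case for case in operative_log if case not in submitted)
    agreement = 100.0 * (len(operative_log) - len(omissions)) / len(operative_log)
    return omissions, agreement

operative_log = {"case-%03d" % i for i in range(1, 21)}   # 20 lobectomies in the log
submitted = operative_log - {"case-007"}                  # one case not submitted
omissions, agreement = completeness(submitted, operative_log)
```

Reviewing the omitted cases' outcomes separately, as the auditors did, is what distinguishes innocent under-reporting from purposeful omission of bad outcomes.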

  19. Hypermedia and visual technology

    NASA Technical Reports Server (NTRS)

    Walker, Lloyd

    1990-01-01

    Applications of a codified professional practice that uses visual representations of the thoughts and ideas of a working group are reported in order to improve productivity, problem solving, and innovation. This visual technology process was developed under the auspices of General Foods as part of a multi-year study. The study resulted in the validation of this professional service as a way to use art and design to facilitate productivity and innovation and to define new opportunities. It was also used by NASA for planning Lunar/Mars exploration and by other companies for general business and advanced strategic planning, developing new product concepts, and litigation support. General Foods has continued to use the service for packaging innovation studies.

  20. Design and validation of general biology learning program based on scientific inquiry skills

    NASA Astrophysics Data System (ADS)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended to teach science. The reality in the schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. The study aims to 1) analyze students' difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary school/Islamic elementary schools. The workflow of program design includes identifying learning difficulties of General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  1. A survey of Applied Psychological Services' models of the human operator

    NASA Technical Reports Server (NTRS)

    Siegel, A. I.; Wolf, J. J.

    1979-01-01

    A historical perspective is presented in terms of the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts are summarized which deal with visual information processing. They involve not whole-model development but a family of subroutines customized to add the human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.

  2. On the validity of the Middlesex Hospital Questionnaire: a comparison of diagnostic self-ratings in psychiatric out-patients, general practice patients, and 'normals' based on the Hebrew version.

    PubMed

    Dasberg, H; Shalif, I

    1978-09-01

    The short clinical diagnostic self-rating scale for psycho-neurotic patients (The Middlesex Hospital Questionnaire) was translated into everyday Hebrew and tested on 216 subjects for: (1) concurrent validity with clinical diagnoses; (2) discriminatory validity on a psychoneurotic gradient of psychiatric out-patients, general practice patients, and normal controls; (3) validity of subscales and discrete items using matrices of Spearman rank correlation coefficients; (4) construct validity using Guttman's smallest space analysis based on coefficients of similarity. The Hebrew MHQ was found to retain its validity and to be easily applicable in waiting-room situations. It is a useful method for generating and substantiating hypotheses on psychosomatic and psychosocial interrelationships. The MHQ seems to enable the expression of the 'neurotic load' of a general practice subpopulation as a centile on a scale, thereby corroborating previous epidemiological findings on the high prevalence of neurotic illness in general practice. There is reason to believe that the MHQ is a valid instrument for the analysis of symptom profiles of subjects involved in future drug trials.

  3. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing and run-time monitoring. Describing the behavior is characterized as a learning process in which general patterns can be easily characterized. The learning algorithm must choose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.
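The generalization step described above can be illustrated with a toy version: propose candidate properties and keep only those that hold on every input/output example. The candidate set and the example operation below are invented for illustration; the paper's actual algorithm for choosing a transform function is more general.

```python
# Sketch: discovering general patterns of behavior from input/output examples,
# here by filtering a fixed set of candidate invariants. Illustrative only.
def infer_invariants(examples):
    """Keep only candidate properties that hold on every (input, output) pair."""
    candidates = {
        "output_sorted":   lambda i, o: o == sorted(o),
        "same_length":     lambda i, o: len(i) == len(o),
        "same_elements":   lambda i, o: sorted(i) == sorted(o),
        "output_reversed": lambda i, o: o == i[::-1],
    }
    return {name for name, check in candidates.items()
            if all(check(i, o) for i, o in examples)}

# Input/output examples of an unknown list operation (it behaves like sorting):
examples = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5]), ([1], [1])]
invariants = infer_invariants(examples)
```

The surviving properties form a set of assertions about the operation's behavior that can then serve for validation, program testing, or run-time monitoring, exactly the uses the abstract names.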

  4. Semantics and pragmatics of social influence: how affirmations and denials affect beliefs in referent propositions.

    PubMed

    Gruenfeld, D H; Wyer, R S

    1992-01-01

    Ss read either affirmations or denials of target propositions that ostensibly came from either newspapers or reference volumes. Denials of the validity of a proposition that was already assumed to be false increased Ss' beliefs in this proposition. The effect generalized to beliefs in related propositions that could be used to support the target's validity. When denials came from a newspaper, their "boomerang effect" was nearly equal in magnitude to the direct effect of affirming the target proposition's validity. When Ss were asked explicitly to consider the implications of the assertions, however, the impact of denials was eliminated. Affirmations of a target proposition that was already assumed to be true also had a boomerang effect. Results have implications for the effects of both semantic and pragmatic processing of assertions on belief change.

  5. Improved Concrete Cutting and Excavation Capabilities for Crater Repair Phase 2

    DTIC Science & Technology

    2015-05-01

    production rate and ease of execution. The current ADR techniques, tactics, and procedures (TTPs) indicate cutting of pavement around a small crater...demonstrations and evaluations were used to create the techniques, tactics, and procedures (TTPs) manual describing the processes and requirements of...was more difficult when dowels were present. In general, the OUA demonstration validated that the new materials, equipment, and procedures were

  6. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  7. 7 CFR 1.216 - Appearance as a witness or production of documents on behalf of a party other than the United...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Appearance of USDA Employees as... employee of USDA served with a valid summons, subpoena, or other compulsory process demanding his or her... notify the head of his or her USDA agency and the General Counsel or his or her designee of the existence...

  8. 7 CFR 1.216 - Appearance as a witness or production of documents on behalf of a party other than the United...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Appearance of USDA Employees as... employee of USDA served with a valid summons, subpoena, or other compulsory process demanding his or her... notify the head of his or her USDA agency and the General Counsel or his or her designee of the existence...

  9. 7 CFR 1.216 - Appearance as a witness or production of documents on behalf of a party other than the United...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Appearance of USDA Employees as... employee of USDA served with a valid summons, subpoena, or other compulsory process demanding his or her... notify the head of his or her USDA agency and the General Counsel or his or her designee of the existence...

  10. 7 CFR 1.216 - Appearance as a witness or production of documents on behalf of a party other than the United...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Appearance of USDA Employees as... employee of USDA served with a valid summons, subpoena, or other compulsory process demanding his or her... notify the head of his or her USDA agency and the General Counsel or his or her designee of the existence...

  11. 7 CFR 1.216 - Appearance as a witness or production of documents on behalf of a party other than the United...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Appearance of USDA Employees as... employee of USDA served with a valid summons, subpoena, or other compulsory process demanding his or her... notify the head of his or her USDA agency and the General Counsel or his or her designee of the existence...

  12. Assessing Anger Expression: Construct Validity of Three Emotion Expression-Related Measures

    PubMed Central

    Jasinski, Matthew J.; Lumley, Mark A.; Latsch, Deborah V.; Schuster, Erik; Kinner, Ellen; Burns, John W.

    2016-01-01

    Self-report measures of emotional expression are common, but their validity to predict objective emotional expression, particularly of anger, is unclear. We tested the validity of the Anger Expression Inventory (AEI; Spielberger et al., 1985), Emotional Approach Coping Scale (EAC; Stanton, Kirk, Cameron & Danoff-Burg, 2000), and Toronto Alexithymia Scale-20 (TAS-20; Bagby, Taylor, & Parker, 1994) to predict objective anger expression in 95 adults with chronic back pain. Participants attempted to solve a difficult computer maze by following the directions of a confederate who treated them rudely and unjustly. Participants then expressed their feelings for 4 minutes. Blinded raters coded the videos for anger expression, and a software program analyzed expression transcripts for anger-related words. Analyses related each questionnaire to anger expression. The AEI anger-out scale predicted greater anger expression, as expected, but AEI anger-in did not. The EAC emotional processing scale predicted less anger expression, but the EAC emotional expression scale was unrelated to anger expression. Finally, the TAS-20 predicted greater anger expression. Findings support the validity of the AEI anger-out scale but raise questions about the other measures. The assessment of emotional expression by self-report is complex and perhaps confounded by general emotional experience, the specificity or generality of the emotion(s) assessed, and self-awareness limitations. Performance-based or clinician-rated measures of emotion expression are needed. PMID:27248355

  13. The Irrational Beliefs Inventory: psychometric properties and cross-cultural validation of its Arabic version.

    PubMed

    Al-Heeti, Khalaf N M; Hamid, Abdalla A R M; Alghorani, Mohammad A

    2012-08-01

    The purpose of this study was to examine the psychometric properties of the adapted Irrational Beliefs Inventory (IBI-34) and thus begin the process of assessing its adequacy for use in an Arab culture. The scale was translated and then administered to two samples of undergraduate students from the United Arab Emirates University. Data from 384 students were used in the main analysis, and data from 251 students were used for cross-validation. Principal components analysis (PCA) with varimax rotation followed by PCA with oblimin rotation yielded the same five components in both the main sample and the validation sample, thus consistent with the original Dutch study. Only 34 of the original 50 items were adequate to represent the five constructs. Cronbach's alpha coefficient for the overall scale was .76 and for the subscales ranged between .71 and .76, except for the Rigidity subscale, which was .54. The adapted IBI-34 correlated significantly and negatively with the General Health Questionnaire and Beck Depression Inventory, providing support for concurrent validity. Due to the non-significant differences between male and female participants on the total score of the IBI-34, the scale can be used for both sexes by summing across all items to give a total score that can be used as a general indicator of the irrational thinking.
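
    As a concrete aside (not code from the study itself), the internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from an item-score matrix; a minimal sketch, assuming respondents in rows and items in columns:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

    A value near .76, as reported for the overall IBI-34, would indicate acceptable internal consistency; values near .54, as for the Rigidity subscale, are typically considered weak.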

  14. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  15. Understanding diagnosis and management of dementia and guideline implementation in general practice: a qualitative study using the theoretical domains framework.

    PubMed

    Murphy, Kerry; O'Connor, Denise A; Browning, Colette J; French, Simon D; Michie, Susan; Francis, Jill J; Russell, Grant M; Workman, Barbara; Flicker, Leon; Eccles, Martin P; Green, Sally E

    2014-03-03

    Dementia is a growing problem, causing substantial burden for patients, their families, and society. General practitioners (GPs) play an important role in diagnosing and managing dementia; however, there are gaps between recommended and current practice. The aim of this study was to explore GPs' reported practice in diagnosing and managing dementia and to describe, in theoretical terms, the proposed explanations for practice that was and was not consistent with evidence-based guidelines. Semi-structured interviews were conducted with GPs in Victoria, Australia. The Theoretical Domains Framework (TDF) guided data collection and analysis. Interviews explored the factors hindering and enabling achievement of 13 recommended behaviours. Data were analysed using content and thematic analysis. This paper presents an in-depth description of the factors influencing two behaviours: assessing co-morbid depression using a validated tool, and conducting a formal cognitive assessment using a validated scale. A total of 30 GPs were interviewed. Most GPs reported that they did not assess for co-morbid depression using a validated tool as per recommended guidance. Barriers included the belief that depression can be adequately assessed using general clinical indicators and that validated tools provide little additional information (theoretical domain of 'Beliefs about consequences'); discomfort in using validated tools ('Emotion'), possibly due to limited training and confidence ('Skills'; 'Beliefs about capabilities'); limited awareness of the need for, and forgetting to conduct, a depression assessment ('Knowledge'; 'Memory, attention and decision processes'). Most reported practising in a manner consistent with the recommendation that a formal cognitive assessment using a validated scale be undertaken. 
Key factors enabling this were having an awareness of the need to conduct a cognitive assessment ('Knowledge'); possessing the necessary skills and confidence ('Skills'; 'Beliefs about capabilities'); and having adequate time and resources ('Environmental context and resources'). This is the first study to our knowledge to use a theoretical approach to investigate the barriers and enablers to guideline-recommended diagnosis and management of dementia in general practice. It has identified key factors likely to explain GPs' uptake of the guidelines. The results have informed the design of an intervention aimed at supporting practice change in line with dementia guidelines, which is currently being evaluated in a cluster randomised trial.

  16. At-line process analytical technology (PAT) for more efficient scale up of biopharmaceutical microfiltration unit operations.

    PubMed

    Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I

    2016-01-01

    Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers.
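
    The paper's mass-balance model is not reproduced in the abstract. As an illustration only, a textbook constant-volume diafiltration mass balance (a common starting point for tangential-flow filtration washout calculations, not necessarily the authors' model) predicts exponential solute removal with diavolumes:

```python
import math

def diafiltration_washout(c0, diavolumes, sieving=1.0):
    """Constant-volume diafiltration mass balance:
    C(DV) = C0 * exp(-S * DV), where S is the solute sieving
    coefficient and DV is the number of diavolumes exchanged."""
    return c0 * math.exp(-sieving * diavolumes)
```

    For a freely permeable impurity (S = 1), roughly 4.6 diavolumes remove 99% of the solute; an at-line concentration assay like the RP-UPLC method described above lets deviations from such a model be detected as they occur.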

  17. On the nature of persistence in dendrochronologic records with implications for hydrology

    USGS Publications Warehouse

    Landwehr, J.M.; Matalas, N.C.

    1986-01-01

    Hydrologic processes are generally held to be persistent and not secularly independent. Impetus for this view was given by Hurst in his work which dealt with properties of the rescaled range of many types of long geophysical records, in particular dendrochronologic records, in addition to hydrologic records. Mandelbrot introduced an infinite memory stationary process, the fractional Gaussian noise process (F), as an explanation for Hurst's observations. This is in contrast to other explanations which have been predicated on the implicit non-stationarity of the process underlying the construction of the records. In this work, we introduce a stationary finite memory process which arises naturally from a physical concept and show that it can accommodate the persistence structures observed for dendrochronological records more successfully than an F or any other of a family of related processes examined herein. Further, some question arises as to the empirical plausibility of an F process. Dendrochronologic records are used because they are widely held to be surrogates for records of average hydrologic phenomena and the length of these records allows one to explore questions of stochastic process structure which cannot be explored with great validity in the case of generally much shorter hydrologic records. © 1986.
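
    Hurst's rescaled-range analysis, on which the abstract builds, can be sketched as follows. This is a generic textbook implementation (dyadic, non-overlapping chunks), not the authors' procedure:

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic for a single record."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # cumulative departures from the mean
    return (y.max() - y.min()) / x.std() # range over standard deviation

def hurst_exponent(x, min_chunk=16):
    """Estimate H as the slope of log(R/S) against log(n)
    over dyadic, non-overlapping chunk sizes n."""
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs_vals.append(np.mean([rescaled_range(c) for c in chunks]))
        sizes.append(n)
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope
```

    For a secularly independent series, H is near 0.5; the persistence Hurst observed in long geophysical records corresponds to H above 0.5.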

  18. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  19. NOAA Unique CrIS/ATMS Processing System (NUCAPS) Environmental Data Record and Validation

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Nalli, N. R.; Gambacorta, A.; Iturbide, F.; Tan, C.; Zhang, K.; Wilson, M.; Reale, A.; Sun, B.; Mollner, A.

    2015-12-01

    This presentation introduces the NOAA sounding products to the AGU community. The NOAA Unique CrIS/ATMS Processing System (NUCAPS) operationally generates vertical profiles of atmospheric temperature (AVTP), moisture (AVMP), carbon trace-gas products (CO, CO2, and CH4) and other trace gases, as well as outgoing long-wave radiation (OLR). These products have been publicly released through NOAA CLASS from April 8, 2014 to the present. This paper presents the validation of these products. The AVTP and AVMP are validated by comparison against ECMWF analysis data and dedicated radiosondes. The dedicated radiosondes achieve higher quality and reach higher altitudes than conventional radiosondes; in addition, their launch times generally fall within 1 hour of the Suomi NPP overpass times. We also use ground-based lidar data provided by collaborators (The Aerospace Corporation) to validate the retrieved temperature profiles above 100 hPa up to 1 hPa. Both the NOAA VALAR and NPROVS validation systems are applied. The Suomi NPP FM5-Ed1A OLR from CERES, available through the end of May 2012, is used to validate real-time CrIS OLR environmental data records (EDRs) for NOAA/CPC operational precipitation verification. However, the quality of CrIS sensor data records (SDRs) for this time frame on CLASS is suboptimal, and many granules (more than three-quarters) are invalid. Using the current offline ADL-reprocessed CrIS SDR data from NOAA/STAR AIT, which includes all CrIS SDR improvements to date, we have subsequently obtained a well-distributed OLR EDR. This paper will also discuss the validation of the CrIS infrared ozone profile.

  20. Improving the quality of consent to randomised controlled trials by using continuous consent and clinician training in the consent process.

    PubMed

    Allmark, P; Mason, S

    2006-08-01

    To assess whether continuous consent, a process in which information is given to research participants at different stages in a trial, and clinician training in that process were effective when used by clinicians while gaining consent to the Total Body Hypothermia (TOBY) trial. The TOBY trial is a randomised controlled trial (RCT) investigating the use of whole-body cooling for neonates with evidence of perinatal asphyxia. Obtaining valid informed consent for the TOBY trial is difficult, but is a good test of the effectiveness of continuous consent. Semistructured interviews were conducted with 30 sets of parents who consented to the TOBY trial and with 10 clinicians who sought it by the continuous consent process. Analysis was focused on the validity of parental consent based on the consent components of competence, information, understanding and voluntariness. No marked problems with consent validity at the point of signature were observed in 19 of 27 (70%) couples. Problems were found mainly to lie with the competence and understanding of the parents: mothers, particularly, had problems with competence in the early stages of consent. Problems in understanding were primarily to do with side effects. Problems in both competence and understanding were observed to reduce markedly, particularly for mothers, in the post-signature phase, when further discussion took place. Randomisation was generally understood but unpopular. Information was not always given by clinicians in stages during the short period available before parents gave consent. Most clinicians, however, were able to give follow-up information. Consent validity was found to compare favourably with similar trials examined in the Euricon study. Adopting the elements of the continuous consent process and clinician training in RCTs should be considered by researchers, particularly when they have concerns about the quality of consent they are likely to obtain by using a conventional process.

  1. The Chelsea critical care physical assessment tool (CPAx): validation of an innovative new tool to measure physical morbidity in the general adult critical care population; an observational proof-of-concept pilot study.

    PubMed

    Corner, E J; Wood, H; Englebretsen, C; Thomas, A; Grant, R L; Nikoletou, D; Soni, N

    2013-03-01

    To develop a scoring system to measure physical morbidity in critical care - the Chelsea Critical Care Physical Assessment Tool (CPAx). The development process was iterative, involving content validity indices (CVI), a focus group and an observational study of 33 patients to test construct validity against the Medical Research Council score for muscle strength, peak cough flow, Australian Therapy Outcome Measures score, Glasgow Coma Scale score, Bloomsbury sedation score, Sequential Organ Failure Assessment score, Short Form 36 (SF-36) score, days of mechanical ventilation and inter-rater reliability. Trauma and general critical care patients from two London teaching hospitals. Users of the CPAx felt that it possessed content validity, giving a final CVI of 1.00 (P<0.05). Construct validation data showed moderate to strong significant correlations between the CPAx score and all secondary measures, apart from the mental component of the SF-36 which demonstrated weak correlation with the CPAx score (r=0.024, P=0.720). Reliability testing showed internal consistency of α=0.798 and inter-rater reliability of κ=0.988 (95% confidence interval 0.791 to 1.000) between five raters. This pilot work supports proof of concept of the CPAx as a measure of physical morbidity in the critical care population, and is a cogent argument for further investigation of the scoring system. Copyright © 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
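
    The inter-rater reliability figure quoted above is an agreement statistic across five raters; the two-rater form, Cohen's kappa, is the simplest instance and can be sketched as follows (illustrative only, not the study's computation, which would use a multi-rater generalization):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters scoring the same items."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n       # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)
```

    Kappa of 1 indicates perfect agreement beyond chance; values near 0.99, as reported for the CPAx, indicate near-perfect inter-rater reliability.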

  2. Construction and validation of a measure of integrative well-being in seven languages: the Pemberton Happiness Index.

    PubMed

    Hervás, Gonzalo; Vázquez, Carmelo

    2013-04-22

    We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that possibly happened the day before); the sum of these items produces a combined well-being index. A distinctive characteristic of this study is that to construct the scale, an initial pool of items, covering the remembered and experienced well-being domains, were subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Given the PHI's good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities.

  3. The validity of multiphase DNS initialized on the basis of single--point statistics

    NASA Astrophysics Data System (ADS)

    Subramaniam, Shankar

    1999-11-01

    A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.

  4. 38 CFR 12.23 - Recognition of valid claim against the General Post Fund.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... claim against the General Post Fund. 12.23 Section 12.23 Pensions, Bonuses, and Veterans' Relief... claim against the General Post Fund. Effective December 26, 1941, the assets of the estate of a veteran theretofore or thereafter deposited to the General Post Fund are subject to the valid claims of creditors...

  5. 38 CFR 12.23 - Recognition of valid claim against the General Post Fund.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... claim against the General Post Fund. 12.23 Section 12.23 Pensions, Bonuses, and Veterans' Relief... claim against the General Post Fund. Effective December 26, 1941, the assets of the estate of a veteran theretofore or thereafter deposited to the General Post Fund are subject to the valid claims of creditors...

  6. 38 CFR 12.23 - Recognition of valid claim against the General Post Fund.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... claim against the General Post Fund. 12.23 Section 12.23 Pensions, Bonuses, and Veterans' Relief... claim against the General Post Fund. Effective December 26, 1941, the assets of the estate of a veteran theretofore or thereafter deposited to the General Post Fund are subject to the valid claims of creditors...

  7. 38 CFR 12.23 - Recognition of valid claim against the General Post Fund.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... claim against the General Post Fund. 12.23 Section 12.23 Pensions, Bonuses, and Veterans' Relief... claim against the General Post Fund. Effective December 26, 1941, the assets of the estate of a veteran theretofore or thereafter deposited to the General Post Fund are subject to the valid claims of creditors...

  8. 38 CFR 12.23 - Recognition of valid claim against the General Post Fund.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... claim against the General Post Fund. 12.23 Section 12.23 Pensions, Bonuses, and Veterans' Relief... claim against the General Post Fund. Effective December 26, 1941, the assets of the estate of a veteran theretofore or thereafter deposited to the General Post Fund are subject to the valid claims of creditors...

  9. "Not just little adults": qualitative methods to support the development of pediatric patient-reported outcomes.

    PubMed

    Arbuckle, Rob; Abetz-Webb, Linda

    2013-01-01

    The US FDA and the European Medicines Agency (EMA) have issued incentives and laws mandating clinical research in pediatrics. While guidances for the development and validation of patient-reported outcomes (PROs) or health-related quality of life (HRQL) measures have been issued by these agencies, little attention has focused on pediatric PRO development methods. With reference to the literature, this article provides an overview of specific considerations that should be made with regard to the development of pediatric PRO measures, with a focus on performing qualitative research to ensure content validity. Throughout the questionnaire development process it is critical to use developmentally appropriate language and techniques to ensure outcomes have content validity, and will be reliable and valid within narrow age bands (0-2, 3-5, 6-8, 9-11, 12-14, 15-17 years). For qualitative research, sample sizes within those age bands must be adequate to demonstrate saturation while taking into account children's rapid growth and development. Interview methods, interview guides, and length of interview must all take developmental stage into account. Drawings, play-doh, or props can be used to engage the child. Care needs to be taken during cognitive debriefing, where repeated questioning can lead a child to change their answers, due to thinking their answer is incorrect. For the PROs themselves, the greatest challenge is in measuring outcomes in children aged 5-8 years. In this age range, while self-report is generally more valid, parent reports of observable behaviors are generally more reliable. As such, 'team completion' or a parent-administered child report is often the best option for children aged 5-8 years. For infants and very young children (aged 0-4 years), parent rating of observable behaviors is necessary, and, for adolescents and children aged 9 years and older, self-reported outcomes are generally valid and reliable. 
In conclusion, the development of PRO measures for use in children requires careful tailoring of qualitative methods, and performing research within narrow age bands. The best reporter should be carefully considered dependent on the child's age, developmental ability, and the concept being measured, and team completion should be considered alongside self-completion and observer measures.

  10. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena

    2013-06-01

    Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
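
    The decoder in question is a standard linear Kalman filter; a plain floating-point predict/update cycle (the baseline against which the SNN implementation is compared, not the NEF mapping itself) can be sketched as follows, with generic placeholder matrices:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: its covariance, z: new observation,
    A: state transition, C: observation model, Q/R: noise covariances."""
    x_pred = A @ x                            # predict state forward
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)     # correct with the observation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

    In a BMI setting, x would hold cursor kinematics and z a vector of binned spike counts; the NEF then maps this linear update onto spiking neuron populations.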

  11. The Mars Climate Database (MCD version 5.2)

    NASA Astrophysics Data System (ADS)

    Millour, E.; Forget, F.; Spiga, A.; Navarro, T.; Madeleine, J.-B.; Montabone, L.; Pottier, A.; Lefevre, F.; Montmessin, F.; Chaufray, J.-Y.; Lopez-Valverde, M. A.; Gonzalez-Galindo, F.; Lewis, S. R.; Read, P. L.; Huot, J.-P.; Desjean, M.-C.; MCD/GCM development Team

    2015-10-01

    The Mars Climate Database (MCD) is a database of meteorological fields derived from General Circulation Model (GCM) numerical simulations of the Martian atmosphere and validated using available observational data. The MCD includes complementary post-processing schemes such as high spatial resolution interpolation of environmental data and means of reconstructing the variability thereof. We have just completed (March 2015) the generation of a new version of the MCD, MCD version 5.2.

  12. Functional outcomes assessment in shoulder surgery

    PubMed Central

    Wylie, James D; Beckmann, James T; Granger, Erin; Tashjian, Robert Z

    2014-01-01

    The effective evaluation and management of orthopaedic conditions including shoulder disorders relies upon understanding the level of disability created by the disease process. Validated outcome measures are critical to the evaluation process. Traditionally, outcome measures have been physician derived objective evaluations including range of motion and radiologic evaluations. However, these measures can marginalize a patient’s perception of their disability or outcome. As a result of these limitations, patient self-reported outcomes measures have become popular over the last quarter century and are currently primary tools to evaluate outcomes of treatment. Patient reported outcomes measures can be general health related quality of life measures, health utility measures, region specific health related quality of life measures or condition specific measures. Several patients self-reported outcomes measures have been developed and validated for evaluating patients with shoulder disorders. Computer adaptive testing will likely play an important role in the arsenal of measures used to evaluate shoulder patients in the future. The purpose of this article is to review the general health related quality-of-life measures as well as the joint-specific and condition specific measures utilized in evaluating patients with shoulder conditions. Advances in computer adaptive testing as it relates to assessing dysfunction in shoulder conditions will also be reviewed. PMID:25405091

  13. A Student Assessment Tool for Standardized Patient Simulations (SAT-SPS): Psychometric analysis.

    PubMed

    Castro-Yuste, Cristina; García-Cabanillas, María José; Rodríguez-Cornejo, María Jesús; Carnicer-Fuentes, Concepción; Paloma-Castro, Olga; Moreno-Corral, Luis Javier

    2018-05-01

    The evaluation of the level of clinical competence acquired by the student is a complex process that must meet various requirements to ensure its quality. The psychometric analysis of the data collected by the assessment tools used is fundamental to guaranteeing the quality of that evaluation. The aim of this study was to conduct a psychometric analysis of an instrument which assesses clinical competence in nursing students at simulation stations with standardized patients in OSCE-format tests. The construct of clinical competence was operationalized as a set of observable and measurable behaviors, measured by the newly created Student Assessment Tool for Standardized Patient Simulations (SAT-SPS), which comprised 27 items. The categories assigned to the items were 'incorrect or not performed' (0), 'acceptable' (1), and 'correct' (2). Participants were 499 nursing students. Data were collected by two independent observers during the assessment of the students' performance at a four-station OSCE with standardized patients. Descriptive statistics were used to summarize the variables. The difficulty levels and floor and ceiling effects were determined for each item. Reliability was analyzed using internal consistency and inter-observer reliability. The validity analysis considered face validity, content and construct validity (through exploratory factor analysis), and criterion validity. Internal reliability and inter-observer reliability were both higher than 0.80. The construct validity analysis suggested a three-factor model accounting for 37.1% of the variance; these three factors were named 'Nursing process', 'Communication skills', and 'Safe practice'. A significant correlation was found between the scores obtained and the students' grades in general, as well as with the grades obtained in subjects with clinical content.
The assessment tool proved sufficiently reliable and valid for the assessment of the clinical competence of nursing students using standardized patients. This tool has three main components: the nursing process, communication skills, and safety management. Copyright © 2018 Elsevier Ltd. All rights reserved.
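
    The internal-consistency reliability reported in abstracts like this one is typically Cronbach's alpha. A minimal sketch of its computation on an (observations x items) score matrix; the toy scores below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy data: 4 students scored 0/1/2 on three perfectly consistent items
scores = [[0, 0, 0], [1, 1, 1], [2, 2, 2], [1, 1, 1]]
print(cronbach_alpha(scores))  # -> 1.0 (perfect internal consistency)
```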

  14. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  15. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  16. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  17. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  18. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  19. Validated simulator for space debris removal with nets and other flexible tethers applications

    NASA Astrophysics Data System (ADS)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capture process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid- and flexible-body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulating flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented, and programmatic interaction with the simulation is possible, e.g. for implementing control. The underlying model has been experimentally validated; because of the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, flown on a parabolic flight, was a downscaled version of the Envisat capture process: the prepacked net was launched towards the satellite model, expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics with the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. The validation results show that the model reflects the physics of the phenomenon accurately enough to be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation.
Results are presented and typical use cases are discussed, showing that the software may be used to design throw nets for space debris capture, but also to simulate the deorbitation process, a chaser control system or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.
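
    As a toy illustration of the kind of flexible-body time integration such a simulator performs, below is a single stretching element advanced with semi-implicit Euler. This is a drastic simplification of the Cosserat rod model named above (one degree of freedom, no bending or torsion), and all parameter values are invented:

```python
# One spring-damper element: a mass hanging below a fixed anchor at y = 0.
# Parameters (kg, N/m, N*s/m, m/s^2, m) are illustrative only.
m, k, c, g, L0 = 0.1, 50.0, 0.5, 9.81, 1.0
y, v, dt = -L0, 0.0, 1e-3   # start at natural length, at rest

for _ in range(20000):       # 20 s of simulated time
    stretch = -y - L0                   # elongation of the element
    f = k * stretch - c * v - m * g     # spring + damping + gravity
    v += dt * f / m                     # semi-implicit (symplectic) Euler:
    y += dt * v                         # update velocity, then position

# y settles near the static equilibrium -(L0 + m*g/k), about -1.0196 m
print(y)
```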

  20. Draft Plan for Characterizing Commercial Data Products in Support of Earth Science Research

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Terrie, Greg; Berglund, Judith

    2006-01-01

    This presentation introduces a draft plan for characterizing commercial data products for Earth science research. The general approach to commercial product verification and validation includes focused selection of readily available commercial remote sensing products that support Earth science research. Ongoing product verification and characterization will examine whether a product meets its specifications, as well as its fundamental properties, potential and limitations. Validation will encourage product evaluation for specific science research and applications. Specific commercial products included in the characterization plan are high-spatial-resolution multispectral (HSMS) imagery and LIDAR data products. Future efforts in this process will include briefing NASA Headquarters and modifying plans based on feedback, increased engagement with the science community and refinement of details, coordination with commercial vendors and the Joint Agency Commercial Imagery Evaluation (JACIE) for HSMS satellite acquisitions, acquiring waveform LIDAR data, and performing verification and validation.

  1. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of a change or modification to a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect the method scope or any performance parameter will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selecting appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  2. Validity and reliability of portfolio assessment of student competence in two dental school populations: a four-year study.

    PubMed

    Gadbury-Amyot, Cynthia C; McCracken, Michael S; Woldt, Janet L; Brennan, Robert L

    2014-05-01

    The purpose of this study was to empirically investigate the validity and reliability of portfolio assessment in two U.S. dental schools using a unified framework for validity. In the process of validation, it is not the test that is validated but rather the claims (interpretations and uses) about test scores that are validated. Kane's argument-based validation framework provided the structure for reporting results where validity claims are followed by evidence to support the argument. This multivariate generalizability theory study found that the greatest source of variance was attributable to faculty raters, suggesting that portfolio assessment would benefit from two raters' evaluating each portfolio independently. The results are generally supportive of holistic scoring, but analytical scoring deserves further research. Correlational analyses between student portfolios and traditional measures of student competence and readiness for licensure resulted in significant correlations between portfolios and National Board Dental Examination Part I (r=0.323, p<0.01) and Part II scores (r=0.268, p<0.05) and small and non-significant correlations with grade point average and scores on the Western Regional Examining Board (WREB) exam. It is incumbent upon the users of portfolio assessment to determine if the claims and evidence arguments set forth in this study support the proposed claims for and decisions about portfolio assessment in their respective institutions.

  3. Validity aspects of the patient feedback questionnaire on consultation skills (PFC), a promising learning instrument in medical education.

    PubMed

    Reinders, Marcel E; Blankenstein, Annette H; Knol, Dirk L; de Vet, Henrica C W; van Marwijk, Harm W J

    2009-08-01

    A focus on the communicator competency is considered an important requirement in helping physicians acquire consultation skills. A feedback questionnaire in which patients assess consultation skills might be a useful learning tool. An existing questionnaire on patient perception of patient-centeredness (PPPC) was adapted to cover the 'communicator' items in the competency profile. We assessed the face and content validity, the construct validity and the internal consistency of this new patient feedback on consultation skills (PFC) questionnaire. We assessed the face validity of the PFC by interviewing patients and general practice trainees (GPTs) during the development process. The content validity was determined by experts (n=10). First-year GPTs (n=23) collected 222 PFCs, from which the data were used to assess the construct validity (factor analysis), internal consistency, response rates and ceiling effects. The PFC adequately covers the corresponding 'communicator' competency (face and content validity). Factor analysis showed a one-dimensional construct. The internal consistency was high (Cronbach's alpha 0.89). For the single items, the response rate varied from 89.2% to 100%; the maximum score (ceiling effect) varied from 45.5% to 89.2%. The PFC appears to be a valid, internally consistent instrument and may be a valuable learning tool with which GPTs, other physicians and medical students can obtain feedback from patients regarding their consultation skills.

  4. Modelling retention and dispersion mechanisms of bluefin tuna eggs and larvae in the northwest Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Mariani, Patrizio; MacKenzie, Brian R.; Iudicone, Daniele; Bozec, Alexandra

    2010-07-01

    Knowledge of the early life history of most fish species in the Mediterranean Sea is sparse, and the processes affecting their recruitment are poorly understood. This is particularly true for bluefin tuna, Thunnus thynnus, even though it is one of the world's most valued fish species. Here we develop, apply and validate an individual-based coupled biological-physical oceanographic model of fish early life history in the Mediterranean Sea. We first validate the general structure of the coupled model with a 12-day Lagrangian drift study of anchovy (Engraulis encrasicolus) larvae in the Catalan Sea. The model reproduced the drift and growth of anchovy larvae as they drifted along the Catalan coast and yielded patterns similar to those observed in the field. We then applied the model to investigate transport and retention processes affecting the spatial distribution of bluefin tuna eggs and larvae during 1999-2003, and we compared modelled distributions with available field data collected in 2001 and 2003. Modelled and field distributions generally coincided and were patchy at mesoscales (10s-100s of km); larvae were most abundant in eddies and along frontal zones. We also identified probable locations of spawning bluefin tuna using hydrographic backtracking procedures; these locations were situated in a major salinity frontal zone and coincided with the distributions of an electronically tagged bluefin tuna and of commercial bluefin tuna fishing vessels. Moreover, we hypothesized that mesoscale processes are responsible for the aggregation and dispersion mechanisms in the area and showed that these processes were significantly correlated with atmospheric forcing over the NW Mediterranean Sea. Interannual variations in average summer air temperature can reduce the intensity of ocean mesoscale processes in the Balearic area and thus potentially affect bluefin tuna larvae.
These modelling approaches can increase understanding of bluefin tuna recruitment processes and eventually contribute to the management of bluefin tuna fisheries.
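
    The Lagrangian drift computation at the heart of such coupled models can be sketched, in heavily simplified form, as forward-Euler advection of particles in a prescribed velocity field. The solid-body-rotation field, step sizes and function name below are invented for illustration:

```python
import numpy as np

def advect(positions, velocity, dt, steps):
    """Forward-Euler advection of Lagrangian particles in a steady
    velocity field. `velocity(p)` returns the (u, v) components at p."""
    traj = [np.array(positions, dtype=float)]
    for _ in range(steps):
        p = traj[-1]
        traj.append(p + dt * np.array([velocity(q) for q in p]))
    return traj

# Solid-body rotation about the origin: particles should stay on a circle
omega = 1e-3                                   # angular velocity, rad/s
field = lambda p: (-omega * p[1], omega * p[0])
traj = advect([(1.0, 0.0)], field, dt=1.0, steps=100)
```

Real drift models replace the analytic field with gridded ocean-model currents and add higher-order time stepping and diffusion, but the particle loop has this shape.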

  5. A generalized architecture of quantum secure direct communication for N disjointed users with authentication

    NASA Astrophysics Data System (ADS)

    Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.

    2015-11-01

    In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by exploiting the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them by a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proved that an attacker cannot gain any information while intercepting either process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement.

  6. A generalized architecture of quantum secure direct communication for N disjointed users with authentication.

    PubMed

    Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A

    2015-11-18

    In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by exploiting the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them by a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proved that an attacker cannot gain any information while intercepting either process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement.

  7. Quantum Information Processing with Large Nuclear Spins in GaAs Semiconductors

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael N.; Loss, Daniel; Poggio, M.; Awschalom, D. D.

    2002-10-01

    We propose an implementation for quantum information processing based on coherent manipulations of nuclear spins I=3/2 in GaAs semiconductors. We describe theoretically an NMR method which involves multiphoton transitions and which exploits the nonequidistance of nuclear spin levels due to quadrupolar splittings. Starting from known spin anisotropies we derive effective Hamiltonians in a generalized rotating frame, valid for arbitrary I, which allow us to describe the nonperturbative time evolution of spin states generated by magnetic rf fields. We identify an experimentally observable regime for multiphoton Rabi oscillations. In the nonlinear regime, we find Berry phase interference.

  8. On extinction time of a generalized endemic chain-binomial model.

    PubMed

    Aydogmus, Ozgur

    2016-09-01

    We considered a chain-binomial epidemic model that does not confer immunity after infection. The mean field dynamics of the model have been analyzed and conditions for the existence of a stable endemic equilibrium determined. The behavior of the chain-binomial process is probabilistically linked to the mean field equation. As a result of this link, we were able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
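
    A minimal sketch of one realisation of a chain-binomial process without immunity (an SIS-type variant; the population size, infection probability and function name are illustrative, not the paper's exact formulation):

```python
import random

def extinction_time(N, I0, p, rng, max_steps=10_000):
    """Generations until no infectives remain in a chain-binomial model
    without immunity: each susceptible independently escapes all I
    infectives with probability (1 - p)**I, and every infective
    recovers back to susceptible after one generation."""
    I = I0
    for t in range(1, max_steps + 1):
        S = N - I
        infect_prob = 1.0 - (1.0 - p) ** I
        I = sum(rng.random() < infect_prob for _ in range(S))
        if I == 0:
            return t
    return max_steps  # censored: epidemic still alive

# Subcritical example (N * p < 1): the epidemic dies out quickly
rng = random.Random(42)
times = [extinction_time(N=30, I0=3, p=0.02, rng=rng) for _ in range(200)]
mean_T = sum(times) / len(times)
```

Raising N * p above 1 moves the process into the endemic regime the abstract analyzes, where the mean extinction time grows at least exponentially with N.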

  9. Theory of slightly fluctuating ratchets

    NASA Astrophysics Data System (ADS)

    Rozenbaum, V. M.; Shapochkina, I. V.; Lin, S. H.; Trakhtenberg, L. I.

    2017-04-01

    We consider a Brownian particle moving in a slightly fluctuating potential. Using the perturbation theory on small potential fluctuations, we derive a general analytical expression for the average particle velocity valid for both flashing and rocking ratchets with arbitrary, stochastic or deterministic, time dependence of potential energy fluctuations. The result is determined by the Green's function for diffusion in the time-independent part of the potential and by the features of correlations in the fluctuating part of the potential. The generality of the result allows describing complex ratchet systems with competing characteristic times; these systems are exemplified by the model of a Brownian photomotor with relaxation processes of finite duration.

  10. Surgery resident selection and evaluation. A critical incident study.

    PubMed

    Edwards, J C; Currie, M L; Wade, T P; Kaminski, D L

    1993-03-01

    This article reports a study of the process of selecting and evaluating general surgery residents. In personnel psychology terms, a job analysis of general surgery was conducted using the Critical Incident Technique (CIT). The researchers collected 235 critical incidents through structured interviews with 10 general surgery faculty members and four senior residents. The researchers then directed the surgeons in a two-step process of sorting the incidents into categories and naming the categories. The final essential categories of behavior to define surgical competence were derived through discussion among the surgeons until a consensus was formed. Those categories are knowledge/self-education, clinical performance, diagnostic skills, surgical skills, communication skills, reliability, integrity, compassion, organization skills, motivation, emotional control, and personal appearance. These categories were then used to develop an interview evaluation form for selection purposes and a performance evaluation form to be used throughout residency training. Thus a continuum of evaluation was established. The categories and critical incidents were also used to structure the interview process, which has demonstrated increased interview validity and reliability in many other studies. A handbook for structuring the interviews faculty members conduct with applicants was written, and an interview training session was held with the faculty. The process of implementation of the structured selection interviews is being documented currently through qualitative research.

  11. Anger Assessment in Clinical and Nonclinical Populations: Further Validation of the State-Trait Anger Expression Inventory-2.

    PubMed

    Lievaart, Marien; Franken, Ingmar H A; Hovens, Johannes E

    2016-03-01

    The most commonly used instrument for measuring anger is the State-Trait Anger Expression Inventory-2 (STAXI-2; Spielberger, 1999). This study further examines the validity of the STAXI-2 and compares anger scores between several clinical and nonclinical samples. Reliability, concurrent, and construct validity were investigated in Dutch undergraduate students (N = 764), a general population sample (N = 1211), and psychiatric outpatients (N = 226). The results support the reliability and validity of the STAXI-2. Concurrent validity was strong, with meaningful correlations between the STAXI-2 scales and anger-related constructs in both clinical and nonclinical samples. Importantly, patients showed higher experience and expression of anger than the general population sample. Additionally, forensic outpatients with addiction problems reported higher Anger Expression-Out than general psychiatric outpatients. Our conclusion is that the STAXI-2 is a suitable instrument to measure both the experience and the expression of anger in both general and clinical populations. © 2016 Wiley Periodicals, Inc.

  12. The Recovery Knowledge Inventory for Measurement of Nursing Student Views on Recovery-oriented Mental Health Services.

    PubMed

    Happell, Brenda; Byrne, Louise; Platania-Phung, Chris

    2015-01-01

    Recovery-oriented services are a goal for policy and practice in the Australian mental health service system. Evidence-based reform requires an instrument to measure knowledge of recovery concepts. The Recovery Knowledge Inventory (RKI) was designed for this purpose; however, its suitability and validity for student health professionals have not been evaluated. The purpose of the current article is to report the psychometric features of the RKI for measuring nursing students' views on recovery. The RKI, a self-report measure, consists of four scales: (I) Roles and Responsibilities, (II) Non-Linearity of the Recovery Process, (III) Roles of Self-Definition and Peers, and (IV) Expectations Regarding Recovery. Confirmatory and exploratory factor analyses of the baseline data (n = 167) were applied to assess validity and reliability. Exploratory factor analyses generally replicated the item structure suggested by the three main scales; however, more stringent analyses (confirmatory factor analysis) did not provide strong support for convergent validity. A refined RKI with 16 items had internal reliabilities of α = .75 for Roles and Responsibilities, α = .49 for Roles of Self-Definition and Peers, and α = .72 for Recovery as a Non-Linear Process. If the RKI is to be applied to nursing student populations, the conceptual underpinning of the instrument needs to be reworked, and new items should be generated to evaluate and improve scale validity and reliability.

  13. Validity and reliability of instruments aimed at measuring Evidence-Based Practice in Physical Therapy: a systematic review of the literature.

    PubMed

    Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest

    2014-12-01

    Our goal was to compile and analyse the characteristics - especially validity and reliability - of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. A systematic review was conducted with data from exclusively quantitative studies, synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and of journals with summarized evidence, followed by a directed search of the bibliographical references of the main articles found in the primary search. The studies included were assigned to members of the research team, who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, in all cases they were found to be limited as regards the 'constructs' included, and all lacked comprehensiveness in the validation of the psychometric tests used. It seems that developing a rigorous assessment instrument for EBP in physical therapy continues to be a challenge. © 2014 John Wiley & Sons, Ltd.

  14. How Does Sexual Minority Stigma “Get Under the Skin”? A Psychological Mediation Framework

    PubMed Central

    Hatzenbuehler, Mark L.

    2009-01-01

    Sexual minorities are at increased risk for multiple mental health burdens compared to heterosexuals. The field has identified two distinct determinants of this risk, including group-specific minority stressors and general psychological processes that are common across sexual orientations. The goal of the present paper is to develop a theoretical framework that integrates the important insights from these literatures. The framework postulates that (a) sexual minorities confront increased stress exposure resulting from stigma; (b) this stigma-related stress creates elevations in general emotion dysregulation, social/interpersonal problems, and cognitive processes conferring risk for psychopathology; and (c) these processes in turn mediate the relationship between stigma-related stress and psychopathology. It is argued that this framework can, theoretically, illuminate how stigma adversely affects mental health and, practically, inform clinical interventions. Evidence for the predictive validity of this framework is reviewed, with particular attention paid to illustrative examples from research on depression, anxiety, and alcohol use disorders. PMID:19702379

  15. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and the potential of the proposed model for simulating the photon transport process in free space.

  16. Aggregation, Validation, and Generalization of Qualitative Data - Methodological and Practical Research Strategies Illustrated by the Research Process of an Empirically Based Typology.

    PubMed

    Weis, Daniel; Willems, Helmut

    2017-06-01

    The article deals with the question of how aggregated data that allow for generalizable insights can be generated from single-case-based qualitative investigations. Two central challenges of qualitative social research are thereby outlined: first, researchers must ensure that the single-case data can be aggregated and condensed so that new collective structures can be detected; second, they must apply methods and practices that allow for the generalization of the results beyond the specific study. In the following, we demonstrate how and under what conditions these challenges can be addressed in research practice. To this end, the research process behind the construction of an empirically based typology is described. A qualitative study, conducted within the framework of the Luxembourg Youth Report, is used to illustrate this process. Specifically, strategies are presented which increase the likelihood of generalizability or transferability of the results, while also highlighting their limitations.

  17. Mining Twitter Data to Augment NASA GPM Validation

    NASA Technical Reports Server (NTRS)

    Teng, Bill; Albayrak, Arif; Huffman, George; Vollmer, Bruce; Loeser, Carlee; Acker, Jim

    2017-01-01

    The Twitter data stream is an important new source of real-time and historical global information for potentially augmenting the validation program of NASA's Global Precipitation Measurement (GPM) mission. There have been other similar uses of Twitter, though mostly related to natural hazards monitoring and management. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. Twitter thus provides a large crowd for crowdsourcing. During a 24-hour period in the middle of the snow storm this past March in the U.S. Northeast, we collected more than 13,000 relevant precipitation tweets with exact geolocation. The overall objective of our project is to determine the extent to which processed tweets can provide additional information that improves the validation of GPM data. Though our current effort focuses on tweets and precipitation, our approach is general and applicable to other social media and other geophysical measurements. Specifically, we have developed an operational infrastructure for processing tweets, in a format suitable for analysis with GPM data; engaged with potential participants, both passive and active, to "enrich" the Twitter stream; and inter-compared "precipitation" tweet data, ground station data, and GPM retrievals. In this presentation, we detail the technical capabilities of our tweet processing infrastructure, including data abstraction, feature extraction, search engine, context-awareness, real-time processing, and high-volume (big) data processing; various means for "enriching" the Twitter stream; and results of inter-comparisons.
Our project should bring a new kind of visibility to Twitter and engender a new kind of appreciation of the value of Twitter by the science research communities.

  18. Mining Twitter Data Stream to Augment NASA GPM Validation

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Albayrak, A.; Huffman, G. J.; Vollmer, B.

    2017-12-01

    The Twitter data stream is an important new source of real-time and historical global information for potentially augmenting the validation program of NASA's Global Precipitation Measurement (GPM) mission. There have been other similar uses of Twitter, though mostly related to natural hazards monitoring and management. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. Twitter thus provides a large crowd for crowdsourcing. During a 24-hour period in the middle of the snow storm this past March in the U.S. Northeast, we collected more than 13,000 relevant precipitation tweets with exact geolocation. The overall objective of our project is to determine the extent to which processed tweets can provide additional information that improves the validation of GPM data. Though our current effort focuses on tweets and precipitation, our approach is general and applicable to other social media and other geophysical measurements. Specifically, we have developed an operational infrastructure for processing tweets, in a format suitable for analysis with GPM data; engaged with potential participants, both passive and active, to "enrich" the Twitter stream; and inter-compared "precipitation" tweet data, ground station data, and GPM retrievals. In this presentation, we detail the technical capabilities of our tweet processing infrastructure, including data abstraction, feature extraction, search engine, context-awareness, real-time processing, and high-volume (big) data processing; various means for "enriching" the Twitter stream; and results of inter-comparisons.
Our project should bring a new kind of visibility to Twitter and engender a new kind of appreciation of the value of Twitter by the science research communities.

  19. Brine flow in heated geologic salt.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlman, Kristopher L.; Malama, Bwalya

    This report summarizes the physical processes, primary governing equations, solution approaches, and historic testing related to brine migration in geologic salt. Although most information presented in this report is not new, we synthesize a large amount of material scattered across dozens of laboratory reports, journal papers, conference proceedings, and textbooks. We present a mathematical description of the governing brine flow mechanisms in geologic salt. We outline the general coupled thermal, multi-phase hydrologic, and mechanical processes. We derive these processes' governing equations, which can be used to predict brine flow. These equations are valid under a wide variety of conditions applicable to radioactive waste disposal in rooms and boreholes excavated into geologic salt.
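    One standard building block of the multi-phase hydrologic formulation outlined above is Darcy's law extended to two phases; the form and symbols below are the conventional ones, not taken from the report itself:

```latex
% Two-phase extension of Darcy's law (standard form; notation is the
% usual one, not necessarily the report's):
%   q_a: volumetric flux of phase a, k: intrinsic permeability,
%   k_{ra}: relative permeability, mu_a: viscosity, p_a: phase
%   pressure, rho_a: density, g: gravitational acceleration
\mathbf{q}_{\alpha} = -\frac{k\,k_{r\alpha}}{\mu_{\alpha}}
  \left( \nabla p_{\alpha} - \rho_{\alpha}\,\mathbf{g} \right),
\qquad \alpha \in \{\text{brine},\ \text{gas}\}
```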

  20. Generalizing disease management program results: how to get from here to there.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-07-01

    For a disease management (DM) program, the ability to generalize results from the intervention group to the population, to other populations, or to other diseases is as important as demonstrating internal validity. This article provides an overview of the threats to external validity of DM programs, and offers methods to improve the capability for generalizing results obtained through the program. The external validity of DM programs must be evaluated even before program selection and implementation are begun with a prospective new client. Any fundamental differences in characteristics between individuals in an established DM program and in a new population/environment may limit the ability to generalize.

  1. Development of reference practices for the calibration and validation of atmospheric composition satellites

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Christopher; Bojkov, Bojan

    The Committee on Earth Observation Satellites (CEOS)/Working Group on Calibration and Validation (WGCV) is developing a global data quality strategy for the Global Earth Observation System of Systems (GEOSS). In this context, CEOS WGCV elaborated the GEOSS Quality Assurance framework for Earth Observation (QA4EO, http://qa4eo.org). QA4EO encompasses a documentary framework and a set of ten guidelines, which describe the top-level approach of QA activities and the key requirements that drive the QA process. QA4EO applies to virtually all Earth Observation data. Calibration and validation activities are a cornerstone of the GEOSS data quality strategy. Proper uncertainty assessment of the satellite measurements and their derived data products is essential and needs to be continuously monitored and traceable to standards. As a practical application of QA4EO, CEOS WGCV has undertaken to establish a set of best practices, methodologies, and guidelines for satellite calibration and validation. The present paper reviews current developments of best practices and guidelines for the validation of atmospheric composition satellites. Aimed as a community effort, the approach is to start with current practices that can be improved over time. The present review addresses current validation capabilities, achievements, caveats, harmonization efforts, and challenges. Terminologies and general principles of validation are recalled. Going beyond elementary definitions of validation, such as the assessment of uncertainties, the specific GEOSS context also requires considering the validation of individual service components and validation against user requirements.

  2. On the Application of Image Processing Methods for Bubble Recognition to the Study of Subcooled Flow Boiling of Water in Rectangular Channels

    PubMed Central

    Paz, Concepción; Conde, Marcos; Porteiro, Jacobo; Concheiro, Miguel

    2017-01-01

    This work introduces the use of machine vision in the massive bubble recognition process, which supports the validation of boiling models involving bubble dynamics, as well as nucleation frequency, active site density, and bubble size. The two algorithms presented are meant to be run on quite standard images of the bubbling process, recorded in general-purpose boiling facilities. The recognition routines are easily adaptable to other facilities if a minimum number of precautions are taken in the setup and in the treatment of the information. Both the side and front projections of the subcooled flow-boiling phenomenon over a plain plate are covered. Once all of the intended bubbles have been located in space and time, proper post-processing of the recorded data makes it possible to track each of the recognized bubbles, sketch their trajectories and size evolution, locate the nucleation sites, compute their diameters, and so on. After validating the algorithm's output against the human eye and data from other researchers, machine vision systems have been demonstrated to be a very valuable option for successfully performing the recognition process, even though the optical analysis of bubbles was not set as the main goal of the experimental facility. PMID:28632158
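    The core of such a bubble-recognition step can be sketched in a few lines: threshold a grayscale frame, label connected dark regions as bubble candidates, and report each blob's centroid and equivalent diameter. The tiny image, threshold value, and 4-connectivity below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of massive bubble recognition: threshold, label
# connected components, report centroid and equivalent diameter.
from collections import deque
from math import sqrt, pi

def label_bubbles(image, threshold=128):
    """Return a list of (centroid_row, centroid_col, equiv_diameter)."""
    rows, cols = len(image), len(image[0])
    mask = [[px < threshold for px in row] for row in image]  # bubbles appear dark
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one 4-connected component
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                # diameter of the circle with the same pixel area
                blobs.append((cy, cx, 2 * sqrt(area / pi)))
    return blobs

# Tiny synthetic frame: two dark "bubbles" on a bright background.
frame = [
    [255, 255, 255, 255, 255, 255],
    [255,  10,  10, 255, 255, 255],
    [255,  10,  10, 255,  20, 255],
    [255, 255, 255, 255,  20, 255],
    [255, 255, 255, 255, 255, 255],
]
print(len(label_bubbles(frame)))  # prints 2
```

    Tracking across frames then reduces to matching blob centroids between consecutive images, which is how trajectories and size evolution can be reconstructed.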

  3. Psychometric survey of nursing competences illustrated with nursing students and apprentices

    PubMed

    Reichardt, Christoph; Wernecke, Frances; Giesler, Marianne; Petersen-Ewert, Corinna

    2016-09-01

    Background: The term competences is discussed differently across the various scientific disciplines, and there is no internationally or interdisciplinarily accepted definition of the term. Problem: So far, there are few practical, reliable, and valid measuring instruments for surveying general nursing skills. This article describes the process of adapting a measuring instrument for medical skills into one for nursing competences. Method: The measurement quality of the questionnaire was audited using a sample drawn from two different courses of study and from regular nursing apprentices. Another research question focused on whether the adapted questionnaire is able to detect a change in nursing skills. To assess reliability and validity, data from the first measurement point were used (n = 240). Data from the second measurement point, conducted two years later (n = 163), were used to check whether the questionnaire can detect a change in nursing competences. Results/Conclusions: The results indicate that the adapted version of the questionnaire is reliable and valid. The questionnaire was also able to detect significant, partly even strong, effects of change in nursing skills (d = 0.17–1.04). It was thus possible to adapt the questionnaire for the measurement of nursing competences.
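    The effect sizes reported above (d = 0.17–1.04) are Cohen's d values. A minimal sketch of one common variant (mean change divided by the pooled standard deviation) is shown below, with invented pre/post scores rather than the study's data.

```python
# Hedged sketch of the Cohen's d effect-size statistic, using one
# common variant (mean difference / pooled SD) and invented data.
from statistics import mean, stdev
from math import sqrt

def cohens_d(pre, post):
    sd_pooled = sqrt((stdev(pre) ** 2 + stdev(post) ** 2) / 2)
    return (mean(post) - mean(pre)) / sd_pooled

# Hypothetical mean competence scores before and after training.
pre  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3]
post = [3.6, 3.4, 3.9, 3.5, 3.2, 3.7]
print(round(cohens_d(pre, post), 2))
```

    By the usual rules of thumb, d around 0.2 is a small effect and d around 0.8 or above a large one, which is why the study describes its upper range as "partly even strong" effects.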

  4. Validity and reliability of the Turkish version of the DSM-5 Generalized Anxiety Disorder Severity Scale for children aged 11–17 years

    PubMed

    Yalın Sapmaz, Şermin; Özek Erkuran, Handan; Ergin, Dilek; Öztürk, Masum; Şen Celasin, Nesrin; Karaarslan, Duygu; Aydemir, Ömer

    2018-02-23

    Background/aim: This study aimed to assess the validity and reliability of the Turkish version of the DSM-5 Generalized Anxiety Disorder Severity Scale - Child Form. Materials and methods: The study sample consisted of 32 patients treated in a child psychiatry unit and diagnosed with generalized anxiety disorder, and 98 healthy volunteers who were attending middle or high school during the study period. For the assessment, the Screen for Child Anxiety and Related Emotional Disorders (SCARED) was also used along with the DSM-5 Generalized Anxiety Disorder Severity Scale - Child Form. Results: Regarding reliability analyses, the Cronbach alpha internal consistency coefficient was calculated as 0.932. The test-retest correlation coefficient was calculated as r = 0.707. As for construct validity, one factor that could explain 62.6% of the variance was obtained, consistent with the original construct of the scale. As for concurrent validity, the scale showed a high correlation with the SCARED. Conclusion: It was concluded that the Turkish version of the DSM-5 Generalized Anxiety Disorder Severity Scale - Child Form can be utilized as a valid and reliable tool both in clinical practice and for research purposes.

  5. Enhancing the cross-cultural adaptation and validation process: linguistic and psychometric testing of the Brazilian-Portuguese version of a self-report measure for dry eye.

    PubMed

    Santo, Ruth Miyuki; Ribeiro-Ferreira, Felipe; Alves, Milton Ruiz; Epstein, Jonathan; Novaes, Priscila

    2015-04-01

    To provide a reliable, validated, and culturally adapted instrument that may be used in monitoring dry eye in Brazilian patients and to discuss the strategies for the enhancement of the cross-cultural adaptation and validation process of a self-report measure for dry eye. The cross-cultural adaptation process (CCAP) of the original Ocular Surface Disease Index (OSDI) into Brazilian-Portuguese was conducted using a 9-step guideline. The synthesis of translations was tested twice, for face and content validity, by different subjects (focus groups and cognitive interviews). The expert committee contributed on several steps, and back translations were based on the final rather than the prefinal version. For validation, the adapted version was applied in a prospective longitudinal study to 101 patients from the Dry Eye Clinic at the General Hospital of the University of São Paulo, Brazil. Simultaneously to the OSDI, patients answered the short form-36 health survey (SF-36) and the 25-item visual function questionnaire (VFQ-25) and underwent clinical evaluation. Internal consistency, test-retest reliability, and measure validity were assessed. Cronbach's alpha value of the cross-culturally adapted Brazilian-Portuguese version of the OSDI was 0.905, and the intraclass correlation coefficient was 0.801. There was a statistically significant difference between OSDI scores in patients with dry eye (41.15 ± 27.40) and without dry eye (17.88 ± 17.09). There was a negative association between OSDI and VFQ-25 total score (P < 0.01) and between the OSDI and five SF-36 domains. OSDI scores correlated positively with lissamine green and fluorescein staining scores (P < 0.001) and negatively with Schirmer test I and tear break-up time values (P < 0.001). 
Although most of the reviewed guidelines on CCAP involve well-defined steps (translation, synthesis/reconciliation, back translation, expert committee review, pretesting), the proposed methodological steps have not been applied in a uniform way. The translation and adaptation process requires skill, knowledge, experience, and a considerable investment of time to maximize the attainment of semantic, idiomatic, experiential, and conceptual equivalence between the source and target questionnaires. A well-established guideline resulted in a culturally adapted Brazilian-Portuguese version of the OSDI, tested and validated on a sample of Brazilian population, and proved to be a valid and reliable instrument for assessing patients with dry eye syndrome in Brazil. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. The Role of Anchor Stations in the Validation of Earth Observation Satellite Data and Products. The Valencia and the Alacant Anchor Stations

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Geraldo Ferreira, A.; Saleh-Contell, Kauzar

    Space technology provides humanity and science with a revolutionary global view of the Earth through the acquisition of Earth Observation satellite data. Satellites capture information over different spatial and temporal scales and assist in understanding natural climate processes and in detecting and explaining climate change. Accurate Earth Observation data are needed to describe climate processes by improving the parameterisations of different climate elements. Algorithms to produce geophysical parameters from raw satellite observations should go through selection processes or participate in inter-comparison programmes to ensure performance reliability. Geophysical parameter datasets obtained from satellite observations should pass quality control before they are accepted in global databases for impact, diagnostic, or sensitivity studies. Calibration and Validation, or simply "Cal/Val", is the activity that endeavours to ensure that remote sensing products are highly consistent and reproducible. This is an evolving scientific activity that is becoming increasingly important as more long-term studies on global change are undertaken and new satellite missions are launched. Calibration is the process of quantitatively defining the system responses to known, controlled signal inputs. Validation refers to the process of assessing, by independent means, the quality of the data products derived from the system outputs. These definitions are generally accepted and most often used in the remote sensing context to refer specifically to sensor radiometric calibration and geophysical parameter validation, respectively. Anchor Stations are carefully selected locations at which instruments measure quantities that are needed to run, calibrate, or validate models and algorithms. These measurements are needed to quantitatively evaluate satellite data and convert them into geophysical information. The instruments collect measurements of basic quantities over a long timescale. 
Measurements are made of meteorological and hydrological background data, and of quantities not readily assessed at operational stations. Anchor Stations also offer infrastructure for validation experiments, which are more detailed measurements over shorter intensive observation periods. The Valencia Anchor Station is demonstrating its capabilities as a reference validation site in the framework of low-spatial-resolution remote sensing missions such as CERES, GERB, and SMOS. The Alacant Anchor Station is a reference site for studies of the interactions between desertification and climate. This paper presents the activities carried out so far at both Anchor Stations, the precise and detailed ground and aircraft experiments carefully designed to develop a specific methodology for validating low-spatial-resolution satellite data and products, and the knowledge exchange currently under way between the University of Valencia, Spain, and FUNCEME, Brazil, on common objectives of mutual interest.

  7. Gold-standard evaluation of a folksonomy-based ontology learning model

    NASA Astrophysics Data System (ADS)

    Djuana, E.

    2018-03-01

    Folksonomy, a product of the collaborative tagging process, has been acknowledged for its potential to improve the categorization and searching of web resources. However, folksonomies contain ambiguities, such as synonymy and polysemy, as well as differing levels of abstraction (the generality problem). To maximize their potential, methods have been proposed for associating folksonomy tags with semantics and structural relationships, such as ontology learning. This paper evaluates our previous work in ontology learning using the gold-standard evaluation approach, in comparison with a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, further validating our approach, which had previously been validated using a task-based evaluation approach.
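    A gold-standard evaluation of this kind typically compares the concept set produced by the learning method against a reference ("gold") ontology, for example via lexical precision, recall, and F1. The sketch below illustrates that comparison with invented term sets; it is not the paper's actual metric suite.

```python
# Hedged sketch of gold-standard evaluation for ontology learning:
# lexical precision/recall/F1 of learned concepts vs. a gold ontology.
def precision_recall_f1(learned, gold):
    learned, gold = set(learned), set(gold)
    tp = len(learned & gold)                      # concepts found in both
    precision = tp / len(learned) if learned else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented concept sets for illustration only.
learned_concepts = {"photo", "image", "camera", "lens", "travel"}
gold_concepts = {"photo", "image", "camera", "photography"}

p, r, f = precision_recall_f1(learned_concepts, gold_concepts)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

    Structural agreement (taxonomic relations, not just concept labels) is usually scored separately, e.g. by comparing parent-child pairs in the same way.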

  8. Dynamics of a Chlorophyll Dimer in Collective and Local Thermal Environments

    DOE PAGES

    Merkli, M.; Berman, Gennady Petrovich; Sayre, Richard Thomas; ...

    2016-01-30

    Here we present a theoretical analysis of exciton transfer and decoherence effects in a photosynthetic dimer interacting with collective (correlated) and local (uncorrelated) protein-solvent environments. Our approach is based on the framework of the spin-boson model. We derive explicitly the thermal relaxation and decoherence rates of the exciton transfer process, valid for arbitrary temperatures and for arbitrary (in particular, large) interaction constants between the dimer and the environments. We establish a generalization of the Marcus formula, giving reaction rates for dimer levels that may be individually and asymmetrically coupled to environments. We rigorously identify parameter regimes for the validity of the generalized Marcus formula. The existence of long-lived quantum coherences at ambient temperatures emerges naturally from our approach.
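    For orientation, the classical (non-generalized) Marcus rate that the paper's result extends can be written in common notation, with V the electronic coupling, λ the reorganization energy, and ΔG the driving force; this is the textbook form, not the paper's generalized expression:

```latex
% Classical non-adiabatic Marcus transfer rate (textbook form,
% common notation; the paper derives a generalization of this)
k = \frac{2\pi}{\hbar}\,|V|^{2}\,
    \frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
    \exp\!\left[ -\frac{(\Delta G + \lambda)^{2}}{4\lambda k_{B}T} \right]
```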

  9. Risk perception and information processing: the development and validation of a questionnaire to assess self-reported information processing.

    PubMed

    Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K

    2012-01-01

    The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research is the unavailability of a validated questionnaire on information processing. This article presents two studies describing the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15-item questionnaire, and completed the questionnaire again two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for purposes of cross-validation of the results. A two-factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities. © 2011 Society for Risk Analysis.
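    The reliability analyses mentioned above typically report Cronbach's alpha for each subscale. As a minimal, hedged illustration of how that statistic is computed, the sketch below uses invented responses (five respondents, four items), not the study's data.

```python
# Hedged sketch of Cronbach's alpha for a questionnaire subscale:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
from statistics import variance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Four items answered by five respondents (rows = items; invented data).
responses = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
    [5, 3, 5, 2, 4],
]
print(round(cronbach_alpha(responses), 3))  # prints 0.914
```

    Values above roughly 0.7 are conventionally treated as acceptable internal consistency, which is the benchmark validation studies like this one report against.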

  10. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified that provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. 
Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  11. Validation of an In-Water, Tower-Shading Correction Scheme

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Doyle, John P.; Zibordi, Giuseppe; vanderLinde, Dirk

    2003-01-01

    Large offshore structures used for the deployment of optical instruments can significantly perturb the intensity of the light field surrounding the optical measurement point, where different portions of the visible spectrum are subject to different shadowing effects. These effects degrade the quality of the acquired optical data and can reduce the accuracy of several derived quantities, such as those obtained by applying bio-optical algorithms directly to the shadow-perturbed data. As a result, optical remote sensing calibration and validation studies can be impaired if shadowing artifacts are not fully accounted for. In this work, the general in-water shadowing problem is examined for a particular case study. Backward Monte Carlo (MC) radiative transfer computations, performed in a vertically stratified, horizontally inhomogeneous, and realistic ocean-atmosphere system, are shown to accurately simulate the shadow-induced relative percent errors affecting the radiance and irradiance data profiles acquired close to an oceanographic tower. Multiparameter optical data processing has provided adequate representation of experimental uncertainties, allowing consistent comparison with simulations. The more detailed simulations at the subsurface depth appear to be essentially equivalent to those obtained assuming a simplified ocean-atmosphere system, except in highly stratified waters. MC computations performed in the simplified system can therefore be assumed to accurately simulate the optical measurements conducted under more complex sampling conditions (i.e., within waters presenting at most moderate stratification). A previously reported correction scheme, based on the simplified MC simulations and developed for subsurface shadow-removal processing of in-water optical data taken close to the investigated oceanographic tower, is then adequately validated under most experimental conditions. 
It appears feasible to generalize the present tower-specific approach to solve other optical sensor shadowing problems pertaining to differently shaped deployment platforms, and also including surrounding structures and instrument casings.

  12. [Spanish validation of the Boston Carpal Tunnel Questionnaire].

    PubMed

    Oteo-Álvaro, Ángel; Marín, María T; Matas, José A; Vaquero, Javier

    2016-03-18

    To describe the process of cultural adaptation and validation of the Boston Carpal Tunnel Questionnaire (BCTQ), which measures symptom intensity, functional status, and quality of life in carpal tunnel syndrome patients, and to report the psychometric properties of this version. A three-member expert panel supervised the adaptation process. After translation, review, and back-translation of the original instrument, a new Spanish version was obtained and administered to two patient samples: a pilot sample of 20 patients for assessing comprehension, and a 90-patient sample for assessing structural validity (factor analysis and reliability), construct validity, and sensitivity to change. A re-test measurement was carried out in 21 patients. Follow-up was accomplished in 40 patients. The questionnaire was well accepted by all participants. A ceiling effect was observed for 3 items. Reliability was very good; internal consistency: αS = 0.91 and αF = 0.87; test-retest stability: rS = 0.939 and rF = 0.986. Both subscales fitted a general dimension. The subscales correlated with dynamometer measurements (rS = 0.77 and rF = 0.75) and were shown to be related to abnormal 2-point discrimination, muscle atrophy, and the level of electromyographic deterioration. Scores correlated appropriately with other validated instruments: the Douleur Neuropathique 4 questions and the Brief Pain Inventory. The BCTQ demonstrated sensitivity to clinical changes, with large effect sizes (dS = -3.3 and dF = -1.9). The Spanish version of the BCTQ shows good psychometric properties warranting its use in clinical settings. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  13. Construction and validation of a measure of integrative well-being in seven languages: The Pemberton Happiness Index

    PubMed Central

    2013-01-01

    Purpose We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that possibly happened the day before); the sum of these items produces a combined well-being index. Methods A distinctive characteristic of this study is that to construct the scale, an initial pool of items, covering the remembered and experienced well-being domains, were subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Conclusions Given the PHI’s good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities. PMID:23607679

  14. Validation and standardization of the Generalized Anxiety Disorder Screener (GAD-7) in the general population.

    PubMed

    Löwe, Bernd; Decker, Oliver; Müller, Stefanie; Brähler, Elmar; Schellberg, Dieter; Herzog, Wolfgang; Herzberg, Philipp Yorck

    2008-03-01

    The 7-item Generalized Anxiety Disorder Scale (GAD-7) is a practical self-report anxiety questionnaire that proved valid in primary care. However, the GAD-7 was not yet validated in the general population and thus far, normative data are not available. To investigate reliability, construct validity, and factorial validity of the GAD-7 in the general population and to generate normative data. Nationally representative face-to-face household survey conducted in Germany between May 5 and June 8, 2006. Five thousand thirty subjects (53.6% female) with a mean age (SD) of 48.4 (18.0) years. The survey questionnaire included the GAD-7, the 2-item depression module from the Patient Health Questionnaire (PHQ-2), the Rosenberg Self-Esteem Scale, and demographic characteristics. Confirmatory factor analyses substantiated the 1-dimensional structure of the GAD-7 and its factorial invariance for gender and age. Internal consistency was identical across all subgroups (alpha = 0.89). Intercorrelations with the PHQ-2 and the Rosenberg Self-Esteem Scale were r = 0.64 (P < 0.001) and r = -0.43 (P < 0.001), respectively. As expected, women had significantly higher mean (SD) GAD-7 anxiety scores compared with men [3.2 (3.5) vs. 2.7 (3.2); P < 0.001]. Normative data for the GAD-7 were generated for both genders and different age levels. Approximately 5% of subjects had GAD-7 scores of 10 or greater, and 1% had GAD-7 scores of 15 or greater. Evidence supports reliability and validity of the GAD-7 as a measure of anxiety in the general population. The normative data provided in this study can be used to compare a subject's GAD-7 score with those determined from a general population reference group.
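    The scoring behind the cut-offs mentioned above (scores of 10 or greater, 15 or greater) can be sketched directly: the GAD-7 sums seven items, each scored 0-3, into a 0-21 total. The respondent's item values below are invented, and the band labels are illustrative rather than the study's terminology.

```python
# Sketch of GAD-7 scoring: seven items scored 0-3, summed to 0-21.
# The >=10 and >=15 reference points come from the abstract above;
# band names and the example responses are illustrative assumptions.
def gad7_score(items):
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    return sum(items)

def severity_band(score):
    if score >= 15:
        return "severe"
    if score >= 10:
        return "moderate-to-severe"
    return "below cut-off"

responses = [2, 1, 2, 2, 1, 1, 2]  # hypothetical respondent
s = gad7_score(responses)
print(s, severity_band(s))  # prints: 11 moderate-to-severe
```

    Against the normative data reported above, a total of 10 or more would place a respondent in roughly the top 5% of the general population sample.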

  15. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing, and run-time monitoring. Describing the behavior is characterized as a learning process in which the set of inputs is mapped into an appropriate transform space such that general patterns can be easily characterized. The learning algorithm must chose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.
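The idea of discovering behavioral assertions from input/output examples can be illustrated generically. This is not the paper's algorithm; it is a minimal sketch in which a small library of assumed candidate patterns is tested against examples, and the survivors serve as assertions for validation or run-time monitoring.

```python
# Generic illustration (not the paper's algorithm): keep every candidate
# behavioral pattern that is consistent with all input/output examples.
examples = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5]), ([], [])]

candidates = {
    "output is input sorted":   lambda i, o: o == sorted(i),
    "output is input reversed": lambda i, o: o == list(reversed(i)),
    "length is preserved":      lambda i, o: len(i) == len(o),
}

# Surviving patterns generalize the observed behavior and can be used
# as assertions for validation, testing, or run-time monitoring.
learned = [name for name, check in candidates.items()
           if all(check(i, o) for i, o in examples)]
```

Here the "reversed" hypothesis is refuted by the first example, while the sorting and length-preservation patterns survive as learned assertions.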

  16. [Preventable drug-related morbidity: determining valid indicators for primary care in Portugal].

    PubMed

    Guerreiro, Mara Pereira; Cantrill, Judith A; Martins, Ana Paula

    2007-01-01

    Preventable drug-related morbidity (PDRM) indicators are operational measures of therapeutic risk management. These clinical indicators, which cover a wide range of drugs, combine process and outcome in the same instrument. They were developed in the US and have been validated for primary care settings in the US, UK and Canada. This study is part of a research programme; it aimed to determine a valid set of PDRM indicators for adult patients in primary care in Portugal. Face validity of 61 US and UK-derived indicators translated into Portuguese was preliminarily determined by means of a postal questionnaire using a purposive sample of four Portuguese pharmacists with different backgrounds. Preliminary content validity of indicators approved in the previous stage was determined by cross-checking each definition of PDRM with standard drug information sources in Portugal. Face and content validity of indicators yielded by preliminary work were then established by a 37-member expert panel (20 community pharmacists and 17 general practitioners) using a two-round Delphi survey. Data were analysed using SPSS release 11.5. Nineteen indicators were ruled out in preliminary validation. Changes were made in the content of eight of the remaining 42 indicators; these were related to differences in the drugs being marketed and patterns of drug monitoring between countries. Thirty-five indicators were consensus approved as PDRM for adult patients in Portuguese primary care by the Delphi panel.

  17. Poissonian steady states: from stationary densities to stationary intensities.

    PubMed

    Eliazar, Iddo

    2012-10-01

    Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.
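The central claim above, that an ensemble of independent motions with a common stationary law exhibits Poissonian occupancy statistics, can be illustrated with a seeded Monte Carlo sketch. This is not the paper's methodology; the geometric stationary density, ensemble size, and threshold are all illustrative assumptions. The Poissonian signature checked is that the occupancy count's variance is close to its mean (Fano factor near 1).

```python
import random
import statistics

random.seed(42)

N = 400   # ensemble size (illustrative)
q = 0.5   # parameter of the assumed geometric stationary density

def draw_state():
    """Draw one particle's state from the assumed stationary density
    pi(k) = q * (1 - q)**k on k = 0, 1, 2, ..."""
    k = 0
    while random.random() > q:
        k += 1
    return k

# Occupancy of the low-probability region {k >= 5} across repeated
# snapshots of the ensemble; P(k >= 5) = (1 - q)**5 ~ 0.031, so the
# expected count is about N * 0.031 ~ 12.5.
counts = [sum(1 for _ in range(N) if draw_state() >= 5) for _ in range(3000)]

m = statistics.mean(counts)
v = statistics.pvariance(counts)
# Poissonian signature: variance approximately equal to mean.
```

With these parameters the sample Fano factor v/m stays close to 1, as a Poisson-distributed count requires.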

  18. Poissonian steady states: From stationary densities to stationary intensities

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.

  19. Heider balance, asymmetric ties, and gender segregation

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.; del Castillo-Mussot, Marcelo; Hernández-Ramírez, Eric; Naumis, Gerardo G.; Kułakowski, Krzysztof

    2015-12-01

    To remove a cognitive dissonance in interpersonal relations, people tend to divide their acquaintances into friendly and hostile parts, both groups internally friendly and mutually hostile. This process is modeled as an evolution toward the Heider balance. A set of differential equations have been proposed and validated (Kułakowski et al., 2005) to model the Heider dynamics of this social and psychological process. Here we generalize the model by including the initial asymmetry of the interpersonal relations and the direct reciprocity effect which removes this asymmetry. Our model is applied to the data on enmity and friendship in 37 school classes and 4 groups of teachers in México. For each class, a stable balanced partition is obtained into two groups. The gender structure of the groups reveals stronger gender segregation in younger classes, i.e. of age below 12 years, a fact consistent with other general empirical results.

  20. Acquisition of background and technical information and class trip planning

    NASA Technical Reports Server (NTRS)

    Mackinnon, R. M.; Wake, W. H.

    1981-01-01

    Instructors who are very familiar with a study area, as well as those who are not, find the field trip information acquisition and planning process speeded and made more effective by organizing it in stages. The stage follow a deductive progression: from the associated context region, to the study area, to the specific sample window sites, and from generalized background information on the study region to specific technical data on the environmental and human use systems to be interpreted at each site. On the class trip and in the follow up laboratory, the learning/interpretive process are at first deductive in applying previously learned information and skills to analysis of the study site, then inductive in reading and interpreting the landscape, imagery, and maps of the site, correlating them with information of other samples sites and building valid generalizations about the larger study area, its context region, and other (similar and/or contrasting) regions.

  1. From cognition to the system: developing a multilevel taxonomy of patient safety in general practice.

    PubMed

    Kostopoulou, O

    The paper describes the process of developing a taxonomy of patient safety in general practice. The methodologies employed included fieldwork, task analysis and confidential reporting of patient-safety events in five West Midlands practices. Reported events were traced back to their root causes and contributing factors. The resulting taxonomy is based on a theoretical model of human cognition, includes multiple levels of classification to reflect the chain of causation and considers affective and physiological influences on performance. Events are classified at three levels. At level one, the information-processing model of cognition is used to classify errors. At level two, immediate causes are identified, internal and external to the individual. At level three, more remote causal factors are classified as either 'work organization' or 'technical' with subcategories. The properties of the taxonomy (validity, reliability, comprehensiveness) as well as its usability and acceptability remain to be tested with potential users.

  2. Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals and General Validation Principles

    EPA Pesticide Factsheets

    This document was developed by the EPA to provide guidance to staff and managers regarding the EDSP universe of chemicals and general validation principles for consideration of computational toxicology tools for chemical prioritization.

  3. Beating Landauer's Bound: Tradeoff between Accuracy and Heat Dissipation

    NASA Astrophysics Data System (ADS)

    Talukdar, Saurav; Bhaban, Shreyas; Salapaka, Murti

    Landauer's Principle states that erasing one bit of stored information is necessarily accompanied by heat dissipation of at least kb T ln 2 per bit. However, this is true only if the erasure process is always successful. We demonstrate that if the erasure process has a success probability p, the minimum heat dissipation per bit is given by kb T(p ln p + (1 - p) ln (1 - p) + ln 2), referred to as the Generalized Landauer Bound, which equals kb T ln 2 if the erasure process is always successful and decreases to zero as p approaches 0.5. We present a model for a one-bit memory based on a Brownian particle in a double-well potential motivated by optical tweezers and achieve erasure by manipulation of the optical fields. The method uniquely provides a handle on the success proportion of the erasure. The thermodynamics framework for Langevin dynamics developed by Sekimoto is used to compute the heat dissipation in each realization of the erasure process. Using extensive Monte Carlo simulations, we demonstrate that the Landauer Bound of kb T ln 2 is violated by compromising on the success of the erasure process, while validating the existence of the Generalized Landauer Bound.
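The generalized bound quoted above is easy to evaluate numerically. A minimal sketch in units of kb T follows; the function name and interface are my own, not the paper's.

```python
import math

def generalized_landauer_bound(p):
    """Minimum heat per erased bit, in units of kb*T, for an erasure
    that succeeds with probability p (the bound quoted in the abstract):
    p*ln(p) + (1-p)*ln(1-p) + ln(2)."""
    if not 0.5 <= p <= 1.0:
        raise ValueError("success probability p must lie in [0.5, 1]")
    if p == 1.0:
        return math.log(2)  # classical Landauer bound, ln 2
    return p * math.log(p) + (1 - p) * math.log(1 - p) + math.log(2)
```

At p = 1 this recovers the classical bound ln 2; at p = 0.5 (a coin-flip erasure) the bound vanishes, matching the tradeoff the abstract describes.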

  4. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena

    2013-06-01

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
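The decoder translated into the SNN above is Kalman-filter-based. A scalar textbook Kalman filter sketches the predict/update cycle involved; this is a generic filter with assumed noise parameters, not the BMI decoder itself.

```python
# Generic scalar Kalman filter (illustrative; not the paper's decoder).
# q: assumed process-noise variance, r: assumed measurement-noise variance.
def kalman_1d(z_seq, q=1e-3, r=0.1):
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in z_seq:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update: blend prediction with measurement
        p *= (1 - k)         # posterior variance shrinks
        estimates.append(x)
    return estimates
```

Fed a constant noisy signal, the estimate converges toward the true value while the gain settles to a steady state; the NEF maps exactly this kind of linear update onto populations of spiking neurons.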

  5. The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory

    NASA Technical Reports Server (NTRS)

    Quang, Nguyen Vinh

    1996-01-01

    For the first time, the general necessary condition for the validity of Dirac's method is explicitly established from the natural requirements of the successive approximation. It is proved that the conception of 'the transition probability per unit time' is not valid. The 'super-platinium rules' for calculating the transition probability are derived for the case of an arbitrarily strong time-independent perturbation.

  6. On the Myth and the Reality of the Temporal Validity Degradation of General Mental Ability Test Scores

    ERIC Educational Resources Information Center

    Reeve, Charlie L.; Bonaccio, Silvia

    2011-01-01

    Claims of changes in the validity coefficients associated with general mental ability (GMA) tests due to the passage of time (i.e., temporal validity degradation) have been the focus of an on-going debate in applied psychology. To evaluate whether and, if so, under what conditions this degradation may occur, we integrate evidence from multiple…

  7. Linguistic Adaptation and Psychometric Properties of Tamil Version of General Oral Health Assessment Index-Tml.

    PubMed

    Appukuttan, D P; Vinayagavel, M; Balasundaram, A; Damodaran, L K; Shivaraman, P; Gunasshegaran, K

    2015-01-01

    Oral health has an impact on quality of life; validating a Tamil version of the General Oral Health Assessment Index would therefore make it available as a valuable research tool for the Tamil-speaking population. In this study, we aimed to assess the psychometric properties of the translated Tamil version of the General Oral Health Assessment Index (GOHAI-Tml). Linguistic adaptation involved a forward and backward blind translation process. Reliability was analyzed using test-retest, Cronbach alpha, and split-half reliability. Inter-item and item-total correlation were evaluated using Spearman rank correlation. Convenience sampling was done, and 265 consecutive patients aged 20-70 years attending the outpatient department were recruited. Subjects were requested to fill in a self-reporting questionnaire along with the Tamil GOHAI version. Clinical examination was done on the same visit. Concurrent validity was measured by assessing the relationship between GOHAI scores and self-perceived oral health and general health status, satisfaction with oral health, need for dental treatment and esthetic satisfaction. Discriminant validity was evaluated by comparing the GOHAI scores with the objectively assessed clinical parameters. Exploratory factor analysis was done to examine the factor structure. The mean GOHAI-Tml score was 52.7 (6.8, range 22-60, median 54). The mean number of negative impacts was 2 (2.4, range 0-11, median 1). The Spearman rank correlation for test-retest ranged from 0.8 to 0.9 (P < 0.001) for all the 12 items between visits. The Cronbach alpha for 265 samples was 0.8, suggesting good internal consistency and homogeneity between items. Item-scale correlation ranged from 0.4 to 0.8 (P < 0.001). Concurrent and discriminant validity was established. Principal component analysis resulted in extraction of four factors which together accounted for 66.4% (7.9/12) of the variance.
GOHAI-Tml has shown acceptable psychometric properties, so that it can be used as an efficient tool in identifying the impact of oral health on quality of life among the Tamil speaking population.
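The test-retest figures above are Spearman rank correlations. A self-contained sketch of that statistic (Pearson correlation of average ranks, with ties handled) may clarify what was computed; the implementation and any data fed to it are illustrative, not from the study.

```python
# Spearman rank correlation, stdlib-only (illustrative implementation).
def rank(xs):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # mean rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any monotone agreement between two visits yields rho = 1 regardless of scale, which is why the statistic suits ordinal questionnaire scores.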

  8. Auditing radiation sterilization facilities

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey A.

    The diversity of radiation sterilization systems available today places renewed emphasis on the need for thorough Quality Assurance audits of these facilities. Evaluating compliance with Good Manufacturing Practices is an obvious requirement, but an effective audit must also evaluate installation and performance qualification programs (validation), and process control and monitoring procedures in detail. The present paper describes general standards that radiation sterilization operations should meet in each of these key areas, and provides basic guidance for conducting QA audits of these facilities.

  9. The Importance of Time and Frequency Reference in Quantum Astronomy and Quantum Communications

    DTIC Science & Technology

    2007-11-01

    simulator, but the same general results are valid for optical fiber and also different quantum state transmission technologies (i.e. Entangled Photons ... protocols [6]). The Matlab simulation starts from a sequence of pulses of duration Ton; the number of photons per pulse has been implemented like a ... astrophysical emission mechanisms or scattering processes by measuring the statistics of the arrival time of each incoming photon. This line of research will be

  10. Validation of landsurface processes in the AMIP models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, T J

    The Atmospheric Model Intercomparison Project (AMIP) is a commonly accepted protocol for testing the performance of the world's atmospheric general circulation models (AGCMs) under common specifications of radiative forcings (in solar constant and carbon dioxide concentration) and observed ocean boundary conditions (Gates 1992, Gates et al. 1999). From the standpoint of land-surface specialists, the AMIP affords an opportunity to investigate the behaviors of a wide variety of land-surface schemes (LSS) that are coupled to their ''native'' AGCMs (Phillips et al. 1995, Phillips 1999). In principle, therefore, the AMIP permits consideration of an overarching question: ''To what extent does an AGCM's performance in simulating continental climate depend on the representations of land-surface processes by the embedded LSS?'' There are, of course, some formidable obstacles to satisfactorily addressing this question. First, there is the dilemma of how to effectively validate simulation performance, given the present dearth of global land-surface data sets. Even if this data problem were to be alleviated, some inherent methodological difficulties would remain: in the context of the AMIP, it is not possible to validate a given LSS per se, since the associated land-surface climate simulation is a product of the coupled AGCM/LSS system. Moreover, aside from the intrinsic differences in LSS across the AMIP models, the varied representations of land-surface characteristics (e.g. vegetation properties, surface albedos and roughnesses, etc.) and related variations in land-surface forcings further complicate such an attribution process. Nevertheless, it may be possible to develop validation methodologies/statistics that are sufficiently penetrating to reveal ''signatures'' of particular LSS representations (e.g. ''bucket'' vs more complex parameterizations of hydrology) in the AMIP land-surface simulations.

  11. New approach to the design of Schottky barrier diodes for THz mixers

    NASA Technical Reports Server (NTRS)

    Jelenski, A.; Grueb, A.; Krozer, V.; Hartnagel, H. L.

    1992-01-01

    Near-ideal GaAs Schottky barrier diodes especially designed for mixing applications in the THz frequency range are presented. A diode fabrication process for submicron diodes with near-ideal electrical and noise characteristics is described. This process is based on the electrolytic pulse etching of GaAs in combination with an in-situ platinum plating for the formation of the Schottky contacts. Schottky barrier diodes with a diameter of 1 micron fabricated by the process have already shown excellent results in a 650 GHz waveguide mixer at room temperature. A conversion loss of 7.5 dB and a mixer noise temperature of less than 2000 K have been obtained at an intermediate frequency of 4 GHz. The optimization of the diode structure and the technology was possible due to the development of a generalized Schottky barrier diode model which is valid also at high current densities. The common diode design and optimization are discussed on the basis of the classical theory. However, the conventional formulas are valid only in a limited forward-bias range corresponding to currents much smaller than the operating currents under submillimeter mixing conditions. The generalized new model takes into account not only the phenomena occurring at the junction such as current-dependent recombination and drift/diffusion velocities, but also mobility and electron temperature variations in the undepleted epi-layer. Calculated diode I/V and noise characteristics are in excellent agreement with the measured values. Thus, the model offers the possibility of optimizing the diode structure and predicting the diode performance under mixing conditions at THz frequencies.
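The classical theory the abstract refers to is the textbook thermionic-emission I/V relation, I = Is (exp(V / (n VT)) - 1), which the authors note breaks down at the high current densities of THz mixing. A minimal sketch with illustrative (assumed) parameter values:

```python
import math

# Classical (low-bias) Schottky diode I-V sketch. The saturation current,
# ideality factor, and temperature below are illustrative assumptions,
# not values from the paper.
Q = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def diode_current(v, i_s=1e-14, n=1.1, t=300.0):
    """Thermionic-emission I-V: I = Is * (exp(V / (n * kT/q)) - 1)."""
    vt = KB * t / Q   # thermal voltage, ~25.9 mV at 300 K
    return i_s * (math.exp(v / (n * vt)) - 1.0)
```

The current is zero at zero bias, grows exponentially in forward bias, and saturates near -Is in reverse bias; it is precisely this exponential region's limited validity at mixer operating currents that motivated the paper's generalized model.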

  12. Using Speech Recall in Hearing Aid Fitting and Outcome Evaluation Under Ecological Test Conditions.

    PubMed

    Lunner, Thomas; Rudner, Mary; Rosenbom, Tove; Ågren, Jessica; Ng, Elaine Hoi Ning

    2016-01-01

    In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.

  13. Development and validation of an instrument for rapidly assessing symptoms: the general symptom distress scale.

    PubMed

    Badger, Terry A; Segrin, Chris; Meek, Paula

    2011-03-01

    Symptom assessment has increasingly focused on the evaluation of total symptom distress or burden rather than assessing only individual symptoms. The challenge for clinicians and researchers alike is to assess symptoms, and to determine the symptom distress associated with those symptoms and the patient's capacity for symptom management, without a lengthy and burdensome assessment process. The objective of this article was to discuss the psychometric evaluation of a brief general symptom distress scale (GSDS) developed to assess specific symptoms and how they rank in relation to each other and the overall symptom distress associated with the symptom schema, and to provide an assessment of how well or poorly that symptom schema is managed. Results from a pilot study on the initial development of the GSDS with 76 hospitalized patients are presented, followed by a more complete psychometric evaluation of the GSDS using three samples of cancer patients (n=190) and their social network members, called partners in these studies (n=94). Descriptive statistics were used to describe the GSDS symptoms, symptom distress, and symptom management. Point-biserial correlations indexed the associations between dichotomous symptoms and continuous measures, and conditional probabilities were used to illustrate the substantial comorbidities of this sample. Internal consistency was examined using the KR-20 coefficient, and test-retest reliability was examined. Construct validity and predictive validity also were examined. The GSDS demonstrated satisfactory internal consistency and test-retest reliability, and good construct validity and predictive validity. The total score on the GSDS, symptom distress, and symptom management correlated significantly with related constructs of depression, positive and negative affect, and general health.
The GSDS was able to demonstrate its ability to distinguish between those with or without chronic illness, and was able to significantly predict scores on criterion measures such as depression. Collectively, these results suggest that the GSDS is a straightforward and useful instrument for rapidly assessing symptoms that can disrupt health-related quality of life. Copyright © 2011 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
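The KR-20 coefficient used above for internal consistency has a simple closed form for dichotomous (0/1) items: KR-20 = (k/(k-1)) * (1 - sum(p_j * q_j) / var_total). A stdlib sketch under assumed toy data (not the study's):

```python
# KR-20 internal-consistency sketch for dichotomous (0/1) items
# (illustrative; uses population variance of the total scores).
def kr20(responses):
    """responses: list of respondent rows, each a list of 0/1 item scores.
    Assumes at least two items and non-constant total scores."""
    k = len(responses[0])                 # number of items
    n = len(responses)                    # number of respondents
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n   # proportion endorsing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```

Perfectly consistent response patterns drive the coefficient toward 1, while inconsistent ones pull it down, which is why KR-20 serves as the dichotomous-item analogue of Cronbach's alpha.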

  14. Landscape scale estimation of soil carbon stock using 3D modelling.

    PubMed

    Veronesi, F; Corstanje, R; Mayr, T

    2014-07-15

    Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This, in part, is due to the fact that soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km². We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well-established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R² of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are between ± 5% of soil C. This indicates a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. Copyright © 2014 Elsevier B.V. All rights reserved.
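The validation statistic reported above, R², is one minus the ratio of the residual to the total sum of squares between observations and (cross-validation) predictions. A minimal sketch with illustrative numbers, not the study's data:

```python
# Coefficient of determination between observed values and model
# predictions (illustrative data; not from the study).
def r_squared(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

obs = [2.0, 3.5, 4.0, 5.5, 7.0]    # hypothetical soil C observations
pred = [2.2, 3.1, 4.3, 5.2, 6.8]   # hypothetical cross-validation predictions
```

Perfect predictions give R² = 1; a model no better than the observed mean gives 0, which makes the 36-44% cross-validation values above directly interpretable as the fraction of variance explained.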

  15. Communication: GAIMS—Generalized Ab Initio Multiple Spawning for both internal conversion and intersystem crossing processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curchod, Basile F. E.; Martínez, Todd J., E-mail: toddjmartinez@gmail.com; SLAC National Accelerator Laboratory, Menlo Park, California 94025

    2016-03-14

    Full multiple spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio multiple spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. The results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.

  16. Communication: GAIMS—generalized ab initio multiple spawning for both internal conversion and intersystem crossing processes

    DOE PAGES

    Curchod, Basile F. E.; Rauer, Clemens; Marquetand, Philipp; ...

    2016-03-11

    Full Multiple Spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio Multiple Spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. Lastly, the results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.

  17. Quantum Information Processing with Large Nuclear Spins in GaAs Semiconductors

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael N.; Loss, Daniel; Poggio, M.; Awschalom, D. D.

    2003-03-01

    We propose an implementation for quantum information processing based on coherent manipulations of nuclear spins I=3/2 in GaAs semiconductors. We describe theoretically an NMR method which involves multiphoton transitions and which exploits the nonequidistance of nuclear spin levels due to quadrupolar splittings. Starting from known spin anisotropies we derive effective Hamiltonians in a generalized rotating frame, valid for arbitrary I, which allow us to describe the nonperturbative time evolution of spin states generated by magnetic rf fields. We identify an experimentally observable regime for multiphoton Rabi oscillations. In the nonlinear regime, we find Berry phase interference. Ref: PRL 89, 207601 (2002).

  18. Development and validation of a brief general and sports nutrition knowledge questionnaire and assessment of athletes' nutrition knowledge.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2018-01-01

    The Nutrition for Sport Knowledge Questionnaire (NSKQ) is an 89-item, valid and reliable measure of sports nutrition knowledge (SNK). It takes 25 min to complete and has been subject to low completion and response rates. The aim of this study was to develop an abridged version of the NSKQ (A-NSKQ) and compare response rates, completion rates and NK scores of the NSKQ and A-NSKQ. Rasch analysis was used for the questionnaire validation. The sample (n = 181) was the same sample that was used in the validation of the full-length NSKQ. Construct validity was assessed using the known-group comparisons method. Temporal stability was assessed using the test-retest reliability method. NK assessment was cross-sectional; responses were collected electronically from members of one non-elite Australian football (AF) and netball club, using Qualtrics Software (Qualtrics, Provo, UT). Validation - The A-NSKQ has 37 items that assess general (n = 17) and sports (n = 20) nutrition knowledge (NK). Both sections are unidimensional (Perc5% = 2.84% [general] and 3.41% [sport]). Both sections fit the Rasch Model (overall-interaction statistic mean (SD) = -0.15 ± 0.96 [general] and 0.22 ± 1.11 [sport]; overall-person interaction statistic mean (SD) = -0.11 ± 0.61 [general] and 0.08 ± 0.73 [sport]; Chi-Square probability = 0.308 [general] and 0.283 [sport]). Test-retest reliability was confirmed (r = 0.8, P < 0.001 [general] and r = 0.7, P < 0.001 [sport]). Construct validity was demonstrated (nutrition students = 77% versus non-nutrition students = 60%, P < 0.001 [general]; nutrition students = 60% versus non-nutrition students = 40%, P < 0.001 [sport]). Assessment of NK - 177 usable survey responses were returned. Response rates were low (7%) but completion rates were high (85%). NK scores on the A-NSKQ (46%) are comparable to results obtained in similar cohorts on the NSKQ (49%).
The A-NSKQ took on average 12 min to complete, which is around half the time taken to complete the NSKQ (25 min). The A-NSKQ is a valid and reliable, brief questionnaire designed to assess general NK (GNK) and SNK.
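
    The test-retest figures above (r = 0.8 general, r = 0.7 sport) are Pearson correlations between scores from two administrations. A minimal sketch on synthetic data (all numbers hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NK scores (% correct) for 50 respondents at two time points.
time1 = rng.uniform(30, 90, size=50)
time2 = time1 + rng.normal(0, 8, size=50)  # retest tracks the first sitting

# Temporal stability as a Pearson correlation between administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```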

  19. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to automate the feature selection problem in brain-computer interfaces (BCI). To automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the decision-tree-based selection process searched the feature space and automatically selected optimal features. Because EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. To test the validity of the proposed method, we applied the decision-tree-based EEG feature selection method to BCI Competition II dataset Ia, and the experiment showed encouraging results.
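
    The pipeline described (PCA feature extraction, decision-tree selection, SVM classification) can be sketched with scikit-learn. The synthetic dataset and every parameter below are illustrative stand-ins, not the authors' BCI Competition II setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for epoched EEG features: 300 trials x 64 features, 2 classes.
X, y = make_classification(n_samples=300, n_features=64, n_informative=8,
                           random_state=0)

# Step 1: PCA-based feature extraction.
X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

# Step 2: a decision tree searches the feature space; keep the components
# the tree actually splits on (importance > 0).
tree = DecisionTreeClassifier(random_state=0).fit(X_pca, y)
selected = np.flatnonzero(tree.feature_importances_ > 0)

# Step 3: an SVM classifies using only the selected components.
X_tr, X_te, y_tr, y_te = train_test_split(X_pca[:, selected], y,
                                          test_size=0.3, random_state=0)
acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
print(f"{selected.size} components selected, accuracy = {acc:.2f}")
```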

  20. Application and Evaluation of Independent Component Analysis Methods to Generalized Seizure Disorder Activities Exhibited in the Brain.

    PubMed

    George, S Thomas; Balakrishnan, R; Johnson, J Stanly; Jayakumar, J

    2017-07-01

    EEG records the spontaneous electrical activity of the brain using multiple electrodes placed on the scalp, and it provides a wealth of information related to brain function. Nevertheless, the signals from the electrodes cannot be applied directly to a diagnostic tool such as brain mapping, because they undergo a "mixing" process caused by the volume conduction effect in the scalp. A pervasive problem in neuroscience is determining which regions of the brain are active, given voltage measurements at the scalp. Consequently, there has been a surge of interest in the biosignal processing community in investigating the mixing and unmixing process to identify the underlying active sources. According to the assumptions of independent component analysis (ICA) algorithms, the mixture obtained at the scalp can be closely approximated by a linear combination of the "actual" EEG signals emanating from the underlying sources of electrical activity in the brain. As a consequence, using these well-known ICA techniques to preprocess EEG signals prior to clinical application could support the development of diagnostic tools such as quantitative EEG, which in turn can give neurologists noninvasive access to patient-specific cortical activity and help in treating neuropathologies such as seizure disorders. Popular and proven ICA schemes from the literature (Infomax, JADE, and SOBI) were selected and applied to generalized seizure disorder samples using the EEGLAB toolbox in the MATLAB environment to assess their usefulness in source separation, and the results were validated by an expert neurologist for clinical relevance in terms of pathologies of brain function. The performance of the Infomax method was found to be superior to the other ICA schemes applied to the EEG, as established by the expert neurologist's validation for generalized seizure and its clinical correlation. The results are encouraging for further studies toward developing useful brain mapping tools using ICA methods.
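
    The linear mixing and unmixing at the heart of ICA can be illustrated in a few lines. The sketch below uses scikit-learn's FastICA as a stand-in for the Infomax, JADE, and SOBI implementations in EEGLAB, with two toy waveforms in place of real cortical sources:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical source activities.
t = np.linspace(0, 1, 1000)
sources = np.c_[np.sin(2 * np.pi * 10 * t),          # 10 Hz sinusoidal source
                np.sign(np.sin(2 * np.pi * 3 * t))]  # 3 Hz square-wave source

# Volume conduction as a linear mixing matrix: what the electrodes record.
mixing = np.array([[0.7, 0.3], [0.4, 0.6]])
scalp = sources @ mixing.T

# ICA recovers the sources up to order, sign and scale.
recovered = FastICA(n_components=2, random_state=0).fit_transform(scalp)
print(recovered.shape)
```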

  1. A generalized architecture of quantum secure direct communication for N disjointed users with authentication

    PubMed Central

    Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.

    2015-01-01

    In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. N − 1 disjointed users u1, u2, …, uN−1 can transmit a secret message of classical bits to a remote user uN by exploiting dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for direct transmission of the secret message. In the partial cooperation process, N − 1 users transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, in the full cooperation process, N − 1 users and a remote user uN can communicate without an established quantum channel among them. The security analysis of the authentication and communication processes against many types of attacks proved that an attacker cannot gain any information while intercepting either process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement. PMID:26577473
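
    The dense-coding primitive the protocol builds on, two classical bits carried by one qubit of a shared EPR pair, can be checked with a small state-vector simulation. This is a minimal two-party sketch, not the paper's N-user protocol or its authentication scheme:

```python
import numpy as np

# Pauli operations Alice applies to her half of the EPR pair to encode 2 bits.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
encode = {"00": I, "01": X, "10": Z, "11": X @ Z}

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared EPR pair |Phi+>

# The four Bell states Bob's joint measurement distinguishes.
bell = {"00": np.array([1, 0, 0, 1]) / np.sqrt(2),
        "01": np.array([0, 1, 1, 0]) / np.sqrt(2),
        "10": np.array([1, 0, 0, -1]) / np.sqrt(2),
        "11": np.array([0, 1, -1, 0]) / np.sqrt(2)}

def send(bits):
    # Alice acts on her qubit only; Bob measures both qubits in the Bell basis.
    state = np.kron(encode[bits], I) @ phi_plus
    return max(bell, key=lambda b: abs(bell[b] @ state))

for bits in sorted(encode):
    print(bits, "->", send(bits))   # each 2-bit message is recovered intact
```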

  2. The Perseverative Thinking Questionnaire (PTQ): validation of a content-independent measure of repetitive negative thinking.

    PubMed

    Ehring, Thomas; Zetsche, Ulrike; Weidacker, Kathrin; Wahl, Karina; Schönfeld, Sabine; Ehlers, Anke

    2011-06-01

    Repetitive negative thinking (RNT) has been found to be involved in the maintenance of several types of emotional problems and has therefore been suggested to be a transdiagnostic process. However, existing measures of RNT typically focus on a particular disorder-specific content. In this article, the preliminary validation of a content-independent self-report questionnaire of RNT is presented. The 15-item Perseverative Thinking Questionnaire was evaluated in two studies (total N = 1832), comprising non-clinical as well as clinical participants. Results of confirmatory factor analyses across samples supported a second-order model with one higher-order factor representing RNT in general and three lower-order factors representing (1) the core characteristics of RNT (repetitiveness, intrusiveness, difficulties with disengagement), (2) perceived unproductiveness of RNT and (3) RNT capturing mental capacity. High internal consistencies and high re-test reliability were found for the total scale and all three subscales. The validity of the Perseverative Thinking Questionnaire was supported by substantial correlations with existing measures of RNT and associations with symptom levels and clinical diagnoses of depression and anxiety. Results suggest the usefulness of the new measure for research into RNT as a transdiagnostic process. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Peer feedback for examiner quality assurance on MRCGP International South Asia: a mixed methods study.

    PubMed

    Perera, D P; Andrades, Marie; Wass, Val

    2017-12-08

    The International Membership Examination (MRCGP[INT]) of the Royal College of General Practitioners UK is a unique collaboration between four South Asian countries with diverse cultures, epidemiology, clinical facilities and resources. In this setting, good quality assurance is imperative to achieve acceptable standards of inter-rater reliability. This study aims to explore the process of peer feedback for examiner quality assurance with regard to the factors affecting the implementation and acceptance of the method. A sequential mixed methods approach was used, based on focus group discussions with examiners (n = 12) and clinical examination convenors who acted as peer reviewers (n = 4). A questionnaire based on the emerging themes and a literature review was then completed by 20 examiners at the subsequent OSCE exam. Qualitative data were analysed using an iterative reflexive process. Quantitative data were integrated by interpretive analysis looking for convergence, complementarity or dissonance. The qualitative data helped in understanding the issues and informed the development of the questionnaire. The quantitative data allowed further refining of the issues, wider sampling of examiners and giving voice to different perspectives. Examiners stated specifically that peer feedback gave an opportunity for discussion, standardisation of judgements and improved discriminatory ability. Interpersonal dynamics, hierarchy and the perceived validity of feedback were major factors influencing acceptance of feedback. Examiners desired increased transparency, accountability and the opportunity for equal partnership within the process. The process was stressful for examiners and reviewers; however, acceptance increased with increasing exposure to receiving feedback. The process could be refined to improve acceptability: scrupulous attention to the training and selection of those giving feedback would improve its perceived validity, and improved reviewer feedback skills would enable better interpersonal dynamics and a more equitable feedback process. It is important to highlight to examiners during training the role of quality assurance and peer feedback as tools for continuous improvement and the maintenance of standards. Examiner quality assurance using peer feedback was generally a successful and accepted process. The findings highlight areas for improvement and guide the path towards a model of feedback that is responsive to examiner views and cultural sensibilities.

  4. What buoyancy really is. A generalized Archimedes' principle for sedimentation and ultracentrifugation

    NASA Astrophysics Data System (ADS)

    Piazza, Roberto; Buzzaccaro, Stefano; Secchi, Eleonora; Parola, Alberto

    Particle settling is a pervasive process in nature, and centrifugation is a highly versatile separation technique. Yet the results of settling and ultracentrifugation experiments often appear to contradict the very law on which they are based: Archimedes' principle, arguably the oldest physical law. The purpose of this paper is to delve into the very roots of the concept of buoyancy by means of a combined experimental and theoretical study of sedimentation profiles in colloidal mixtures. Our analysis shows that the standard Archimedes' principle is only a limiting approximation, valid for mesoscopic particles settling in a molecular fluid, and we provide a general expression for the actual buoyancy force. This "generalized Archimedes' principle" accounts for unexpected effects, such as denser particles floating on top of a lighter fluid, which we in fact observe in our experiments.

  5. Involuntary orienting of attention to a sound desynchronizes the occipital alpha rhythm and improves visual perception.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2017-04-15

    Directing attention voluntarily to the location of a visual target results in an amplitude reduction (desynchronization) of the occipital alpha rhythm (8-14Hz), which is predictive of improved perceptual processing of the target. Here we investigated whether modulations of the occipital alpha rhythm triggered by the involuntary orienting of attention to a salient but spatially non-predictive sound would similarly influence perception of a subsequent visual target. Target discrimination was more accurate when a sound preceded the target at the same location (validly cued trials) than when the sound was on the side opposite to the target (invalidly cued trials). This behavioral effect was accompanied by a sound-induced desynchronization of the alpha rhythm over the lateral occipital scalp. The magnitude of alpha desynchronization over the hemisphere contralateral to the sound predicted correct discriminations of validly cued targets but not of invalidly cued targets. These results support the conclusion that cue-induced alpha desynchronization over the occipital cortex is a manifestation of a general priming mechanism that improves visual processing and that this mechanism can be activated either by the voluntary or involuntary orienting of attention. Further, the observed pattern of alpha modulations preceding correct and incorrect discriminations of valid and invalid targets suggests that involuntary orienting to the non-predictive sound has a rapid and purely facilitatory influence on processing targets on the cued side, with no inhibitory influence on targets on the opposite side. Copyright © 2017 Elsevier Inc. All rights reserved.
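
    The quantity being tracked, occipital alpha amplitude, reduces to a band-power computation; desynchronization shows up as a drop in this measure between conditions. A sketch with an illustrative synthetic channel (sampling rate and amplitudes hypothetical):

```python
import numpy as np

fs = 250                         # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)      # 2 s epoch
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Alpha-band (8-14 Hz) power from the FFT power spectrum.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
in_alpha = (freqs >= 8) & (freqs <= 14)
alpha_fraction = psd[in_alpha].sum() / psd[freqs > 0].sum()
print(f"alpha fraction of total power = {alpha_fraction:.2f}")
```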

  6. A case of misdiagnosis of mild cognitive impairment: The utility of symptom validity testing in an outpatient memory clinic.

    PubMed

    Roor, Jeroen J; Dandachi-FitzGerald, Brechje; Ponds, Rudolf W H M

    2016-01-01

    Noncredible symptom reports hinder the diagnostic process. This is especially true for medical conditions that rely on subjective reports of symptoms rather than objective measures. Mild cognitive impairment (MCI) primarily relies on subjective report, which makes it potentially susceptible to erroneous diagnosis. In this case report, we describe a 59-year-old female patient diagnosed with MCI 10 years previously. The patient was referred to the neurology department for re-examination by her general practitioner because of cognitive complaints and persistent fatigue. This case study used information from the medical file, a new magnetic resonance imaging brain scan, and a neuropsychological assessment. The current neuropsychological assessment, including symptom validity tests, clearly indicated noncredible test performance, thereby invalidating the obtained neuropsychological test data. We conclude that a blind spot for noncredible symptom reports existed in the previous diagnostic assessments. This case highlights the usefulness of formal symptom validity testing in the diagnostic assessment of MCI.

  7. Validation of the early childhood attitude toward women in science scale (ECWiSS): A pilot administration

    NASA Astrophysics Data System (ADS)

    Mulkey, Lynn M.

    The intention of this research was to measure attitudes of young children toward women scientists. A 27-item instrument, the Early Childhood Women in Science Scale (ECWiSS) was validated in a test case of the proposition that differential socialization predicts entry into the scientific talent pool. Estimates of internal consistency indicated that the scale is highly reliable. Known groups and correlates procedures, employed to determine the validity of the instrument, revealed that the scale is able to discriminate significant differences between groups and distinguishes three dimensions of attitude (role-specific self-concept, home-related sex-role conflict, and work-related sex-role conflict). Results of the analyses also confirmed the anticipated pattern of correlations with measures of another construct. The findings suggest the utility of the ECWiSS for measurement of early childhood attitudes in models of the ascriptive and/or meritocratic processes affecting recruitment to science and more generally in program and curriculum evaluation where attitude toward women in science is the construct of interest.

  8. Review of surface steam sterilization for validation purposes.

    PubMed

    van Doornmalen, Joost; Kopinga, Klaas

    2008-03-01

    Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either by indicators (chemical or biological) or physical measurements. The steam sterilization conditions are described in the literature. Starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative for indicator validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilizing processes.
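
    One standard physical criterion used when validating steam-sterilization cycles is the accumulated lethality F0, which re-expresses a measured temperature profile as equivalent minutes at 121.1 °C (with z = 10 °C). A sketch of that calculation; the temperature log is hypothetical:

```python
import numpy as np

def f0(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Accumulated lethality in equivalent minutes at t_ref (121.1 degC)."""
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(dt_min * 10.0 ** ((temps_c - t_ref) / z)))

# A plateau logged once per minute, exactly at the reference temperature.
print(f0([121.1] * 15, dt_min=1.0))   # -> 15.0 equivalent minutes

# 10 degC below reference contributes roughly a tenth of a minute per minute.
print(f0([111.1], dt_min=1.0))
```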

  9. Use of existing patient-reported outcome (PRO) instruments and their modification: the ISPOR Good Research Practices for Evaluating and Documenting Content Validity for the Use of Existing Instruments and Their Modification PRO Task Force Report.

    PubMed

    Rothman, Margaret; Burke, Laurie; Erickson, Pennifer; Leidy, Nancy Kline; Patrick, Donald L; Petrie, Charles D

    2009-01-01

    Patient-reported outcome (PRO) instruments are used to evaluate the effect of medical products on how patients feel or function. This article presents the results of an ISPOR task force convened to address good clinical research practices for the use of existing or modified PRO instruments to support medical product labeling claims. The focus of the article is on content validity, with specific reference to existing or modified PRO instruments, because of the importance of content validity in selecting or modifying an existing PRO instrument and the lack of consensus in the research community regarding best practices for establishing and documenting this measurement property. Topics addressed in the article include: definition and general description of content validity; PRO concept identification as the important first step in establishing content validity; instrument identification and the initial review process; key issues in qualitative methodology; and potential threats to content validity, with three case examples used to illustrate types of threats and how they might be resolved. A table of steps used to identify and evaluate an existing PRO instrument is provided, and figures are used to illustrate the meaning of content validity in relation to instrument development and evaluation. Results and recommendations: four important threats to content validity are identified: unclear conceptual match between the PRO instrument and the intended claim, lack of direct patient input into PRO item content from the target population in which the claim is desired, no evidence that the most relevant and important item content is contained in the instrument, and lack of documentation to support modifications to the PRO instrument. In some cases, threats to content validity in a specific application may be reduced through additional, well-documented qualitative studies that specifically address the issue of concern. Published evidence of the content validity of a PRO instrument for an intended application is often limited. Such evidence is, however, important for evaluating the adequacy of a PRO instrument for the intended application. This article provides an overview of the key issues involved in assessing and documenting content validity as it relates to the use of existing instruments in the drug approval process.

  10. Development and validation of the trait and state versions of the Post-Event Processing Inventory.

    PubMed

    Blackie, Rebecca A; Kocovski, Nancy L

    2017-03-01

    Post-event processing (PEP) refers to negative and prolonged rumination following anxiety-provoking social situations. Although there are scales to assess PEP, they are situation-specific, some targeting only public-speaking situations. Furthermore, there are no trait measures to assess the tendency to engage in PEP. The purpose of this research was to create a new measure of PEP, the Post-Event Processing Inventory (PEPI), which can be employed following all types of social situations and includes both trait and state forms. Over two studies (study 1, N = 220; study 2, N = 199), we explored and confirmed the factor structure of the scale with student samples. For each form of the scale, we found and confirmed that a higher-order, general PEP factor could be inferred from three sub-domains (intensity, frequency, and self-judgment). We also found preliminary evidence for the convergent, concurrent, discriminant/divergent, incremental, and predictive validity for each version of the scale. Both forms of the scale demonstrated excellent internal consistency and the trait form had excellent two-week test-retest reliability. Given the utility and versatility of the scale, the PEPI may provide a useful alternative to existing measures of PEP and rumination.

  11. Development and analysis of an instrument to assess student understanding of GOB chemistry knowledge relevant to clinical nursing practice.

    PubMed

    Brown, Corina E; Hyslop, Richard M; Barbera, Jack

    2015-01-01

    The General, Organic, and Biological Chemistry Knowledge Assessment (GOB-CKA) is a multiple-choice instrument designed to assess students' understanding of the chemistry topics deemed important to clinical nursing practice. This manuscript describes the development process of the individual items along with a psychometric evaluation of the final version of the items and instrument. In developing items for the GOB-CKA, essential topics were identified through a series of expert interviews (with practicing nurses, nurse educators, and GOB chemistry instructors) and confirmed through a national survey. Individual items were tested in qualitative studies with students from the target population for clarity and wording. Data from pilot and beta studies were used to evaluate each item and narrow the total item count to 45. A psychometric analysis performed on data from the 45-item final version was used to provide evidence of validity and reliability. The final version of the instrument has a Cronbach's alpha value of 0.76. Feedback from an expert panel provided evidence of face and content validity. Convergent validity was estimated by comparing the results from the GOB-CKA with the General-Organic-Biochemistry Exam (Form 2007) of the American Chemical Society. Instructors who wish to use the GOB-CKA for teaching and research may contact the corresponding author for a copy of the instrument. © 2014 Wiley Periodicals, Inc.
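
    The Cronbach's alpha figure quoted (0.76) is computed from the item variances and the variance of the total score. A sketch with simulated binary responses (synthetic data, not GOB-CKA responses):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# 200 simulated respondents answering 45 binary items driven by one factor.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
scores = (ability + rng.normal(scale=1.5, size=(200, 45)) > 0).astype(int)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```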

  12. Relationship between HPLC precision and number of significant figures when reporting impurities and when setting specifications.

    PubMed

    Agut, Christophe; Segalini, Audrey; Bauer, Michel; Boccardi, Giovanni

    2006-05-03

    The rounding of an analytical result is a process that should take into account the uncertainty of the result, which is in turn assessed during the validation exercise. Rounding rules have long been known in physical and analytical chemistry, but they are often unused or misused in pharmaceutical analysis. This paper describes the theoretical background of the most common rules and their application to fixing the rounding of results and specifications. It makes use of uncertainty values for impurity determinations acquired during studies of reproducibility and intermediate precision with regard to 22 impurities of drug substances or drug products. As a general rule, the authors propose using sound and well-established rounding rules to derive the rounding from the results of the validation package.
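
    The core rule discussed, round the uncertainty to its leading significant figure(s) and then round the result to the same decimal place, can be sketched as follows. The impurity value and precision estimate below are hypothetical:

```python
import math

def round_to_uncertainty(value, uncertainty, sig_figs=1):
    """Round value to the decimal place set by the uncertainty's
    leading significant figure(s)."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = sig_figs - 1 - exponent
    return round(value, decimals), round(uncertainty, decimals)

# Hypothetical impurity result of 0.1234 % with a precision estimate of 0.02 %.
print(round_to_uncertainty(0.1234, 0.02))   # -> (0.12, 0.02)
```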

  13. TIE: an ability test of emotional intelligence.

    PubMed

    Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S

    2014-01-01

    The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for processing emotion-relevant information. Participants are given descriptions of emotional problems and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgements of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, TIE results shared about 10% of their variance with a general intelligence test and were independent of the major personality dimensions.

  14. Wavelet-based identification of rotor blades in passage-through-resonance tests

    NASA Astrophysics Data System (ADS)

    Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano

    2018-01-01

    Turbine blades are critical components of turbo engines, and their design process usually includes experimental tests to validate and/or update numerical models. These tests are generally carried out on full-scale rotors with some blades instrumented with strain gauges, and usually involve a run-up or run-down phase. The quantification of damping under these conditions is rather challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping, due both to the finite sweep velocity and to the variation of the blade natural frequencies with rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
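
    As a point of reference for what such an identification procedure estimates, the damping ratio of a single mode can be recovered from a free-decay record via the logarithmic decrement. The sketch below uses a synthetic decay (all parameters illustrative), not the paper's wavelet-based procedure:

```python
import numpy as np

# Synthetic free decay of one blade mode: 100 Hz, damping ratio zeta = 1 %.
fs, f0, zeta = 5000, 100.0, 0.01
t = np.arange(0, 1, 1 / fs)
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(2 * np.pi * f0 * t)

# Logarithmic decrement from the ratio of successive response peaks.
peaks = [i for i in range(1, x.size - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))
zeta_est = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
print(f"estimated zeta = {zeta_est:.4f}")   # close to the true 0.01
```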

  15. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  16. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  17. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  18. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  19. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  20. Development and evaluation of a thermochemistry concept inventory for college-level general chemistry

    NASA Astrophysics Data System (ADS)

    Wren, David A.

    The research presented in this dissertation culminated in a 10-item Thermochemistry Concept Inventory (TCI). The development of the TCI can be divided into two main phases, qualitative studies and quantitative studies, both focused on the primary stakeholders of the TCI: college-level general chemistry instructors and students. Each phase was designed to collect evidence for the validity of the interpretations and uses of TCI testing data. A central use of TCI testing data is to identify student conceptual misunderstandings, which are represented as incorrect options of multiple-choice TCI items. Therefore, the quantitative and qualitative studies focused heavily on collecting evidence at the item level, where important interpretations may be made by TCI users. Qualitative studies included student interviews (N = 28) and online expert surveys (N = 30). Think-aloud student interviews (N = 12) were used to identify the conceptual misunderstandings held by students. Novice response-process validity interviews (N = 16) provided information on how students interpreted and answered TCI items and were the basis of item revisions. Practicing general chemistry instructors (N = 18), or experts, defined the boundaries of the thermochemistry content included on the TCI. Once TCI items were in the later stages of development, an online version of the TCI was used in an expert response-process validity survey (N = 12) to provide expert feedback on item content and format and consensus on the correct answer for each item. Quantitative studies included three phases: beta testing of TCI items (N = 280), pilot testing of a 12-item TCI (N = 485), and a large data collection using the 10-item TCI (N = 1331). In addition to traditional classical test theory analysis, Rasch model analysis was used to evaluate the testing data at the test and item levels. The TCI was administered as both a formative assessment (beta and pilot testing) and a summative assessment (large data collection), with items performing well in both roles. One item, item K, did not have acceptable psychometric properties when the TCI was used as a quiz (summative assessment), but it was retained in the final version of the TCI based on the acceptable psychometric properties it displayed in pilot testing (formative assessment).
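
    The classical test theory side of such an analysis rests on two item statistics: difficulty (the proportion answering correctly) and discrimination (the item-total point-biserial correlation). A sketch on simulated responses (synthetic, not TCI data):

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 simulated students x 10 items of increasing latent difficulty.
ability = rng.normal(size=(300, 1))
latent_difficulty = np.linspace(-1, 1, 10)
noise = rng.normal(size=(300, 10))
responses = (ability - latent_difficulty + noise > 0).astype(int)

p_values = responses.mean(axis=0)      # item difficulty: proportion correct
total = responses.sum(axis=1)          # each student's total score
r_pb = np.array([np.corrcoef(responses[:, j], total)[0, 1]
                 for j in range(10)])  # item-total discrimination
print(np.round(p_values, 2))
print(np.round(r_pb, 2))
```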

  1. Validation of COG10 and ENDFB6R7 on the Auk Workstation for General Application to Plutonium Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percher, Catherine G

    2011-08-08

    The COG 10 code package [1] on the Auk workstation is now validated with the ENDFB6R7 neutron cross-section library for general application to plutonium (Pu) systems, by comparing the calculated k-effective to the expected k-effective of several relevant experimental benchmarks. This validation is supplemental to the installation and verification of COG 10 on the Auk workstation [2].

  2. Another dimension to metamorphic phase equilibria: the power of interactive movies for understanding complex phase diagram sections

    NASA Astrophysics Data System (ADS)

    Moulas, E.; Caddick, M. J.; Tisato, N.; Burg, J.-P.

    2012-04-01

    The investigation of metamorphic phase equilibria, using software packages that perform thermodynamic calculations, involves a series of important assumptions whose validity can often be questioned but are difficult to test. For example, potential influences of deformation on phase relations, and modification of effective reactant composition (X) at successive stages of equilibrium may both introduce significant uncertainty into phase diagram calculations. This is generally difficult to model with currently available techniques, and is typically not well quantified. We present here a method to investigate such phenomena along pre-defined Pressure-Temperature (P-T) paths, calculating local equilibrium via Gibbs energy minimization. An automated strategy to investigate complex changes in the effective equilibration composition has been developed. This demonstrates the consequences of specified X modification and, more importantly, permits automated calculation of X changes that are likely along the requested path if considering several specified processes. Here we describe calculations considering two such processes and show an additional example of a metamorphic texture that is difficult to model with current techniques. Firstly, we explore the assumption that although water saturation and bulk-rock equilibrium are generally considered to be valid assumptions in the calculation of phase equilibria, the saturation of thermodynamic components ignores mechanical effects that the fluid/melt phase can impose on the rock, which in turn can modify the effective equilibrium composition. Secondly, we examine how mass fractionation caused by porphyroblast growth at low temperatures or progressive melt extraction at high temperatures successively modifies X out of the plane of the initial diagram, complicating the process of determining best-fit P-T paths for natural samples. 
In particular, retrograde processes are poorly modeled without careful consideration of prograde fractionation processes. Finally, we show that although the effective composition during symplectite growth is difficult to determine and quantify, it can be modeled successfully by constructing a series of phase equilibria calculations.
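
    The local-equilibrium step at the heart of such calculations can be illustrated with a toy Gibbs energy minimization. The sketch below simply selects, from a set of candidate phases with illustrative (not database-derived) G0, V and S coefficients, the one with the lowest G = G0 + V*P - S*T at a given P-T point; this is a drastic simplification of the minimization the authors perform.

```python
def stable_phase(phases, pressure, temperature):
    """Pick the candidate phase with the lowest Gibbs energy at (P, T).

    Toy local-equilibrium step using G = G0 + V*P - S*T per phase.
    The coefficients below are illustrative, not real thermodynamic data.
    """
    def gibbs(ph):
        return ph["G0"] + ph["V"] * pressure - ph["S"] * temperature
    return min(phases, key=gibbs)

# Hypothetical candidates: the low-volume phase is favored at high pressure.
candidates = [
    {"name": "low_P_phase", "G0": 0.0, "V": 5.2, "S": 0.093},
    {"name": "high_P_phase", "G0": 100.0, "V": 4.4, "S": 0.083},
]
```

    Stepping such a selection along a prescribed P-T path, and updating the effective composition X after each fractionation event, gives the flavor of the automated strategy described above.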

  3. Order-disorder transition in conflicting dynamics leading to rank-frequency generalized beta distributions

    NASA Astrophysics Data System (ADS)

    Alvarez-Martinez, R.; Martinez-Mekler, G.; Cocho, G.

    2011-01-01

    The behavior of rank-ordered distributions of phenomena present in a variety of fields such as biology, sociology, linguistics, finance and geophysics has been a matter of intense research. Often power laws have been encountered; however, their validity tends to hold mainly for an intermediate range of rank values. In a recent publication (Martínez-Mekler et al., 2009 [7]), a generalization of the functional form of the beta distribution has been shown to give excellent fits for many systems of very diverse nature, valid for the whole range of rank values, regardless of whether or not a power law behavior has been previously suggested. Here we give some insight on the significance of the two free parameters which appear as exponents in the functional form, by looking into discrete probabilistic branching processes with conflicting dynamics. We analyze a variety of realizations of these so-called expansion-modification models first introduced by Wentian Li (1989) [10]. We focus our attention on an order-disorder transition we encounter as we vary the modification probability p. We characterize this transition by means of the fitting parameters. Our numerical studies show that one of the fitting exponents is related to the presence of long-range correlations exhibited by power spectrum scale invariance, while the other registers the effect of disordering elements leading to a breakdown of these properties. In the absence of long-range correlations, this parameter is sensitive to the occurrence of unlikely events. We also introduce an approximate calculation scheme that relates this dynamics to multinomial multiplicative processes. A better understanding through these models of the meaning of the generalized beta-fitting exponents may contribute to their potential for identifying and characterizing universality classes.
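
    The two-exponent functional form discussed here, f(r) = A * (N + 1 - r)**b / r**a as given in Martinez-Mekler et al. (2009), is linear in log space, so the two exponents can be fitted by ordinary least squares. The following sketch (numpy-based, with function names of our own choosing) illustrates the idea.

```python
import numpy as np

def dgbd(r, N, A, a, b):
    # Discrete generalized beta distribution (DGBD) for ranks r = 1..N:
    # f(r) = A * (N + 1 - r)**b / r**a
    return A * (N + 1 - r) ** b / r ** a

def fit_dgbd(values):
    # Rank-order the data (descending) and fit
    # ln f = ln A - a*ln r + b*ln(N + 1 - r)
    # by ordinary least squares in log space.
    f = np.sort(np.asarray(values, dtype=float))[::-1]
    N = len(f)
    r = np.arange(1, N + 1)
    X = np.column_stack([np.ones(N), -np.log(r), np.log(N + 1 - r)])
    (lnA, a, b), *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
    return np.exp(lnA), a, b
```

    In the notation above, the exponent a governs the power-law-like head of the distribution (long-range correlations) and b the departures in the tail, matching the interpretation of the two fitting exponents in the abstract.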

  4. Generalization of Selection Test Validity.

    ERIC Educational Resources Information Center

    Colbert, G. A.; Taylor, L. R.

    1978-01-01

    This is part three of a three-part series concerned with the empirical development of homogeneous families of insurance company jobs based on data from the Position Analysis Questionnaire (PAQ). This part involves validity generalizations within the job families which resulted from the previous research. (Editor/RK)

  5. The German linguistic validation of the Ureteral Stent Symptoms Questionnaire (USSQ).

    PubMed

    Abt, Dominik; Dötzer, Kristina; Honek, Patrick; Müller, Karolina; Engeler, Daniel Stephan; Burger, Maximilian; Schmid, Hans-Peter; Knoll, Thomas; Sanguedolce, Francesco; Joshi, Hrishi B; Fritsche, Hans-Martin

    2017-03-01

We developed and validated the German version of the Ureteral Stent Symptoms Questionnaire (USSQ) for male and female patients with indwelling ureteral stents. The German version of the USSQ was developed following a well-established multistep process. A total of 101 patients with indwelling ureteral stents completed the German USSQ as well as the validated questionnaires International Prostate Symptom Score (IPSS) or International Consultation on Incontinence Questionnaire (ICIQ) and the Short Form Health Survey (SF-36). Patients completed questionnaires at 1 and 2-4 weeks after stent insertion and 4 weeks after stent removal. Statistical analyses were performed to assess the psychometric properties of the questionnaire. The German version of the USSQ showed good internal consistency (Cronbach's α = .72-.88) and test-retest reliability [intraclass correlation coefficient (ICC) = .81-.92]. Inter-domain associations within the USSQ showed substantial correlations between different USSQ domains, indicating a high conceptual relationship of the domains. Except for urinary symptoms and general quality of life, the German USSQ showed good convergent validity with the corresponding validated questionnaires. All USSQ domains showed significant sensitivity to change (p ≤ .001). The new German version of the USSQ proved to be a reliable and robust instrument for the evaluation of ureteral stent-associated morbidity for both male and female patients. It is expected to be a valid outcome measure in future stent research.

  6. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    NASA Astrophysics Data System (ADS)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using adequate equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
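
    The paper's full model is far richer, but the flavor of a dislocation-density-based description can be conveyed with a minimal Kocks-Mecking-type sketch. This is a standard textbook formulation, not the authors' equations, and all parameter values below are illustrative order-of-magnitude choices.

```python
import numpy as np

def flow_stress(strain, rho0=1e12, k1=3e8, k2=8.0,
                alpha=0.3, M=3.06, G=80e9, b=2.5e-10, sigma0=50e6):
    # Kocks-Mecking-type evolution of the dislocation density:
    #   d(rho)/d(eps) = k1*sqrt(rho) - k2*rho
    # (athermal storage minus dynamic recovery), integrated with forward
    # Euler, coupled to the Taylor relation
    #   sigma = sigma0 + alpha*M*G*b*sqrt(rho).
    rho = rho0
    stress = []
    for de in np.diff(strain, prepend=0.0):
        rho += (k1 * np.sqrt(rho) - k2 * rho) * de
        stress.append(sigma0 + alpha * M * G * b * np.sqrt(rho))
    return np.array(stress)
```

    Because the state variable rho (rather than accumulated strain) carries the history, such a model can be handed from one manufacturing-chain simulation step to the next, which is the property the abstract emphasizes.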

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segev, A.; Fang, W.

    In currency-based updates, processing a query to a materialized view has to satisfy a currency constraint which specifies the maximum time lag of the view data with respect to a transaction database. Currency-based update policies are more general than periodical, deferred, and immediate updates; they provide additional opportunities for optimization and allow updating a materialized view from other materialized views. In this paper, we present algorithms to determine the source and timing of view updates and validate the resulting cost savings through simulation results. 20 refs.
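
    A minimal sketch of a currency-constraint check, with hypothetical names and a plain timestamp model rather than the paper's algorithms, might look like:

```python
from dataclasses import dataclass

@dataclass
class MaterializedView:
    name: str
    last_refresh: float  # seconds since epoch

def satisfies_currency(view, now, max_lag):
    # A currency constraint bounds the time lag of the view's data
    # with respect to the transaction database.
    return now - view.last_refresh <= max_lag

def choose_source(views, now, max_lag):
    # Answer the query from the freshest view meeting the constraint;
    # return None to signal that a refresh from base tables is needed.
    fresh = [v for v in views if satisfies_currency(v, now, max_lag)]
    return max(fresh, key=lambda v: v.last_refresh) if fresh else None
```

    The optimization opportunity described above arises because, when several views satisfy the constraint, the cheapest one can be chosen, and a stale view can sometimes be brought up to date from another view rather than from the base tables.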

  8. Implications of the Institute of Medicine Report: Evaluation of Biomarkers and Surrogate Endpoints in Chronic Disease.

    PubMed

    Wagner, J A; Ball, J R

    2015-07-01

    The Institute of Medicine (IOM) released a groundbreaking 2010 report, Evaluation of Biomarkers and Surrogate Endpoints in Chronic Disease. Key recommendations included a harmonized scientific process and a general framework for biomarker evaluation with three interrelated steps: (1) Analytical validation -- is the biomarker measurement accurate? (2) Qualification -- is the biomarker associated with the clinical endpoint of concern? (3) Utilization -- what is the specific context of the proposed use? © 2015 American Society for Clinical Pharmacology and Therapeutics.

  9. FANTOM: Algorithm-Architecture Codesign for High-Performance Embedded Signal and Image Processing Systems

    DTIC Science & Technology

    2013-05-25

graphics processors by IBM, AMD, and nVIDIA. They are between general-purpose processors and special-purpose processors. In Phase II. 3.10 Measure of...particular, Dr. Kevin Irick started a company Silicon Scapes and he has been the CEO. 5 Implications for Related/Future Research We speculate that...final project report in Jan. 2011. At the test and validation stage of the project. FANTOM’s partner at Raytheon quit from his company and hence from

  10. Development Approaches Coupled with Verification and Validation Methodologies for Agent-Based Mission-Level Analytical Combat Simulations

    DTIC Science & Technology

    2004-03-01

When applying experience to new situations, the process is very similar. Faced with a new situation, a human generally looks for ways in which...find the best course of action, the human would compare current goals to those it faced in the previous experiences and choose the path that...154. Saperstein, Alvin (1995) “War and Chaos”. American Scientist, vol. 84. November-December 1995. pp. 548-557. 155. Sargent, Robert G. (1991

  11. Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.

    PubMed

    Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M

    2018-06-13

This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.

  12. Demonstrating the validity of three general scores of PET in predicting higher education achievement in Israel.

    PubMed

    Oren, Carmel; Kennet-Cohen, Tamar; Turvall, Elliot; Allalouf, Avi

    2014-01-01

The Psychometric Entrance Test (PET), used for admission to higher education in Israel together with the Matriculation (Bagrut), formerly had a single general (total) score in which its domains, Verbal, Quantitative and English, were weighted 2:2:1, respectively. In 2011, two additional total scores were introduced, with different weights for the Verbal and the Quantitative domains. This study compares the predictive validity of the three general scores of PET and demonstrates validity in terms of utility. The sample comprised 100,863 freshman students at all Israeli universities from the classes of 2005-2009. Regression weights and correlations of the predictors with first-year GPA (FYGPA) were computed. Simulations based on these results supplied the utility estimates. On average, PET is slightly more predictive than the Bagrut; using them both yields a better tool than either of them alone. Assigning differential weights to the components in the respective schools further improves the validity. The introduction of the new general scores of PET is validated by gathering and analyzing evidence based on relations of test scores to other variables. The utility of using the test can be demonstrated in ways different from correlations.
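
    The 2:2:1 weighting of the historical general score amounts to a simple weighted composite; a minimal sketch (function name and normalization are ours, for illustration) is:

```python
def pet_total(verbal, quantitative, english, weights=(2, 2, 1)):
    # Weighted composite of the three PET domain scores; the historical
    # general score weighted Verbal:Quantitative:English as 2:2:1.
    wv, wq, we = weights
    return (wv * verbal + wq * quantitative + we * english) / (wv + wq + we)
```

    The two scores introduced in 2011 correspond to passing different weight tuples, shifting emphasis between the Verbal and Quantitative domains.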

  13. Reliability and criterion validity of an observation protocol for working technique assessments in cash register work.

    PubMed

    Palm, Peter; Josephson, Malin; Mathiassen, Svend Erik; Kjellberg, Katarina

    2016-06-01

    We evaluated the intra- and inter-observer reliability and criterion validity of an observation protocol, developed in an iterative process involving practicing ergonomists, for assessment of working technique during cash register work for the purpose of preventing upper extremity symptoms. Two ergonomists independently assessed 17 15-min videos of cash register work on two occasions each, as a basis for examining reliability. Criterion validity was assessed by comparing these assessments with meticulous video-based analyses by researchers. Intra-observer reliability was acceptable (i.e. proportional agreement >0.7 and kappa >0.4) for 10/10 questions. Inter-observer reliability was acceptable for only 3/10 questions. An acceptable inter-observer reliability combined with an acceptable criterion validity was obtained only for one working technique aspect, 'Quality of movements'. Thus, major elements of the cashiers' working technique could not be assessed with an acceptable accuracy from short periods of observations by one observer, such as often desired by practitioners. Practitioner Summary: We examined an observation protocol for assessing working technique in cash register work. It was feasible in use, but inter-observer reliability and criterion validity were generally not acceptable when working technique aspects were assessed from short periods of work. We recommend the protocol to be used for educational purposes only.
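
    Cohen's kappa, the chance-corrected agreement statistic behind the acceptability criterion above, can be computed directly from two observers' ratings; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    # Chance-corrected inter-observer agreement:
    #   kappa = (p_o - p_e) / (1 - p_e)
    # with p_o the observed agreement and p_e the agreement expected
    # by chance from the two observers' marginal frequencies.
    n = len(ratings1)
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

    By the criterion quoted above, a protocol question would be acceptable when proportional agreement exceeds 0.7 and kappa exceeds 0.4.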

  14. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

In the past decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, Bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation and validation with the total training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey tests. In general, the results showed that random forest, with a marginal difference over Bagged CART and the stochastic gradient boosting model, is the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. Finally, it should be noted that the neural network with feature extraction and the linear support vector machine had better processing speeds than the others.
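
    A comparison of this kind can be sketched with scikit-learn; the synthetic data and model settings below are illustrative stand-ins, not the study's imagery or configurations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the classified pixels; the study compared
# five classifiers, including random forest and a linear-kernel SVM.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "linear SVM": SVC(kernel="linear"),
}
# Mean 5-fold cross-validation accuracy per model.
results = {name: cross_val_score(m, X, y, cv=5).mean()
           for name, m in models.items()}
```

    Repeating such runs (the study used ten per method) and feeding the per-run accuracies to ANOVA with a Tukey post-hoc test reproduces the significance-testing step described above.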

  15. A Data Preparation Methodology in Data Mining Applied to Mortality Population Databases.

    PubMed

    Pérez, Joaquín; Iturbide, Emmanuel; Olivares, Víctor; Hidalgo, Miguel; Martínez, Alicia; Almanza, Nelva

    2015-11-01

It is known that the data preparation phase is the most time-consuming in the data mining process, consuming between 50% and 70% of the total project time. Currently, data mining methodologies are general-purpose, and one of their limitations is that they do not provide guidance about which particular tasks to carry out in a specific domain. This paper presents a new data preparation methodology oriented to the epidemiological domain, in which we have identified two sets of tasks: General Data Preparation and Specific Data Preparation. For both sets, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is adopted as a guideline. The main contribution of our methodology is fourteen specialized tasks concerning this domain. To validate the proposed methodology, we developed a data mining system, and the entire process was applied to real mortality databases. The results were encouraging: the use of the methodology reduced some of the time-consuming tasks, and the data mining system revealed unknown and potentially useful patterns for the public health services in Mexico.
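
    A "General Data Preparation" task of the kind the methodology distinguishes, dropping duplicates and records with missing required fields, can be sketched as follows; the field names are illustrative, not taken from the actual mortality databases.

```python
def prepare_records(records, required=("age", "sex", "cause")):
    # Two generic data-preparation tasks: remove exact duplicate
    # records, then drop records missing any required field.
    seen, clean = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue  # exact duplicate
        seen.add(key)
        if all(rec.get(f) not in (None, "") for f in required):
            clean.append(rec)
    return clean
```

    The "Specific Data Preparation" tasks would layer domain rules (for example, cause-of-death code validation) on top of such generic cleaning.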

  16. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next generation military aircraft requires the airborne management system high performance. General modules, data integration, high speed data bus and so on are needed to share and manage information of the subsystems efficiently. The subsystems include flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and so on. The unattached or mixed architecture is changed to integrated architecture. That means the whole airborne system is regarded into one system to manage. So the physical devices are distributed but the system information is integrated and shared. The process function of each subsystem are integrated (including general process modules, dynamic reconfiguration), furthermore, the sensors and the signal processing functions are shared. On the other hand, it is a foundation for power shared. Establish a distributed vehicle management system using 1553B bus and distributed processors which can provide a validation platform for the research of airborne system integrated management. This paper establishes the Vehicle Management System (VMS) simulation platform. Discuss the software and hardware configuration and analyze the communication and fault-tolerant method.

  17. Exact results in nonequilibrium statistical mechanics: Formalism and applications in chemical kinetics and single-molecule free energy estimation

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    In the last two decades or so, a collection of results in nonequilibrium statistical mechanics that departs from the traditional near-equilibrium framework introduced by Lars Onsager in 1931 has been derived, yielding new fundamental insights into far-from-equilibrium processes in general. Apart from offering a more quantitative statement of the second law of thermodynamics, some of these results---typified by the so-called "Jarzynski equality"---have also offered novel means of estimating equilibrium quantities from nonequilibrium processes, such as free energy differences from single-molecule "pulling" experiments. This thesis contributes to such efforts by offering three novel results in nonequilibrium statistical mechanics: (a) The entropic analog of the Jarzynski equality; (b) A methodology for estimating free energies from "clamp-and-release" nonequilibrium processes; and (c) A directly measurable symmetry relation in chemical kinetics similar to (but more general than) chemical detailed balance. These results share in common the feature of remaining valid outside Onsager's near-equilibrium regime, and bear direct applicability in protein folding kinetics as well as in single-molecule free energy estimation.
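
    The Jarzynski equality mentioned above, exp(-dF/kT) = <exp(-W/kT)> over repeated nonequilibrium realizations, turns work measurements into a free energy estimate; a minimal sketch:

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    # Jarzynski equality: exp(-dF/kT) = <exp(-W/kT)>, averaged over
    # repeated nonequilibrium realizations of the same protocol,
    # hence dF = -kT * ln <exp(-W/kT)>.
    w = np.asarray(work, dtype=float)
    return -kT * np.log(np.mean(np.exp(-w / kT)))
```

    The estimate never exceeds the mean work, in line with the second law; in practice the exponential average converges slowly when the work distribution is broad, which is part of what motivates the alternative estimation schemes developed in the thesis.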

  18. When Assessment Data Are Words: Validity Evidence for Qualitative Educational Assessments.

    PubMed

    Cook, David A; Kuper, Ayelet; Hatala, Rose; Ginsburg, Shiphra

    2016-10-01

    Quantitative scores fail to capture all important features of learner performance. This awareness has led to increased use of qualitative data when assessing health professionals. Yet the use of qualitative assessments is hampered by incomplete understanding of their role in forming judgments, and lack of consensus in how to appraise the rigor of judgments therein derived. The authors articulate the role of qualitative assessment as part of a comprehensive program of assessment, and translate the concept of validity to apply to judgments arising from qualitative assessments. They first identify standards for rigor in qualitative research, and then use two contemporary assessment validity frameworks to reorganize these standards for application to qualitative assessment.Standards for rigor in qualitative research include responsiveness, reflexivity, purposive sampling, thick description, triangulation, transparency, and transferability. These standards can be reframed using Messick's five sources of validity evidence (content, response process, internal structure, relationships with other variables, and consequences) and Kane's four inferences in validation (scoring, generalization, extrapolation, and implications). Evidence can be collected and evaluated for each evidence source or inference. The authors illustrate this approach using published research on learning portfolios.The authors advocate a "methods-neutral" approach to assessment, in which a clearly stated purpose determines the nature of and approach to data collection and analysis. Increased use of qualitative assessments will necessitate more rigorous judgments of the defensibility (validity) of inferences and decisions. Evidence should be strategically sought to inform a coherent validity argument.

  19. Questionnaire to assess patient satisfaction with pharmaceutical care in Spanish language.

    PubMed

    Traverso, María Luz; Salamano, Mercedes; Botta, Carina; Colautti, Marisel; Palchik, Valeria; Pérez, Beatriz

    2007-08-01

To develop and validate a questionnaire, in Spanish, for assessing patient satisfaction with pharmaceutical care received in community pharmacies. Selection and translation of the questionnaire's items; definition of response scale and demographic questions. Evaluation of face and content validity, feasibility, factor structure, reliability and construct validity. Forty-one community pharmacies in the province of Santa Fe, Argentina. Questionnaire administered to patients receiving pharmaceutical care or traditional pharmacy services. Pilot test to assess feasibility. Factor analysis used principal components and varimax rotation. Reliability established using internal consistency with Cronbach's alpha. Construct validity determined with the extreme group method. A self-administered questionnaire with 27 items, 5-point Likert response scale and demographic questions was designed considering the multidimensional structure of patient satisfaction. The questionnaire evaluates the cumulative experience of patients with comprehensive pharmaceutical care practice in community pharmacies. Two hundred and seventy-four complete questionnaires were obtained. Factor analysis resulted in three factors: Managing therapy, Interpersonal relationship and General satisfaction, with a cumulative variance of 62.51%. Cronbach's alpha for the whole questionnaire was 0.96, and 0.95, 0.88 and 0.76 for the three factors, respectively. The Mann-Whitney test for construct validity did not show significant differences between pharmacies that provide pharmaceutical care and those that do not; however, 23 items showed significant differences between the two groups of pharmacies. The questionnaire developed can be a reliable and valid instrument to assess patient satisfaction with pharmaceutical care in community pharmacies in Spanish. Further research is needed to deepen the validation process.
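
    Cronbach's alpha, used above to establish internal consistency, can be computed directly from the item-response matrix; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of Likert responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```

    Applied per factor, this reproduces the kind of per-domain reliabilities reported above (0.95, 0.88 and 0.76 for the three factors).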

  20. External validity of a hierarchical dimensional model of child and adolescent psychopathology: Tests using confirmatory factor analyses and multivariate behavior genetic analyses.

    PubMed

    Waldman, Irwin D; Poore, Holly E; van Hulle, Carol; Rathouz, Paul J; Lahey, Benjamin B

    2016-11-01

    Several recent studies of the hierarchical phenotypic structure of psychopathology have identified a General psychopathology factor in addition to the more expected specific Externalizing and Internalizing dimensions in both youth and adult samples and some have found relevant unique external correlates of this General factor. We used data from 1,568 twin pairs (599 MZ & 969 DZ) age 9 to 17 to test hypotheses for the underlying structure of youth psychopathology and the external validity of the higher-order factors. Psychopathology symptoms were assessed via structured interviews of caretakers and youth. We conducted phenotypic analyses of competing structural models using Confirmatory Factor Analysis and used Structural Equation Modeling and multivariate behavior genetic analyses to understand the etiology of the higher-order factors and their external validity. We found that both a General factor and specific Externalizing and Internalizing dimensions are necessary for characterizing youth psychopathology at both the phenotypic and etiologic levels, and that the 3 higher-order factors differed substantially in the magnitudes of their underlying genetic and environmental influences. Phenotypically, the specific Externalizing and Internalizing dimensions were slightly negatively correlated when a General factor was included, which reflected a significant inverse correlation between the nonshared environmental (but not genetic) influences on Internalizing and Externalizing. We estimated heritability of the general factor of psychopathology for the first time. Its moderate heritability suggests that it is not merely an artifact of measurement error but a valid construct. The General, Externalizing, and Internalizing factors differed in their relations with 3 external validity criteria: mother's smoking during pregnancy, parent's harsh discipline, and the youth's association with delinquent peers. 
Multivariate behavior genetic analyses supported the external validity of the 3 higher-order factors by suggesting that the General, Externalizing, and Internalizing factors were correlated with peer delinquency and parent's harsh discipline for different etiologic reasons. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. The Vanderbilt Holistic Face Processing Test: A short and reliable measure of holistic face processing

    PubMed Central

    Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel

    2014-01-01

    Efforts to understand individual differences in high-level vision necessitate the development of measures that have sufficient reliability, which is generally not a concern in group studies. Holistic processing is central to research on face recognition and, more recently, to the study of individual differences in this area. However, recent work has shown that the most popular measure of holistic processing, the composite task, has low reliability. This is particularly problematic for the recent surge in interest in studying individual differences in face recognition. Here, we developed and validated a new measure of holistic face processing specifically for use in individual-differences studies. It avoids some of the pitfalls of the standard composite design and capitalizes on the idea that trial variability allows for better traction on reliability. Across four experiments, we refine this test and demonstrate its reliability. PMID:25228629

  2. Eye-Tracking as a Tool in Process-Oriented Reading Test Validation

    ERIC Educational Resources Information Center

    Solheim, Oddny Judith; Uppstad, Per Henning

    2011-01-01

    The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…

  3. Identification of FOPDT and SOPDT process dynamics using closed loop test.

    PubMed

    Bajarangbali, Raghunath; Majhi, Somanath; Pandey, Saurabh

    2014-07-01

In this paper, identification of stable and unstable first order, second order overdamped and underdamped process dynamics with time delay is presented. A relay with hysteresis is used to induce a limit cycle output and, using this information, unknown process model parameters are estimated. State space based generalized analytical expressions are derived to achieve accurate results. To show the performance of the proposed method, expressions are also derived for systems with a zero. In real time systems, measurement noise is an important issue during identification of process dynamics. A relay with hysteresis reduces the effect of measurement noise; in addition, a new multiloop control strategy is proposed to recover the original limit cycle. Simulation results are included to validate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
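
    For context, the simplest relay-test relation is the describing-function approximation for an ideal relay (not the hysteresis-relay, state-space derivation of this paper): the ultimate gain follows from the relay amplitude h and the induced limit-cycle amplitude a.

```python
import math

def ultimate_gain(relay_amplitude, limit_cycle_amplitude):
    # Describing-function approximation for an ideal relay test:
    #   Ku ~= 4*h / (pi*a)
    # where h is the relay amplitude and a the amplitude of the
    # induced limit-cycle oscillation.
    return 4.0 * relay_amplitude / (math.pi * limit_cycle_amplitude)
```

    The hysteresis band used in the paper adds a phase term to this describing function, which is what makes the relay less sensitive to measurement noise.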

  4. The opto-mechanical design process: from vision to reality

    NASA Astrophysics Data System (ADS)

    Kvamme, E. Todd; Stubbs, David M.; Jacoby, Michael S.

    2017-08-01

    The design process for an opto-mechanical sub-system is discussed from requirements development through test. The process begins with a proper mission understanding and the development of requirements for the system. Preliminary design activities are then discussed with iterative analysis and design work being shared between the design, thermal, and structural engineering personnel. Readiness for preliminary review and the path to a final design review are considered. The value of prototyping and risk mitigation testing is examined with a focus on when it makes sense to execute a prototype test program. System level margin is discussed in general terms, and the practice of trading margin in one area of performance to meet another area is reviewed. Requirements verification and validation is briefly considered. Testing and its relationship to requirements verification concludes the design process.

  5. Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2013-09-01

    The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
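
    The superposition step can be sketched in a few lines of numpy; this is a serial CPU sketch of the idea, not the GPU-parallel implementation described in the paper.

```python
import numpy as np

def averaged_intensity(fields):
    # Superimpose (average) the intensity reconstructions of several
    # uncorrelated holograms of the same static scene.
    return np.mean([np.abs(f) ** 2 for f in fields], axis=0)

def speckle_contrast(intensity):
    # Standard metric C = std(I) / mean(I); fully developed speckle
    # has C close to 1, and averaging K uncorrelated patterns lowers
    # it by roughly a factor of 1/sqrt(K).
    return intensity.std() / intensity.mean()
```

    With six patterns, the expected reduction is roughly 1 - 1/sqrt(6), about 59%, consistent with the 58% figure reported above.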

  6. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity.
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. A description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting the technical qualifications discussed in Section 5.

  7. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scaleable solar sails for future space science missions. There are six validation objectives planned for the ST9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  8. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    NASA Technical Reports Server (NTRS)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  9. Validation of COG10 and ENDFB6R7 on the Auk Workstation for General Application to Highly Enriched Uranium Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percher, Catherine G.

    2011-08-08

    The COG 10 code package on the Auk workstation is now validated with the ENDFB6R7 neutron cross section library for general application to highly enriched uranium (HEU) systems by comparing the calculated k-effective with the expected k-effective of several relevant experimental benchmarks. This validation supplements the installation and verification of COG 10 on the Auk workstation.
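The validation approach described, comparing calculated k-effective values against benchmark expectations, can be sketched as follows. The benchmark pairs and the flat 0.01 acceptance tolerance are illustrative assumptions, not figures from the COG 10 validation report:

```python
# Hedged sketch of a benchmark-based criticality validation check.
# Each pair is (calculated k-eff, expected k-eff); values and the
# tolerance are illustrative, not data from the COG 10 report.

def keff_bias(results):
    """Mean calculation bias over (calculated, expected) k-eff pairs."""
    diffs = [calc - exp for calc, exp in results]
    return sum(diffs) / len(diffs)

def within_tolerance(results, tol=0.01):
    """True if every calculated k-eff is within tol of its benchmark."""
    return all(abs(calc - exp) <= tol for calc, exp in results)

benchmarks = [
    (0.9982, 1.0000),  # illustrative HEU benchmark cases
    (1.0015, 1.0000),
    (0.9991, 1.0000),
]
print(round(keff_bias(benchmarks), 6))  # -0.0004 (slight underprediction)
print(within_tolerance(benchmarks))     # True
```

A real criticality-safety validation would also propagate benchmark and Monte Carlo uncertainties and derive an upper subcritical limit rather than apply a flat tolerance.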

  10. 45 CFR 153.350 - Risk adjustment data validation standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...

  11. 45 CFR 153.350 - Risk adjustment data validation standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...

  12. 20 CFR 404.727 - Evidence of a deemed valid marriage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Evidence of a deemed valid marriage. 404.727... DISABILITY INSURANCE (1950- ) Evidence Evidence of Age, Marriage, and Death § 404.727 Evidence of a deemed valid marriage. (a) General. A deemed valid marriage is a ceremonial marriage we consider valid even...

  13. Airborne Comparisons of an Ultra-Stable Quartz Oscillator with a H-Maser as Another Possible Validation of General Relativity

    DTIC Science & Technology

    1999-12-01

    Andrei A. Grishaev, Institute of Metrology for Time and Space (IMVP), GP VNIIFTRI, 141570 Mendeleevo, Russia

  14. Can You Trust Self-Report Data Provided by Homeless Mentally Ill Individuals?

    ERIC Educational Resources Information Center

    Calsyn, Robert J.; And Others

    1993-01-01

    Reliability and validity of self-report data provided by 178 mentally ill homeless persons were generally favorable. Self-reports of service use also generally agreed with treatment staff estimates, providing further validity evidence. Researchers and administrators can be relatively confident in using such data. (SLD)

  15. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that a computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition and continues through application and maintenance until system retirement and the retention of e-records required by regulatory rules. Its objective is to specify clearly that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies under the GCP standard, ensuring that a product meets its pre-determined specifications for quality, safety and traceability. This paper describes how to perform the validation process and identify the relevant stakeholders within an organization in light of validation SOPs. Although specific responsibilities in the implementation of the validation process may be outsourced, ultimate responsibility for CSV remains with the business process owner, the sponsor. To show that system validation has been properly attained, it is essential to establish comprehensive validation procedures and to maintain adequate documentation as well as training records. The quality of system validation should be controlled using both QC and QA means.

  16. Determination of vitamin C in foods: current state of method validation.

    PubMed

    Spínola, Vítor; Llorent-Martínez, Eulogio J; Castilho, Paula C

    2014-11-21

    Vitamin C is one of the most important vitamins, so reliable information about its content in foodstuffs is a concern to both consumers and quality control agencies. However, the heterogeneity of food matrixes and the potential degradation of this vitamin during its analysis create enormous challenges. This review addresses the development and validation of high-performance liquid chromatography methods for vitamin C analysis in food commodities during the period 2000-2014. The main characteristics of vitamin C are mentioned, along with the strategies adopted by most authors during sample preparation (freezing and acidification) to avoid vitamin oxidation. After that, the advantages and drawbacks of different analytical methods are discussed. Finally, the main aspects concerning method validation for vitamin C analysis are critically discussed. Parameters such as selectivity, linearity, limit of quantification, and accuracy were studied by most authors. Recovery experiments during accuracy evaluation were in general satisfactory, with usual values between 81 and 109%. However, few methods considered vitamin C stability during the analytical process, and the study of precision was not always clear or complete. Potential future improvements regarding proper method validation are indicated to conclude this review. Copyright © 2014. Published by Elsevier B.V.
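As an illustration of the accuracy evaluation described in the review, a spike-recovery calculation can be sketched as below. The measured concentrations are invented; the 81-109% acceptance window is borrowed from the typical satisfactory range the review reports and is applied here as a simple hard check:

```python
# Illustrative spike-recovery computation for accuracy evaluation of an
# analytical method. Measured values are invented; the 81-109% window
# follows the typical satisfactory range cited in the review.

def recovery_percent(spiked_result, unspiked_result, amount_added):
    """Percent recovery of a known analyte spike added to a sample."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added

def is_acceptable(recovery, low=81.0, high=109.0):
    """Accept recoveries inside the assumed satisfactory window."""
    return low <= recovery <= high

rec = recovery_percent(spiked_result=14.6, unspiked_result=5.0,
                       amount_added=10.0)
print(round(rec, 1))       # 96.0
print(is_acceptable(rec))  # True
```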

  17. Montreal-Toulouse Language Assessment Battery: evidence of criterion validity from patients with aphasia.

    PubMed

    Pagliarin, Karina Carlesso; Ortiz, Karin Zazo; Barreto, Simone dos Santos; Pimenta Parente, Maria Alice de Mattos; Nespoulous, Jean-Luc; Joanette, Yves; Fonseca, Rochele Paz

    2015-10-15

    The Montreal-Toulouse Language Assessment Battery - Brazilian version (MTL-BR) provides a general description of language processing and related components in adults with brain injury. The present study aimed at verifying the criterion-related validity of the Montreal-Toulouse Language Assessment Battery - Brazilian version (MTL-BR) by assessing its ability to discriminate between individuals with unilateral brain damage with and without aphasia. The investigation was carried out in a Brazilian community-based sample of 104 adults, divided into four groups: 26 participants with left hemisphere damage (LHD) with aphasia, 25 participants with right hemisphere damage (RHD), 28 with LHD non-aphasic, and 25 healthy adults. There were significant differences between patients with aphasia and the other groups on most total and subtotal scores on MTL-BR tasks. The results showed strong criterion-related validity evidence for the MTL-BR Battery, and provided important information regarding hemispheric specialization and interhemispheric cooperation. Future research is required to search for additional evidence of sensitivity, specificity and validity of the MTL-BR in samples with different types of aphasia and degrees of language impairment. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Children's Behavior in the Postanesthesia Care Unit: The Development of the Child Behavior Coding System-PACU (CBCS-P)

    PubMed Central

    Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.

    2012-01-01

    Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
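The inter-rater reliability reported for the CBCS-P rests on kappa statistics. A minimal sketch of Cohen's kappa for two raters is shown below; the behavior codes and ratings are invented for illustration, not CBCS-P data:

```python
# Hedged sketch: Cohen's kappa, the chance-corrected inter-rater
# agreement statistic of the kind the CBCS-P study reports (kappas
# above .65 indicating good-to-excellent agreement). Codes invented.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length code lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["cry", "cry", "calm", "calm", "cry", "calm", "calm", "cry"]
b = ["cry", "cry", "calm", "cry",  "cry", "calm", "calm", "cry"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```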

  19. Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.

    PubMed

    Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor

    2013-04-01

    A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for Acute Lymphoblastic Leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November, 2011.

  20. Backward Dependencies and in-Situ wh-Questions as Test Cases on How to Approach Experimental Linguistics Research That Pursues Theoretical Linguistics Questions

    PubMed Central

    Pablos, Leticia; Doetjes, Jenny; Cheng, Lisa L.-S.

    2018-01-01

    The empirical study of language is a young field in contemporary linguistics. This being the case, and following a natural development process, the field is currently at a stage where different research methods and experimental approaches are being put into question in terms of their validity. Without pretending to provide an answer with respect to the best way to conduct linguistics related experimental research, in this article we aim at examining the process that researchers follow in the design and implementation of experimental linguistics research with a goal to validate specific theoretical linguistic analyses. First, we discuss the general challenges that experimental work faces in finding a compromise between addressing theoretically relevant questions and being able to implement these questions in a specific controlled experimental paradigm. We discuss the Granularity Mismatch Problem (Poeppel and Embick, 2005) which addresses the challenges that research that is trying to bridge the representations and computations of language and their psycholinguistic/neurolinguistic evidence faces, and the basic assumptions that interdisciplinary research needs to consider due to the different conceptual granularity of the objects under study. To illustrate the practical implications of the points addressed, we compare two approaches to perform linguistic experimental research by reviewing a number of our own studies strongly grounded on theoretically informed questions. First, we show how linguistic phenomena similar at a conceptual level can be tested within the same language using measurement of event-related potentials (ERP) by discussing results from two ERP experiments on the processing of long-distance backward dependencies that involve coreference and negative polarity items respectively in Dutch. 
Second, we examine how the same linguistic phenomenon can be tested in different languages using reading time measures by discussing the outcome of four self-paced reading experiments on the processing of in-situ wh-questions in Mandarin Chinese and French. Finally, we review the implications that our findings have for the specific theoretical linguistics questions that we originally aimed to address. We conclude with an overview of the general insights that can be gained from the role of structural hierarchy and grammatical constraints in processing and the existing limitations on the generalization of results. PMID:29375417

  1. Backward Dependencies and in-Situ wh-Questions as Test Cases on How to Approach Experimental Linguistics Research That Pursues Theoretical Linguistics Questions.

    PubMed

    Pablos, Leticia; Doetjes, Jenny; Cheng, Lisa L-S

    2017-01-01

    The empirical study of language is a young field in contemporary linguistics. This being the case, and following a natural development process, the field is currently at a stage where different research methods and experimental approaches are being put into question in terms of their validity. Without pretending to provide an answer with respect to the best way to conduct linguistics related experimental research, in this article we aim at examining the process that researchers follow in the design and implementation of experimental linguistics research with a goal to validate specific theoretical linguistic analyses. First, we discuss the general challenges that experimental work faces in finding a compromise between addressing theoretically relevant questions and being able to implement these questions in a specific controlled experimental paradigm. We discuss the Granularity Mismatch Problem (Poeppel and Embick, 2005) which addresses the challenges that research that is trying to bridge the representations and computations of language and their psycholinguistic/neurolinguistic evidence faces, and the basic assumptions that interdisciplinary research needs to consider due to the different conceptual granularity of the objects under study. To illustrate the practical implications of the points addressed, we compare two approaches to perform linguistic experimental research by reviewing a number of our own studies strongly grounded on theoretically informed questions. First, we show how linguistic phenomena similar at a conceptual level can be tested within the same language using measurement of event-related potentials (ERP) by discussing results from two ERP experiments on the processing of long-distance backward dependencies that involve coreference and negative polarity items respectively in Dutch. 
Second, we examine how the same linguistic phenomenon can be tested in different languages using reading time measures by discussing the outcome of four self-paced reading experiments on the processing of in-situ wh-questions in Mandarin Chinese and French. Finally, we review the implications that our findings have for the specific theoretical linguistics questions that we originally aimed to address. We conclude with an overview of the general insights that can be gained from the role of structural hierarchy and grammatical constraints in processing and the existing limitations on the generalization of results.

  2. Virtual test rig to improve the design and optimisation process of the vehicle steering and suspension systems

    NASA Astrophysics Data System (ADS)

    Mántaras, Daniel A.; Luque, Pablo

    2012-10-01

    A virtual test rig is presented using a three-dimensional model of the elasto-kinematic behaviour of a vehicle. A general approach is put forward to determine the three-dimensional position of the body and the main parameters which influence the handling of the vehicle. For the design process, the variable input data are the longitudinal and lateral acceleration and the curve radius, which are defined by the user as a design goal. For the optimisation process, once the vehicle has been built, the variable input data are the travel of the four struts and the steering wheel angle, which are obtained by monitoring the vehicle. The virtual test rig has been applied to a standard vehicle and the validity of the results has been proven.

  3. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
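The marginal model in this framework is a generalized Pareto distribution for threshold exceedances. A minimal sketch of inverse-transform sampling from that distribution follows, with assumed (not fitted) parameter values and without the spatial Student's t-process component:

```python
# Minimal sketch of inverse-transform sampling from a generalized
# Pareto distribution (GPD), the marginal form used for hazard
# intensities. sigma and xi below are assumed values, not fitted ones.

import math
import random

def gpd_sample(sigma, xi, rng):
    """Draw one GPD variate (a threshold exceedance) from uniform u."""
    u = rng.random()
    if abs(xi) < 1e-12:                    # xi -> 0 limit is exponential
        return -sigma * math.log(1.0 - u)
    return (sigma / xi) * ((1.0 - u) ** (-xi) - 1.0)

rng = random.Random(42)                    # seeded for reproducibility
gusts = [gpd_sample(sigma=5.0, xi=0.1, rng=rng) for _ in range(10000)]
print(min(gusts) >= 0.0)                   # True: exceedances are non-negative
# GPD mean is sigma/(1-xi) for xi < 1; the sample mean should be close:
print(abs(sum(gusts) / len(gusts) - 5.0 / (1 - 0.1)) < 0.5)  # True
```

The efficiency claim in the abstract (simulating over 10 000 years of events) rests on exactly this kind of cheap draw, repeated per site with spatial dependence layered on top.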

  4. Validity evidence for the Fundamentals of Laparoscopic Surgery (FLS) program as an assessment tool: a systematic review.

    PubMed

    Zendejas, Benjamin; Ruparel, Raaj K; Cook, David A

    2016-02-01

    The Fundamentals of Laparoscopic Surgery (FLS) program uses five simulation stations (peg transfer, precision cutting, loop ligation, and suturing with extracorporeal and intracorporeal knot tying) to teach and assess laparoscopic surgery skills. We sought to summarize evidence regarding the validity of scores from the FLS assessment. We systematically searched for studies evaluating the FLS as an assessment tool (last search update February 26, 2013). We classified validity evidence using the currently standard validity framework (content, response process, internal structure, relations with other variables, and consequences). From a pool of 11,628 studies, we identified 23 studies reporting validity evidence for FLS scores. Studies involved residents (n = 19), practicing physicians (n = 17), and medical students (n = 8), in specialties of general (n = 17), gynecologic (n = 4), urologic (n = 1), and veterinary (n = 1) surgery. Evidence was most common in the form of relations with other variables (n = 22, most often expert-novice differences). Only three studies reported internal structure evidence (inter-rater or inter-station reliability), two studies reported content evidence (i.e., derivation of assessment elements), and three studies reported consequences evidence (definition of pass/fail thresholds). Evidence nearly always supported the validity of FLS total scores. However, the loop ligation task lacks discriminatory ability. Validity evidence confirms expected relations with other variables and acceptable inter-rater reliability, but other validity evidence is sparse. Given the high-stakes use of this assessment (required for board eligibility), we suggest that more validity evidence is required, especially to support its content (selection of tasks and scoring rubric) and the consequences (favorable and unfavorable impact) of assessment.

  5. Clinical validity of a relocation stress scale for the families of patients transferred from intensive care units.

    PubMed

    Oh, HyunSoo; Lee, Seul; Kim, JiSun; Lee, EunJu; Min, HyoNam; Cho, OkJa; Seo, WhaSook

    2015-07-01

    This study was conducted to develop a family relocation stress scale by modifying the Son's Relocation Stress Syndrome Scale, to examine its clinical validity and reliability and to confirm its suitability for measuring family relocation stress. The transfer of ICU patients to general wards is a significant anxiety-producing event for family members. However, no relocation stress scale has been developed specifically for families. A nonexperimental, correlation design was adopted. The study subjects were 95 family members of 95 ICU patients at a university hospital located in Incheon, South Korea. Face and construct validities of the devised family relocation stress scale were examined. Construct validity was examined using factor analysis and by using a nomological validity test. Reliability was also examined. Face and content validity of the scale were verified by confirming that its items adequately measured family relocation stress. Factor analysis yielded four components, and the total variance explained by these four components was 63·0%, which is acceptable. Nomological validity was well supported by significant relationships between relocation stress and degree of preparation for relocation, patient self-care ability, family burden and satisfaction with the relocation process. The devised scale was also found to have good reliability. The family relocation stress scale devised in this study was found to have good validity and reliability, and thus, is believed to offer a means of assessing family relocation stress. The findings of this study provide a reliable and valid assessment tool when nurses prepare families for patient transfer from an ICU to a ward setting, and may also provide useful information to those developing an intervention programme for family relocation stress management. © 2015 John Wiley & Sons Ltd.
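Internal consistency of scales like this one is commonly summarized with Cronbach's alpha. A minimal sketch of its computation is shown below, using invented item scores rather than data from the family relocation stress scale:

```python
# Illustrative Cronbach's alpha, the internal-consistency index used in
# scale-validation studies. Item scores below are invented, not data
# from the relocation stress scale.

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per scale item."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

items = [
    [3, 4, 2, 5, 4, 3],   # item 1 scores across six respondents
    [3, 5, 2, 4, 4, 2],   # item 2
    [2, 4, 3, 5, 5, 3],   # item 3
]
print(round(cronbach_alpha(items), 3))  # 0.889
```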

  6. Development of a quality-of-life instrument for autoimmune bullous disease: the Autoimmune Bullous Disease Quality of Life questionnaire.

    PubMed

    Sebaratnam, Deshan F; Hanna, Anna Marie; Chee, Shien-ning; Frew, John W; Venugopal, Supriya S; Daniel, Benjamin S; Martin, Linda K; Rhodes, Lesley M; Tan, Jeremy Choon Kai; Wang, Charles Qian; Welsh, Belinda; Nijsten, Tamar; Murrell, Dédée F

    2013-10-01

    Quality-of-life (QOL) evaluation is an increasingly important outcome measure in dermatology, with disease-specific QOL instruments being the most sensitive to changes in disease status. To develop a QOL instrument specific to autoimmune bullous disease (AIBD), a comprehensive item generation process was used to build a 45-item pilot Autoimmune Bullous Disease Quality of Life (ABQOL) questionnaire, distributed to 70 patients with AIBD. Experts in bullous disease refined the pilot ABQOL before factor analysis was performed to yield the final ABQOL questionnaire of 17 questions. We evaluated validity and reliability across a range of indices. The setting comprised Australian dermatology outpatient clinics and private dermatology practices; participants were patients with a histological diagnosis of AIBD, and the main outcome was the development of an AIBD-specific QOL instrument. Face and content validity were established through the comprehensive patient interview process and expert review. In terms of convergent validity, the ABQOL was found to have a moderate correlation with scores on the Dermatology Life Quality Index (R = 0.63) and the General Health subscale of the 36-Item Short Form Health Survey (R = 0.69; P = .009) and low correlation with the Pemphigus Disease Area Index (R = 0.42) and Autoimmune Bullous Disease Skin Disorder Intensity Score (R = 0.48). In terms of discriminant validity, the ABQOL was found to be more sensitive than the Dermatology Life Quality Index (P = .02). The ABQOL was also found to be a reliable instrument as evaluated by internal consistency (Cronbach α coefficient, 0.84) and test-retest reliability (mean percentage variation, 0.92). The ABQOL has been shown to be a valid and reliable instrument that may serve as an end point in clinical trials. Future work should include incorporating patient weighting on questions to further increase content validity and translation of the measure to other languages. anzctr.org.au Identifier: ACTRN12612000750886.

  7. Adaptive vibration control of structures under earthquakes

    NASA Astrophysics Data System (ADS)

    Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung

    2017-04-01

    This paper proposes adaptive control techniques for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
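The real-time identification half of generalized predictive control can be illustrated with a recursive least-squares (RLS) parameter update. The scalar, noise-free system below is an invented toy example, not the building model from the paper:

```python
# Toy recursive least-squares (RLS) identification, the streaming
# estimation step that generalized predictive control builds on. The
# scalar system y = theta * phi is invented and noise-free.

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step with forgetting factor lam for y = theta * phi."""
    k = P * phi / (lam + phi * P * phi)     # gain
    theta = theta + k * (y - theta * phi)   # correct the estimate
    P = (P - k * phi * P) / lam             # update covariance
    return theta, P

true_theta = 0.8                 # "structural" parameter to be tracked
theta, P = 0.0, 100.0            # uninformed initial estimate
for t in range(1, 200):
    phi = (t % 7) + 1.0          # deterministic excitation signal
    y = true_theta * phi         # noise-free measured response
    theta, P = rls_update(theta, P, phi, y)
print(abs(theta - true_theta) < 1e-4)  # True: estimate has converged
```

The forgetting factor keeps the covariance from collapsing, which is what lets the identifier track parameters that drift during an earthquake.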

  8. Integrated Work Management: PIC, Course 31884

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Lewis Edward

    The person-in-charge (PIC) plays a key role in the integrated work management (IWM) process at Los Alamos National Laboratory (LANL, or the Laboratory) because the PIC is assigned responsibility and authority by the responsible line manager (RLM) for the overall validation, coordination, release, execution, and closeout of a work activity in accordance with IWM. This course, Integrated Work Management: PIC (Course 31884), describes the PIC’s IWM roles and responsibilities. This course also discusses IWM requirements that the PIC must meet. For a general overview of the IWM process, see self-study Course 31881, Integrated Work Management: Overview. For instruction on the preparer’s role, see self-study Course 31883, Integrated Work Management: Preparer.

  9. Software Development Technologies for Reactive, Real-Time, and Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Manna, Zohar

    1996-01-01

    The research is directed towards the design and implementation of a comprehensive deductive environment for the development of high-assurance systems, especially reactive (concurrent, real-time, and hybrid) systems. Reactive systems maintain an ongoing interaction with their environment, and are among the most difficult to design and verify. The project aims to provide engineers with a wide variety of tools within a single, general, formal framework in which the tools will be most effective. The entire development process is considered, including the construction, transformation, validation, verification, debugging, and maintenance of computer systems. The goal is to automate the process as much as possible and reduce the errors that pervade hardware and software development.

  10. Comparison of Multiscale Method of Cells-Based Models for Predicting Elastic Properties of Filament Wound C/C-SiC

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem

    2017-01-01

    Three different multiscale models, based on the method of cells (generalized and high-fidelity) micromechanics models, were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high-fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.

  11. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines focused on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. The result is a guide with norms for organizing and systematizing the process of elaborating, translating, cross-culturally adapting and validating tests in Speech and Language Pathology.

  12. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

A multi-tier architecture consists of numerous craft operating as part of the system's orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to determine whether they are significant enough to pass on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with data collection and assessment tasks that are required to validate or correct elements of their models. A model of the expected conditions is sent to the lower-level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities.
When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (validation and/or changes of model elements and supporting validation data) to its upstream node. This constrains data transmission to only significant information (either because it includes a change or is validation data critical for assessing overall performance) and reduces the processing requirements at higher-level nodes (by not having to process insignificant data). This paper presents a framework for multi-tier MBTR and two demonstration mission concepts: an Earth sensornet and a mission to Mars. These multi-tier MBTR concepts are compared to a traditional mission approach.
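The selective-transmission step described in this abstract can be sketched in a few lines of Python (a toy illustration; the region names, threshold, and validation fraction are invented and not taken from the mission concepts):

```python
import random

random.seed(1)

# A node holds an a priori model (expected values per region), compares
# observations against it, and transmits only significant deviations
# plus a small random sample of raw data for upstream validation.
model = {"region_a": 20.0, "region_b": 35.0, "region_c": 50.0}
observed = {"region_a": 20.4, "region_b": 44.1, "region_c": 49.2}
SIGNIFICANCE = 2.0       # transmit only deviations above this threshold
VALIDATION_FRACTION = 0.34

# Model corrections: only observations that contradict the model.
corrections = {k: observed[k] for k in model
               if abs(observed[k] - model[k]) > SIGNIFICANCE}

# A limited random subset of raw observations lets the upstream node
# re-check the onboard processing.
n_check = max(1, round(VALIDATION_FRACTION * len(observed)))
validation_keys = random.sample(sorted(observed), n_check)

message = {"corrections": corrections,
           "validation": {k: observed[k] for k in validation_keys}}
```

Only `region_b` deviates beyond the threshold, so the uplinked message carries one correction plus the sampled validation data, rather than the full observation set.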

  13. A new model for fluid velocity slip on a solid surface.

    PubMed

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-10-12

    A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. The corroboration with the experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates and close agreement for liquid-solid interfaces in general.

  14. Generalizing Landauer's principle

    NASA Astrophysics Data System (ADS)

    Maroney, O. J. E.

    2009-03-01

    In a recent paper [Stud. Hist. Philos. Mod. Phys. 36, 355 (2005)] it is argued that to properly understand the thermodynamics of Landauer’s principle it is necessary to extend the concept of logical operations to include indeterministic operations. Here we examine the thermodynamics of such operations in more detail, extending the work of Landauer to include indeterministic operations and to include logical states with variable entropies, temperatures, and mean energies. We derive the most general statement of Landauer’s principle and prove its universality, extending considerably the validity of previous proofs. This confirms conjectures made that all logical operations may, in principle, be performed in a thermodynamically reversible fashion, although logically irreversible operations would require special, practically rather difficult, conditions to do so. We demonstrate a physical process that can perform any computation without work requirements or heat exchange with the environment. Many widespread statements of Landauer’s principle are shown to be special cases of our generalized principle.
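For context, the classical bound that this paper generalizes is easy to compute: erasing one bit at temperature T dissipates at least k_B·T·ln 2 of heat, and resetting a biased bit costs proportionally less. This is a standard textbook calculation, not code from the paper:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # room temperature, K

# Landauer's bound for erasing one fully random bit.
landauer_bound_joules = K_B * T * math.log(2)

def shannon_entropy_bits(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Generalized case: resetting a biased bit with P(0) = 0.9 erases only
# H(0.9) ~ 0.469 bits, so the minimal heat is proportionally smaller.
h = shannon_entropy_bits([0.9, 0.1])
min_heat_biased = h * K_B * T * math.log(2)
```

The generalized principle in the abstract extends this accounting further, to indeterministic operations and logical states with variable entropies and temperatures.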

  15. Generalized laws of thermodynamics in the presence of correlations.

    PubMed

    Bera, Manabendra N; Riera, Arnau; Lewenstein, Maciej; Winter, Andreas

    2017-12-19

    The laws of thermodynamics, despite their wide range of applicability, are known to break down when systems are correlated with their environments. Here we generalize thermodynamics to physical scenarios which allow presence of correlations, including those where strong correlations are present. We exploit the connection between information and physics, and introduce a consistent redefinition of heat dissipation by systematically accounting for the information flow from system to bath in terms of the conditional entropy. As a consequence, the formula for the Helmholtz free energy is accordingly modified. Such a remedy not only fixes the apparent violations of Landauer's erasure principle and the second law due to anomalous heat flows, but also leads to a generally valid reformulation of the laws of thermodynamics. In this information-theoretic approach, correlations between system and environment store work potential. Thus, in this view, the apparent anomalous heat flows are the refrigeration processes driven by such potentials.

  16. Hierarchical Clustering on the Basis of Inter-Job Similarity as a Tool in Validity Generalization

    ERIC Educational Resources Information Center

    Mobley, William H.; Ramsay, Robert S.

    1973-01-01

    The present research was stimulated by three related problems frequently faced in validation research: viable procedures for combining similar jobs in order to assess the validity of various predictors, for assessing groups of jobs represented in previous validity studies, and for assessing the applicability of validity findings between units.…

  17. Brunn: an open source laboratory information system for microplates with a graphical plate layout design process.

    PubMed

    Alvarsson, Jonathan; Andersson, Claes; Spjuth, Ola; Larsson, Rolf; Wikberg, Jarl E S

    2011-05-20

Compound profiling and drug screening generate large amounts of data and are generally based on microplate assays. Current information systems for handling this are mainly commercial, closed source, expensive, and heavyweight, so there is a need for a flexible, lightweight, open system for handling plate design, data validation and data preparation. A Bioclipse plugin consisting of a client part and a relational database was constructed. A multiple-step plate layout point-and-click interface was implemented inside Bioclipse. The system contains a data validation step, where outliers can be removed, and finally a plate report with all relevant calculated data, including dose-response curves. Brunn is capable of handling the data from microplate assays. It can create dose-response curves and calculate IC50 values. Using a system of this sort facilitates work in the laboratory. Being able to reuse already constructed plates and plate layouts, by starting out from an earlier step in the plate layout design process, saves time and cuts down on error sources.
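Dose-response fitting of the kind Brunn performs is conventionally based on a four-parameter logistic (Hill) model, in which the IC50 is the curve's midpoint. A minimal sketch with invented parameter values (this is the generic model, not Brunn's actual implementation):

```python
def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response model.

    Returns the response at concentration `conc`; `bottom` and `top`
    are the lower and upper plateaus, `ic50` the midpoint
    concentration, and `slope` the Hill coefficient.
    """
    return bottom + (top - bottom) / (1 + (conc / ic50) ** slope)

# At conc == ic50 the response is exactly halfway between top and
# bottom, which is what "IC50" means operationally.
resp_at_ic50 = hill(2.0, bottom=0.0, top=100.0, ic50=2.0, slope=1.2)

# Far above the IC50 the response approaches the bottom plateau.
resp_high = hill(1e6, bottom=0.0, top=100.0, ic50=2.0, slope=1.2)
```

In practice a tool fits the four parameters to the plate's measured responses (e.g. by least squares) and reports the fitted `ic50`.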

  18. WHO Expert Committee on Specifications for Pharmaceutical Preparations. Forty-ninth report.

    PubMed

    2015-01-01

    The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use. Revised procedure for the development of monographs and other texts for The International Pharmacopoeia; Revised updating mechanism for the section on radiopharmaceuticals in The International Pharmacopoeia; Revision of the supplementary guidelines on good manufacturing practices: validation, Appendix 7: non-sterile process validation; General guidance for inspectors on hold-time studies; 16 technical supplements to Model guidance for the storage and transport of time- and temperature-sensitive pharmaceutical products; Recommendations for quality requirements when plant-derived artemisinin is used as a starting material in the production of antimalarial active pharmaceutical ingredients; Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability: revision; Guidance on the selection of comparator pharmaceutical products for equivalence assessment of interchangeable multisource (generic) products: revision; and Good review practices: guidelines for national and regional regulatory authorities.

  19. Validation of gamma irradiator controls for quality and regulatory compliance

    NASA Astrophysics Data System (ADS)

    Harding, Rorry B.; Pinteric, Francis J. A.

    1995-09-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.

  20. Mobile applications in oncology: is it possible for patients and healthcare professionals to easily identify relevant tools?

    PubMed

    Brouard, Benoit; Bardo, Pascale; Bonnet, Clément; Mounier, Nicolas; Vignot, Marina; Vignot, Stéphane

    2016-11-01

Mobile applications represent promising tools in management of chronic diseases, both for patients and healthcare professionals, and especially in oncology. Among the large number of mobile health (mhealth) applications available in mobile stores, it could be difficult for users to identify the most relevant ones. This study evaluated the business model and the scientific validation for mobile applications related to oncology. A systematic review was performed over the two major marketplaces. Purpose, scientific validation, and source of funding were evaluated according to the description of applications in stores. Results were stratified according to targeted audience (general population/patients/healthcare professionals). Five hundred and thirty-nine applications related to oncology were identified: 46.8% dedicated to healthcare professionals, 31.5% to general population, and 21.7% to patients. A lack of information about healthcare professionals' involvement in the development process was noted since only 36.5% of applications mentioned an obvious scientific validation. Most apps were free (72.2%) and without explicit support by industry (94.2%). There is a need to enforce independent review of mhealth applications in oncology. The economic model could be questioned and the source of funding should be clarified. Meanwhile, patients and healthcare professionals should remain cautious about applications' contents. Key messages: A systematic review was performed to describe the mobile applications related to oncology and it revealed a lack of information on scientific validation and funding. Independent scientific review and the reporting of conflicts of interest should be encouraged. Users, and all health professionals, should be aware that health applications, whatever the quality of their content, do not necessarily undergo such review.

  1. Validity and reliability of the South African health promoting schools monitoring questionnaire

    PubMed Central

    Struthers, Patricia; de Koker, Petra; Lerebo, Wondwossen; Blignaut, Renette J.

    2017-01-01

Health promoting schools, as conceptualised by the World Health Organisation, have been developed in many countries to facilitate the health-education link. In 1994, the concept of health promoting schools was introduced in South Africa. In the process of becoming a health promoting school, it is important for schools to monitor and evaluate changes and developments taking place. The Health Promoting Schools (HPS) Monitoring Questionnaire was developed to obtain opinions of students about their school as a health promoting school. It comprises 138 questions in seven sections: socio-demographic information; General health promotion programmes; health related Skills and knowledge; Policies; Environment; Community-school links; and support Services. This paper reports on the reliability and face validity of the HPS Monitoring Questionnaire. Seven experts reviewed the questionnaire and agreed that it has satisfactory face validity. A test-retest reliability study was conducted with 83 students in three high schools in Cape Town, South Africa. The kappa-coefficients demonstrate mostly fair (κ-scores between 0.21 and 0.4) to moderate (κ-scores between 0.41 and 0.6) agreement between test-retest General and Environment items; poor (κ-scores up to 0.2) agreement between Skills and Community test-retest items, fair agreement between Policies items, and for most of the questions focussing on Services a fair agreement was found. The study is a first effort at providing a tool that may be used to monitor and evaluate students’ opinions about changes in health promoting schools. Although the HPS Monitoring Questionnaire has face validity, the results of the reliability testing were inconclusive. Further research is warranted. PMID:27694227
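The test-retest agreement statistic reported here, Cohen's kappa, can be computed directly from paired responses. The toy data below are invented and yield κ = 0.6, the top of the "moderate" band cited in the abstract:

```python
from collections import Counter

def cohens_kappa(test, retest):
    """Cohen's kappa for test-retest agreement on categorical items:
    observed agreement corrected for chance agreement."""
    assert len(test) == len(retest)
    n = len(test)
    # Observed agreement: proportion of identical answers.
    p_o = sum(t == r for t, r in zip(test, retest)) / n
    # Expected agreement if the two administrations were independent.
    c1, c2 = Counter(test), Counter(retest)
    p_e = sum(c1[c] * c2[c] for c in set(test) | set(retest)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Ten students answering the same yes/no item twice (invented data).
test   = ["y", "y", "n", "y", "n", "y", "n", "n", "y", "n"]
retest = ["y", "n", "n", "y", "n", "y", "y", "n", "y", "n"]
kappa = cohens_kappa(test, retest)
```

Here 8 of 10 answers agree (p_o = 0.8) against an expected 0.5 by chance, giving κ = 0.6, i.e. "moderate" agreement on the scale used in the paper.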

  2. Validity and reliability of the South African health promoting schools monitoring questionnaire.

    PubMed

    Struthers, Patricia; Wegner, Lisa; de Koker, Petra; Lerebo, Wondwossen; Blignaut, Renette J

    2017-04-01

    Health promoting schools, as conceptualised by the World Health Organisation, have been developed in many countries to facilitate the health-education link. In 1994, the concept of health promoting schools was introduced in South Africa. In the process of becoming a health promoting school, it is important for schools to monitor and evaluate changes and developments taking place. The Health Promoting Schools (HPS) Monitoring Questionnaire was developed to obtain opinions of students about their school as a health promoting school. It comprises 138 questions in seven sections: socio-demographic information; General health promotion programmes; health related Skills and knowledge; Policies; Environment; Community-school links; and support Services. This paper reports on the reliability and face validity of the HPS Monitoring Questionnaire. Seven experts reviewed the questionnaire and agreed that it has satisfactory face validity. A test-retest reliability study was conducted with 83 students in three high schools in Cape Town, South Africa. The kappa-coefficients demonstrate mostly fair (κ-scores between 0.21 and 0.4) to moderate (κ-scores between 0.41 and 0.6) agreement between test-retest General and Environment items; poor (κ-scores up to 0.2) agreement between Skills and Community test-retest items, fair agreement between Policies items, and for most of the questions focussing on Services a fair agreement was found. The study is a first effort at providing a tool that may be used to monitor and evaluate students' opinions about changes in health promoting schools. Although the HPS Monitoring Questionnaire has face validity, the results of the reliability testing were inconclusive. Further research is warranted. © The Author 2016. Published by Oxford University Press.

  3. Life stress as a determinant of emotional well-being: development and validation of a Spanish-Language Checklist of Stressful Life Events

    PubMed Central

    Morote Rios, Roxanna; Hjemdal, Odin; Martinez Uribe, Patricia; Corveleyn, Jozef

    2014-01-01

Objectives: To develop a screening instrument for investigating the prevalence and impact of stressful life events in Spanish-speaking Peruvian adults. Background: Researchers have demonstrated the causal connection between life stress and psychosocial and physical complaints. The need for contextually relevant and updated instruments has also been addressed. Methods: A sequential exploratory design combined qualitative and quantitative information from two studies: first, the content validity of 20 severe stressors (N = 46); then, a criterion-related validity process with affective symptoms as criteria (Hopkins Symptom Checklist (HSCL-25), N = 844). Results: 93% of the participants reported one to eight life events (X = 3.93, Mdn = 3, SD = 7.77). Events increase significantly until 60 years of age (Mdn = 6). Adults born in inland regions (Mdn = 4) or with secondary or technical education (Mdn = 5) reported significantly more stressors than participants born in Lima or with higher education. There are no differences by gender. Four-step hierarchical models showed that life stress is the best unique predictor (β) of HSCL anxiety, depression and general distress (p < .001). Age and gender are significant for the three criteria (p < .01, p < .001); lower education and unemployment are significant unique predictors of general distress and depression (p < .01; p < .05). Previously, the two-factor structure of the HSCL-25 was verified (Satorra–Bentler chi-square, root-mean-square error of approximation = 0.059; standardized root-mean-square residual = 0.055). Conclusion: The Spanish-Language Checklist of Stressful Life Events is a valid instrument to identify adults with significant levels of life stress and possible risk for mental and physical health (clinical utility). PMID:25750790

  4. Life stress as a determinant of emotional well-being: development and validation of a Spanish-Language Checklist of Stressful Life Events.

    PubMed

    Morote Rios, Roxanna; Hjemdal, Odin; Martinez Uribe, Patricia; Corveleyn, Jozef

    2014-01-01

Objectives: To develop a screening instrument for investigating the prevalence and impact of stressful life events in Spanish-speaking Peruvian adults. Background: Researchers have demonstrated the causal connection between life stress and psychosocial and physical complaints. The need for contextually relevant and updated instruments has also been addressed. Methods: A sequential exploratory design combined qualitative and quantitative information from two studies: first, the content validity of 20 severe stressors (N = 46); then, a criterion-related validity process with affective symptoms as criteria (Hopkins Symptom Checklist (HSCL-25), N = 844). Results: 93% of the participants reported one to eight life events (X = 3.93, Mdn = 3, SD = 7.77). Events increase significantly until 60 years of age (Mdn = 6). Adults born in inland regions (Mdn = 4) or with secondary or technical education (Mdn = 5) reported significantly more stressors than participants born in Lima or with higher education. There are no differences by gender. Four-step hierarchical models showed that life stress is the best unique predictor (β) of HSCL anxiety, depression and general distress (p < .001). Age and gender are significant for the three criteria (p < .01, p < .001); lower education and unemployment are significant unique predictors of general distress and depression (p < .01; p < .05). Previously, the two-factor structure of the HSCL-25 was verified (Satorra-Bentler chi-square, root-mean-square error of approximation = 0.059; standardized root-mean-square residual = 0.055). Conclusion: The Spanish-Language Checklist of Stressful Life Events is a valid instrument to identify adults with significant levels of life stress and possible risk for mental and physical health (clinical utility).

  5. Understanding visualization: a formal approach using category theory and semiotics.

    PubMed

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  6. Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2002-05-01

    We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.
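One of the models discussed, the boundary-driven one-dimensional simple exclusion process, is straightforward to simulate. The sketch below uses a simplified reservoir coupling and invented parameters, purely to illustrate the stationary non-equilibrium density profile between two reservoirs of unequal density:

```python
import random

random.seed(0)

# Boundary-driven 1D simple exclusion process: particles hop
# symmetrically onto empty neighbour sites only; a left reservoir
# injects particles and a right reservoir removes them (a simplified
# coupling; lattice size and densities are illustrative choices).
L = 50
RHO_LEFT, RHO_RIGHT = 0.8, 0.2
site = [0] * L

def sweep(site):
    n = len(site)
    for _ in range(n + 1):
        i = random.randrange(-1, n)
        if i == -1:
            # left reservoir attempts an injection onto site 0
            if site[0] == 0 and random.random() < RHO_LEFT:
                site[0] = 1
        elif i == n - 1:
            # right reservoir attempts a removal from the last site
            if site[-1] == 1 and random.random() < 1 - RHO_RIGHT:
                site[-1] = 0
        else:
            # exclusion rule: the swap is a hop only when exactly one
            # of the two neighbouring sites is occupied
            if site[i] != site[i + 1]:
                site[i], site[i + 1] = site[i + 1], site[i]

for _ in range(1000):            # relax towards the stationary state
    sweep(site)

MEASURE = 1000                   # time-average the occupation numbers
profile = [0.0] * L
for _ in range(MEASURE):
    sweep(site)
    for j, occupied in enumerate(site):
        profile[j] += occupied / MEASURE

left_half = sum(profile[:L // 2]) / (L // 2)
right_half = sum(profile[L // 2:]) / (L // 2)
```

In the stationary state the time-averaged density interpolates between the two reservoir densities, so the left half of the lattice stays denser than the right half: a steady particle current flows through the system, which is exactly the non-equilibrium character the fluctuation theory addresses.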

  7. The negotiation of the sick role: general practitioners’ classification of patients with medically unexplained symptoms

    PubMed Central

    Mik-Meyer, Nanna; Obling, Anne Roelsgaard

    2012-01-01

    In encounters between general practitioners (GPs) and patients with medically unexplained symptoms (MUS), the negotiation of the sick role is a social process. In this process, GPs not only use traditional biomedical diagnostic tools but also rely on their own opinions and evaluations of a patient’s particular circumstances in deciding whether that patient is legitimately sick. The doctor is thus a gatekeeper of legitimacy. This article presents results from a qualitative interview study conducted in Denmark with GPs concerning their approach to patients with MUS. We employ a symbolic interaction approach that pays special attention to the external validation of the sick role, making GPs’ accounts of such patients particularly relevant. One of the article’s main findings is that GPs’ criteria for judging the legitimacy of claims by those patients that present with MUS are influenced by the extent to which GPs are able to constitute these patients as people with social problems and problematic personality traits. PMID:22384857

  8. Dynamics of convulsive seizure termination and postictal generalized EEG suppression

    PubMed Central

    Bauer, Prisca R.; Thijs, Roland D.; Lamberts, Robert J.; Velis, Demetrios N.; Visser, Gerhard H.; Tolner, Else A.; Sander, Josemir W.; Lopes da Silva, Fernando H.; Kalitzin, Stiliyan N.

    2017-01-01

It is not fully understood how seizures terminate and why some seizures are followed by a period of complete brain activity suppression, postictal generalized EEG suppression. This is clinically relevant as there is a potential association between postictal generalized EEG suppression, cardiorespiratory arrest and sudden death following a seizure. We combined human encephalographic seizure data with data of a computational model of seizures to elucidate the neuronal network dynamics underlying seizure termination and the postictal generalized EEG suppression state. A multi-unit computational neural mass model of epileptic seizure termination and postictal recovery was developed. The model provided three predictions that were validated in EEG recordings of 48 convulsive seizures from 48 subjects with refractory focal epilepsy (20 females, age range 15–61 years). The duration of ictal and postictal generalized EEG suppression periods in human EEG followed a gamma probability distribution indicative of a deterministic process (shape parameter 2.6 and 1.5, respectively) as predicted by the model. In the model and in humans, the time between two clonic bursts increased exponentially from the start of the clonic phase of the seizure. The terminal interclonic interval, calculated using the projected terminal value of the log-linear fit of the clonic frequency decrease, was correlated with the presence and duration of postictal suppression. The projected terminal interclonic interval explained 41% of the variation in postictal generalized EEG suppression duration (P < 0.02). Conversely, postictal generalized EEG suppression duration explained 34% of the variation in the last interclonic interval duration. 
Our findings suggest that postictal generalized EEG suppression is a separate brain state and that seizure termination is a plastic and autonomous process, reflected in increased duration of interclonic intervals that determine the duration of postictal generalized EEG suppression. PMID:28073789
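The gamma-distribution claim in this abstract can be checked on duration data with a method-of-moments estimate of the shape parameter, k = mean²/variance. The sketch below uses synthetic durations drawn with the paper's ictal shape parameter; the scale value is chosen arbitrarily:

```python
import random
import statistics

random.seed(42)

def gamma_shape_mom(durations):
    """Method-of-moments estimate of the gamma shape parameter:
    k = mean^2 / variance."""
    m = statistics.fmean(durations)
    return m * m / statistics.pvariance(durations)

# Synthetic "durations" from a gamma distribution with shape 2.6
# (the paper's ictal value) and an arbitrary scale of 10.
samples = [random.gammavariate(2.6, 10.0) for _ in range(5000)]
k_hat = gamma_shape_mom(samples)   # should land near 2.6
```

A shape parameter well above 1, as recovered here, is what distinguishes a peaked, deterministic-looking duration distribution from the memoryless exponential case (shape 1).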

  9. Look--but also listen! ReQuest: A new dimension-oriented GERD symptom scale.

    PubMed

    Bardhan, Karna Dev

    2004-03-01

The symptom spectrum of gastroesophageal reflux disease (GERD) is much wider than is commonly believed, and in about half the patients endoscopic examination is negative. The role of endoscopy to assess response to treatment is therefore much reduced in GERD. Assessment of symptoms is becoming increasingly important, so different outcome measures are required. The Reflux Questionnaire ReQuest was thus designed as a brief, effective and robust method of tracking and quantifying GERD symptoms during treatment. It comprises seven dimensions: general well-being, acid complaints, upper abdominal/stomach complaints, lower abdominal/digestive complaints, nausea, sleep disturbances and other complaints. In the short version of ReQuest the symptom burden of each dimension is measured by its frequency and intensity (except general well-being, for which only the intensity was determined). The long version also includes 67 symptom descriptions that constitute the dimensions (except general well-being). The rigorous validation process included clinical trial evaluation and statistical assessment of the findings. Important measures of the instrument, such as internal consistency, test/retest reliability, construct validity and the responsiveness to changes during treatment, among others, all fulfilled or exceeded requirements, thereby demonstrating the accuracy of the instrument. ReQuest meets the criteria set by regulatory authorities and serves as the primary outcome measure for symptom assessment in future clinical trials of current and new treatments. (c) 2004 Prous Science

  10. Generalized Spencer-Lewis equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippone, W.L.

    The Spencer-Lewis equation, which describes electron transport in homogeneous media when continuous slowing down theory is valid, is derived from the Boltzmann equation. Also derived is a time-dependent generalized Spencer-Lewis equation valid for inhomogeneous media. An independent verification of this last equation is obtained for the one-dimensional case using particle balance considerations.

  11. The Benchmarking Capacity of a General Outcome Measure of Academic Language in Science and Social Studies

    ERIC Educational Resources Information Center

    Mooney, Paul; Lastrapes, Renée E.

    2016-01-01

    The amount of research evaluating the technical merits of general outcome measures of science and social studies achievement is growing. This study targeted criterion validity for critical content monitoring. Questions addressed the concurrent criterion validity of alternate presentation formats of critical content monitoring and the measure's…

  12. Assessing Character Strengths in Youth with Intellectual Disability: Reliability and Factorial Validity of the VIA-Youth

    ERIC Educational Resources Information Center

    Shogren, Karrie A.; Shaw, Leslie A.; Raley, Sheida K.; Wehmeyer, Michael L.; Niemiec, Ryan M.; Adkins, Megan

    2018-01-01

    This article reports the results of an examination of the endorsement, reliability, and factorial validity of the VIA--Youth and assessment of character strengths and virtues developed for the general population in youth with and without intellectual disability. Findings suggest that, generally, youth with intellectual disability endorsed…

  13. Stages of Psychometric Measure Development: The Example of the Generalized Expertise Measure (GEM)

    ERIC Educational Resources Information Center

    Germain, Marie-Line

    2006-01-01

    This paper chronicles the steps, methods, and presents hypothetical results of quantitative and qualitative studies being conducted to develop a Generalized Expertise Measure (GEM). Per Hinkin (1995), the stages of scale development are domain and item generation, content expert validation, and pilot test. Content/face validity and internal…

  14. System diagnostic builder

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Burke, Roger

    1992-01-01

    The System Diagnostic Builder (SDB) is an automated software verification and validation tool using state-of-the-art Artificial Intelligence (AI) technologies. The SDB is used extensively by project BURKE at NASA-JSC as one component of a software re-engineering toolkit. The SDB is applicable to any government or commercial organization which performs verification and validation tasks. The SDB has an X-window interface, which allows the user to 'train' a set of rules for use in a rule-based evaluator. The interface has a window that allows the user to plot up to five data parameters (attributes) at a time. Using these plots and a mouse, the user can identify and classify a particular behavior of the subject software. Once the user has identified the general behavior patterns of the software, he can train a set of rules to represent his knowledge of that behavior. The training process builds rules and fuzzy sets to use in the evaluator. The fuzzy sets classify those data points not clearly identified as a particular classification. Once an initial set of rules is trained, each additional data set given to the SDB will be used by a machine learning mechanism to refine the rules and fuzzy sets. This is a passive process and, therefore, it does not require any additional operator time. The evaluation component of the SDB can be used to validate a single software system using some number of different data sets, such as a simulator. Moreover, it can be used to validate software systems which have been re-engineered from one language and design methodology to a totally new implementation.
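
    The crisp-rules-plus-fuzzy-sets evaluation the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the actual SDB code: the rule shapes, labels, and thresholds are all hypothetical.

```python
# Hypothetical sketch of an SDB-style evaluation step: crisp rules classify
# clear cases first; triangular fuzzy sets resolve borderline data points.

def triangular(x, lo, peak, hi):
    """Membership of x in a triangular fuzzy set defined by (lo, peak, hi)."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def classify(value, crisp_rules, fuzzy_sets):
    """Apply crisp rules; fall back to the highest fuzzy membership."""
    for label, predicate in crisp_rules:
        if predicate(value):
            return label
    # Not clearly identified: pick the fuzzy set with maximum membership.
    memberships = {label: triangular(value, *params)
                   for label, params in fuzzy_sets.items()}
    return max(memberships, key=memberships.get)

# Invented behavior classes for a monitored software parameter.
crisp = [("nominal", lambda v: 0.0 <= v <= 1.0),
         ("fault", lambda v: v >= 5.0)]
fuzzy = {"nominal": (0.5, 1.0, 3.0), "fault": (2.0, 5.0, 6.0)}

print(classify(0.4, crisp, fuzzy))   # matched by a crisp rule
print(classify(2.8, crisp, fuzzy))   # resolved by fuzzy membership
```

In the SDB workflow the fuzzy sets would additionally be refined by each new data set; the sketch only shows the classification side.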

  15. A perspective on medical school admission research and practice over the last 25 years.

    PubMed

    Kreiter, Clarence D; Axelson, Rick D

    2013-01-01

    Over the last 25 years a large body of research has investigated how best to select applicants to study medicine. Although these studies have inspired little actual change in admission practice, the implications of this research are substantial. Five areas of inquiry are discussed: (1) the interview and related techniques, (2) admission tests, (3) other measures of personal competencies, (4) the decision process, and (5) defining and measuring the criterion. In each of these areas we summarize consequential developments and discuss their implications for improving practice. (1) The traditional interview has been shown to lack both reliability and validity. Alternatives have been developed that display promising measurement characteristics. (2) Admission test scores have been shown to predict academic and clinical performance and are generally the most useful measures obtained about an applicant. (3) Due to the high-stakes nature of the admission decision, it is difficult to support a logical validity argument for the use of personality tests. Although standardized letters of recommendation appear to offer some promise, more research is needed. (4) The methods used to make the selection decision should be responsive to validity research on how best to utilize applicant information. (5) Few resources have been invested in obtaining valid criterion measures. Future research might profitably focus on a composite score as a method for generating a measure of a physician's career success. There are a number of social and organizational factors that resist evidence-based change. However, research over the last 25 years does present important findings that could be used to improve the admission process.

  16. Development and Validation of the Parents’ Beliefs about Children’s Emotions Questionnaire

    PubMed Central

    Halberstadt, Amy G.; Dunsmore, Julie C.; Bryant, Alfred J.; Parker, Alison E.; Beale, Karen S.; Thompson, Julie A.

    2014-01-01

    Parents’ beliefs about children’s emotions comprise an important aspect of parental emotion socialization and may relate to children’s mental health and well-being. Thus, the goal of this study was to develop a multi-faceted questionnaire assessing parents’ beliefs about children’s emotions (PBACE). Central to our work was inclusion of multiple ethnic groups throughout the questionnaire development process, from initial item creation through assessment of measurement invariance and validity. Participants included 1080 African American, European American, and Lumbee American Indian parents of 4- to 10-year old children who completed the initial item pool for the PBACE. Exploratory factor analyses were conducted with 720 of these parents to identify factor structure and reduce items. Confirmatory factor analysis was then conducted with a holdout sample of 360 parents to evaluate model fit and assess measurement invariance across ethnicity and across parent gender. Finally, validity of the PBACE scales was assessed via correlations with measures of parental emotional expressivity and reactions to children’s emotions. The PBACE is comprised of 33 items in seven scales. All scales generally demonstrated measurement invariance across ethnic groups and parent gender, thereby allowing interpretations of differences across these ethnic groups and between mothers and fathers as true differences rather than by-products of measurement variance. Initial evidence of discriminant and construct validity for the scale interpretations was also obtained. Results suggest that the PBACE will be useful for researchers interested in emotion-related socialization processes in diverse ethnic groups and their impact on children’s socioemotional outcomes and well-being. PMID:23914957

  17. Development and validation of the Parents' Beliefs About Children's Emotions Questionnaire.

    PubMed

    Halberstadt, Amy G; Dunsmore, Julie C; Bryant, Alfred; Parker, Alison E; Beale, Karen S; Thompson, Julie A

    2013-12-01

    Parents' beliefs about children's emotions comprise an important aspect of parental emotion socialization and may relate to children's mental health and well-being. Thus, the goal of this study was to develop the multifaceted Parents' Beliefs About Children's Emotions (PBACE) questionnaire. Central to our work was inclusion of multiple ethnic groups throughout the questionnaire development process, from initial item creation through assessment of measurement invariance and validity. Participants included 1,080 African American, European American, and Lumbee American Indian parents of 4- to 10-year-old children who completed the initial item pool for the PBACE. Exploratory factor analyses were conducted with 720 of these parents to identify factor structure and reduce items. Confirmatory factor analysis was then conducted with a holdout sample of 360 parents to evaluate model fit and assess measurement invariance across ethnicity and across parent gender. Finally, validity of the PBACE scales was assessed via correlations with measures of parental emotional expressivity and reactions to children's emotions. The PBACE is composed of 33 items in 7 scales. All scales generally demonstrated measurement invariance across ethnic groups and parent gender, thereby allowing interpretations of differences across these ethnic groups and between mothers and fathers as true differences rather than by-products of measurement variance. Initial evidence of discriminant and construct validity for the scale interpretations was also obtained. Results suggest that the PBACE will be useful for researchers interested in emotion-related socialization processes in diverse ethnic groups and their impact on children's socioemotional outcomes and well-being. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
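
    The exploratory/holdout split described above can be illustrated with synthetic data. The sketch below mirrors the study's sample sizes (720 exploratory, 360 holdout) but uses the Kaiser eigenvalue-greater-than-one criterion as a stand-in for the study's actual factoring and CFA procedures, which are not specified at this level of detail; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_parents, n_items, n_factors = 1080, 33, 7

# Synthetic item responses driven by 7 latent factors (as in the final PBACE).
loadings = rng.normal(size=(n_factors, n_items))
latent = rng.normal(size=(n_parents, n_factors))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_parents, n_items))

# Split mirroring the study: 720 parents for exploration, 360 held out.
explore, holdout = responses[:720], responses[720:]

# Kaiser criterion on the exploratory sample: retain factors whose
# correlation-matrix eigenvalues exceed 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(explore, rowvar=False))
retained = int((eigvals > 1.0).sum())
print("factors retained:", retained)
```

The holdout portion would then go to a confirmatory factor analysis in a dedicated SEM package, which is beyond this sketch.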

  18. Enhanced TCAS 2/CDTI traffic Sensor digital simulation model and program description

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1984-01-01

    Digital simulation models of enhanced TCAS 2/CDTI traffic sensors are developed, based on actual or projected operational and performance characteristics. Two enhanced Traffic (or Threat) Alert and Collision Avoidance Systems are considered. A digital simulation program is developed in FORTRAN. The program contains an executive with a semireal time batch processing capability. The simulation program can be interfaced with other modules with a minimum requirement. Both the traffic sensor and CAS logic modules are validated by means of extensive simulation runs. Selected validation cases are discussed in detail, and capabilities and limitations of the actual and simulated systems are noted. The TCAS systems are not specifically intended for Cockpit Display of Traffic Information (CDTI) applications. These systems are sufficiently general to allow implementation of CDTI functions within the real systems' constraints.

  19. TIE: An Ability Test of Emotional Intelligence

    PubMed Central

    Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S.

    2014-01-01

    The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for the processing of emotion-relevant information. Participants are provided with descriptions of emotional problems and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgments of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, the results of the TIE shared about 10% of common variance with a general intelligence test and were independent of major personality dimensions. PMID:25072656
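
    Expert-judgment scoring of the kind described is often implemented as proportion-consensus scoring, in which a response earns credit equal to the fraction of the expert panel that endorsed the same option. The sketch below shows that idea; the TIE's exact scoring rule may differ, and the panel data are hypothetical.

```python
# Proportion-consensus scoring (a common implementation of expert-judgment
# scoring; illustrative only, not necessarily the TIE's exact rule).
from collections import Counter

def consensus_weights(expert_choices):
    """Map each answer option to the fraction of experts endorsing it."""
    counts = Counter(expert_choices)
    total = len(expert_choices)
    return {option: n / total for option, n in counts.items()}

def score_response(response, weights):
    """A response earns the weight of its option; unendorsed options earn 0."""
    return weights.get(response, 0.0)

# Hypothetical item: 10 experts judged which emotion fits a scenario.
experts = ["anger"] * 6 + ["frustration"] * 3 + ["sadness"] * 1
weights = consensus_weights(experts)

print(score_response("anger", weights))      # majority answer
print(score_response("surprise", weights))   # option no expert chose
```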

  20. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as application of risk-based decision making, based in sound science to provide a foundation for the validation of column-based chromatography processes and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation, and/or minor adjustment to these parameters at manufacturing scale during production of conformance batches and maintenance of the validated state throughout the product's life cycle.

  1. Validity Issues in Clinical Assessment.

    ERIC Educational Resources Information Center

    Foster, Sharon L.; Cone, John D.

    1995-01-01

    Validation issues that arise with measures of constructs and behavior are addressed with reference to general reasons for using assessment procedures in clinical psychology. A distinction is made between the representational phase of validity assessment and the elaborative validity phase in which the meaning and utility of scores are examined.…

  2. Machine learning for the New York City power grid.

    PubMed

    Rudin, Cynthia; Waltz, David; Anderson, Roger N; Boulanger, Albert; Salleb-Aouissi, Ansaf; Chow, Maggie; Dutta, Haimonti; Gross, Philip N; Huang, Bert; Ierome, Steve; Isaac, Delfina F; Kressner, Arthur; Passonneau, Rebecca J; Radeva, Axinia; Wu, Leon

    2012-02-01

    Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce 1) feeder failure rankings, 2) cable, joint, terminator, and transformer rankings, 3) feeder Mean Time Between Failure (MTBF) estimates, and 4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time; incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF); and includes an evaluation of results via cross-validation and blind testing. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City’s electrical grid.
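
    The cross-validated evaluation of a failure-ranking model can be sketched as follows. This is a generic illustration with synthetic data and a plain gradient-descent logistic model, not the paper's actual algorithms or features; the AUC computed on each held-out fold plays the role of the paper's cross-validation check of ranking quality.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for historical grid data: each row is a component
# (e.g. a feeder) with numeric features; label 1 = the component failed.
n, d = 400, 5
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.0, 0.8, 0.0])
y = (X @ true_w + rng.normal(scale=1.0, size=n) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (illustrative only)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def auc(scores, labels):
    """Probability a random failed component outranks a random healthy one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# 4-fold cross-validation of the ranked list.
folds = np.array_split(rng.permutation(n), 4)
aucs = []
for k in range(4):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(4) if j != k])
    w = fit_logistic(X[train_idx], y[train_idx])
    aucs.append(auc(X[test_idx] @ w, y[test_idx]))
print("mean ranking AUC:", round(float(np.mean(aucs)), 3))
```

A blind test, as in the paper, would additionally hold out a final data set never touched during model development.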

  3. Measuring attitudes towards the dying process: A systematic review of tools.

    PubMed

    Groebe, Bernadette; Strupp, Julia; Eisenmann, Yvonne; Schmidt, Holger; Schlomann, Anna; Rietz, Christian; Voltz, Raymond

    2018-04-01

    At the end of life, anxious attitudes concerning the dying process are common in patients in Palliative Care. Measurement tools can identify vulnerabilities, resources and the need for subsequent treatment to relieve suffering and support well-being. To systematically review available tools measuring attitudes towards dying, their operationalization, the method of measurement and the methodological quality, including generalizability to different contexts. Systematic review according to the PRISMA Statement; methodological quality of tools assessed by standardized review criteria. MEDLINE, PsycINFO, PsyndexTests and the Health and Psychosocial Instruments were searched from their inception to April 2017. A total of 94 identified studies reported the development and/or validation of 44 tools. Of these, 37 were questionnaires and 7 were alternative measurement methods (e.g. projective measures). In 34 of the 37 questionnaires, the emotional evaluation (e.g. anxiety) towards dying is measured. Dying is operationalized in general items (n = 20), in several specific aspects of dying (n = 34) and as dying of others (n = 14). Methodological quality of tools was reported inconsistently. Nine tools reported good internal consistency. Only 4 of the 37 tools were validated in a clinical sample (e.g. terminal cancer; Huntington disease), indicating questionable generalizability to clinical contexts for most tools. Many tools exist to measure attitudes towards the dying process using different endpoints. This overview can serve as a decision framework on which tool to apply in which contexts. For clinical application, only a few tools were available. Further validation of existing tools and potential alternative methods in various populations is needed.

  4. An integrated assessment instrument: Developing and validating instrument for facilitating critical thinking abilities and science process skills on electrolyte and nonelectrolyte solution matter

    NASA Astrophysics Data System (ADS)

    Astuti, Sri Rejeki Dwi; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    Demands on assessment in the learning process have shifted with policy changes: assessment now emphasizes not only knowledge but also skills and attitudes, yet in practice there are many obstacles to measuring them. This paper describes how an integrated assessment instrument was developed and how its validity, both content and construct, was verified. Instrument development followed McIntire's test development model, and data were acquired at each step of that process. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts and two chemistry teachers) to establish content validity. To establish construct validity, the research involved 376 first-grade students of two Senior High Schools in Bantul Regency. Content validity was analyzed using Aiken's formula, and construct validity was examined by exploratory factor analysis using SPSS ver 16.0. The results show that all constructs in the integrated assessment instrument are valid in terms of both content validity and construct validity. The integrated assessment instrument is therefore suitable for measuring critical thinking abilities and science process skills of senior high school students on electrolyte and nonelectrolyte solution matter.
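
    Aiken's formula, used above for the content-validity analysis, computes V = S / (n(c - 1)), where S sums each rater's score minus the scale's lowest category, n is the number of raters, and c is the number of scale points. A minimal implementation (the ratings below are invented for illustration):

```python
# Aiken's V content-validity coefficient: V = sum(r_i - lo) / (n * (c - 1)).
# Values near 1 indicate strong expert agreement that an item is relevant.

def aikens_v(ratings, lo=1, c=5):
    """Content-validity coefficient in [0, 1] for one item's expert ratings."""
    s = sum(r - lo for r in ratings)
    return s / (len(ratings) * (c - 1))

# Hypothetical example: six expert judges rating one item on a 1-5 scale.
item_ratings = [5, 4, 5, 4, 5, 4]
print(round(aikens_v(item_ratings), 3))
```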

  5. Spanish translation and linguistic validation of the quality of life in neurological disorders (Neuro-QoL) measurement system.

    PubMed

    Correia, H; Pérez, B; Arnold, B; Wong, Alex W K; Lai, J S; Kallen, M; Cella, D

    2015-03-01

    The quality of life in neurological disorders (Neuro-QoL) measurement system is a 470-item compilation of health-related quality of life domains for adults and children with neurological disorders. It was developed and cognitively debriefed in English and Spanish, with general population and clinical samples in the USA. This paper describes the Spanish translation and linguistic validation process. The translation methodology combined forward and back-translations, multiple reviews, and cognitive debriefing with 30 adult and 30 pediatric Spanish-speaking respondents in the USA. The adult Fatigue bank was later also tested in Spain and Argentina. A universal approach to translation was adopted to produce a Spanish version that can be used in various countries. Translators from several countries were involved in the process. Cognitive debriefing results indicated that most of the 470 Spanish items were well understood. Translations were revised as needed where difficulty was reported or where participants' comments revealed misunderstanding of an item's intended meaning. Additional testing of the universal Spanish adult Fatigue item bank in Spain and Argentina confirmed good understanding of the items and that no country-specific word changes were necessary. All the adult and pediatric Neuro-QoL measures have been linguistically validated with Spanish speakers in the USA. Instruments are available for use at www.assessmentcenter.net.

  6. Spanish translation and linguistic validation of the quality of life in neurological disorders (Neuro-QoL) measurement system

    PubMed Central

    Pérez, B.; Arnold, B.; Wong, Alex W. K.; Lai, JS; Kallen, M.; Cella, D.

    2017-01-01

    Introduction The quality of life in neurological disorders (Neuro-QoL) measurement system is a 470-item compilation of health-related quality of life domains for adults and children with neurological disorders. It was developed and cognitively debriefed in English and Spanish, with general population and clinical samples in the USA. This paper describes the Spanish translation and linguistic validation process. Methods The translation methodology combined forward and back-translations, multiple reviews, and cognitive debriefing with 30 adult and 30 pediatric Spanish-speaking respondents in the USA. The adult Fatigue bank was later also tested in Spain and Argentina. A universal approach to translation was adopted to produce a Spanish version that can be used in various countries. Translators from several countries were involved in the process. Results Cognitive debriefing results indicated that most of the 470 Spanish items were well understood. Translations were revised as needed where difficulty was reported or where participants’ comments revealed misunderstanding of an item’s intended meaning. Additional testing of the universal Spanish adult Fatigue item bank in Spain and Argentina confirmed good understanding of the items and that no country-specific word changes were necessary. Conclusion All the adult and pediatric Neuro-QoL measures have been linguistically validated with Spanish speakers in the USA. Instruments are available for use at www.assessmentcenter.net. PMID:25236708

  7. Is comorbidity in the eating disorders related to perceptions of parenting? Criterion validity of the revised Young Parenting Inventory.

    PubMed

    Sheffield, Alexandra; Waller, Glenn; Emanuelli, Francesca; Murray, James

    2006-01-01

    Recent studies support the reliability and validity of the Young Parenting Inventory-Revised (YPI-R) and its use in investigating the role of parenting in the aetiology and maintenance of eating pathology. However, criterion validity has yet to be fully established. To investigate one aspect of criterion validity, this study examines the association between parenting and comorbid problems in the eating disorders (including general psychopathology and impulsivity). The participants were 124 women with eating disorders. They completed the YPI-R and the Brief Symptom Inventory (BSI; a measure of general psychopathology). They were also interviewed about their use of a number of impulsive behaviours. YPI-R scales were significant predictors of one of the nine BSI scales, and distinguished those patients who did or did not use specific impulsive behaviours. The criterion validity of the YPI-R is partially supported with regards to general psychopathology and impulsivity. The findings highlight the specificity of the parenting styles measured by the YPI-R, and the need for further research using this tool.

  8. Validation of biomarkers to predict response to immunotherapy in cancer: Volume I - pre-analytical and analytical validation.

    PubMed

    Masucci, Giuseppe V; Cesano, Alessandra; Hawtin, Rachael; Janetzki, Sylvia; Zhang, Jenny; Kirsch, Ilan; Dobbin, Kevin K; Alvarez, John; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    Immunotherapies have emerged as one of the most promising approaches to treating patients with cancer. Recently, there have been many clinical successes using checkpoint receptor blockade, including T cell inhibitory receptors such as cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death-1 (PD-1). Despite demonstrated successes in a variety of malignancies, responses typically occur only in a minority of patients in any given histology. Additionally, treatment is associated with inflammatory toxicity and high cost. Therefore, determining which patients would derive clinical benefit from immunotherapy is a compelling clinical question. Although numerous candidate biomarkers have been described, there are currently three FDA-approved assays based on PD-1 ligand expression (PD-L1) that have been clinically validated to identify patients who are more likely to benefit from single-agent anti-PD-1/PD-L1 therapy. Because of the complexity of the immune response and tumor biology, it is unlikely that a single biomarker will be sufficient to predict clinical outcomes in response to immune-targeted therapy. Rather, the integration of multiple tumor and immune response parameters, such as protein expression, genomics, and transcriptomics, may be necessary for accurate prediction of clinical benefit. Before a candidate biomarker and/or new technology can be used in a clinical setting, several steps are necessary to demonstrate its clinical validity. Although regulatory guidelines provide general roadmaps for the validation process, their applicability to biomarkers in the cancer immunotherapy field is somewhat limited. Thus, Working Group 1 (WG1) of the Society for Immunotherapy of Cancer (SITC) Immune Biomarkers Task Force convened to address this need. In this two-volume series, we discuss pre-analytical and analytical (Volume I) as well as clinical and regulatory (Volume II) aspects of the validation process as applied to predictive biomarkers for cancer immunotherapy. To illustrate the requirements for validation, we discuss examples of biomarker assays that have shown preliminary evidence of an association with clinical benefit from immunotherapeutic interventions. The scope includes only those assays and technologies that have established a certain level of validation for clinical use (fit-for-purpose). Recommendations to meet challenges and strategies to guide the choice of analytical and clinical validation design for specific assays are also provided.

  9. The Predictive Validity of SAVRY Ratings for Assessing Youth Offenders in Singapore

    PubMed Central

    Chu, Chi Meng; Goh, Mui Leng; Chong, Dominic

    2015-01-01

    Empirical support for the use of the SAVRY has been reported in studies conducted in many Western contexts, but not in a Singaporean context. This study compared the predictive validity of the SAVRY ratings for violent and general recidivism against the Youth Level of Service/Case Management Inventory (YLS/CMI) ratings within the Singaporean context. Using a sample of 165 male young offenders (mean follow-up = 4.54 years), results showed that the SAVRY Total Score and Summary Risk Rating, as well as the YLS/CMI Total Score and Overall Risk Rating, predicted violent and general recidivism. The SAVRY Protective Total Score was only significantly predictive of desistance from general recidivism, and did not show incremental predictive validity for violent and general recidivism over the SAVRY Total Score. Overall, the results suggest that the SAVRY is suited (to varying degrees) for assessing the risk of violent and general recidivism in young offenders within the Singaporean context, but might not be better than the YLS/CMI. PMID:27231403

  10. Reconstruction of Twist Torque in Main Parachute Risers

    NASA Technical Reports Server (NTRS)

    Day, Joshua D.

    2015-01-01

    The reconstruction of twist torque in the Main Parachute Risers of the Capsule Parachute Assembly System (CPAS) has been successfully used to validate the conservative twist torque equations in the CPAS Model Memo. Reconstruction of basic, one-degree-of-freedom drop tests was used to create a functional process for the evaluation of more complex, rigid-body simulation. The roll, pitch, and yaw of the body, the fly-out angles of the parachutes, and the relative location of the parachutes to the body are inputs to the torque simulation. The data collected by the Inertial Measurement Unit (IMU) were used to calculate the true torque. The simulation then used photogrammetric and IMU data as inputs to the Model Memo equations, and the results were compared to the true torque to validate those equations. The Model Memo parameters were based on steel risers and will need to be re-evaluated for different materials. Photogrammetric data were found to be more accurate than the inertial data in accounting for the relative rotation between payload and cluster. The Model Memo equations were generally a good match and, where they did not match, were generally conservative.

  11. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
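
    A formal process definition of the kind described, with steps and handlers for named exceptional situations, can be modeled in miniature as below. The notation and step names are invented for illustration; the study's actual definition (467 steps, 207 of them dedicated to 59 exceptional situations) used a dedicated software-quality notation.

```python
# Toy model of a formal process definition: each step may handle a named
# exceptional situation, letting us tally exception-handling effort as the
# study did. All step names and situations are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    handles_exception: Optional[str] = None  # exceptional situation, if any

process = [
    Step("retrieve treatment plan"),
    Step("verify dosage against protocol"),
    Step("recompute dose", handles_exception="dosage mismatch"),
    Step("escalate to pharmacist", handles_exception="dosage mismatch"),
    Step("administer chemotherapy"),
]

handlers = [s for s in process if s.handles_exception]
situations = {s.handles_exception for s in handlers}
print(f"{len(handlers)}/{len(process)} steps handle "
      f"{len(situations)} exceptional situation(s)")
```

Even this toy version shows how the ratio of handler steps to total steps, a key finding of the study, falls out of the definition mechanically.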

  12. Development and psychometric validation of the general practice nurse satisfaction scale.

    PubMed

    Halcomb, Elizabeth J; Caldwell, Belinda; Salamonson, Yenna; Davidson, Patricia M

    2011-09-01

    To develop an instrument to assess consumer satisfaction with nursing in general practice to provide feedback to nurses about consumers' perceptions of their performance. Prospective psychometric instrument validation study. A literature review was conducted to generate items for an instrument to measure consumer satisfaction with nursing in general practice. Face and content validity were evaluated by an expert panel, which had extensive experience in general practice nursing and research. Included in the questionnaire battery was the 27-item General Practice Nurse Satisfaction (GPNS) scale, as well as demographic and health status items. This survey was distributed to 739 consumers following intervention administered by a practice nurse in 16 general practices across metropolitan, rural, and regional Australia. Participants had the option of completing the survey online or receiving a hard copy of the survey form at the time of their visit. These data were collected between June and August 2009. Satisfaction data from 739 consumers were collected following their consultation with a general practice nurse. From the initial 27-item GPNS scale, a 21-item instrument was developed. Two factors, "confidence and credibility" and "interpersonal and communication" were extracted using principal axis factoring and varimax rotation. These two factors explained 71.9% of the variance. Cronbach's α was 0.97. The GPNS scale has demonstrated acceptable psychometric properties and can be used both in research and clinical practice for evaluating consumer satisfaction with general practice nurses. Assessing consumer satisfaction is important for developing and evaluating nursing roles. The GPNS scale is a valid and reliable tool that can be utilized to assess consumer satisfaction with general practice nurses and can assist in performance management and improving the quality of nursing services. © 2011 Sigma Theta Tau International.
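
    The Cronbach's α of 0.97 reported for the GPNS scale is the standard internal-consistency coefficient, α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal computation on synthetic data, sized to mirror the study (21 items, 739 respondents); the data themselves are simulated, not the GPNS responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Highly correlated items: one latent satisfaction score plus small noise.
latent = rng.normal(size=(739, 1))                  # one row per respondent
responses = latent + rng.normal(scale=0.4, size=(739, 21))
print(round(float(cronbach_alpha(responses)), 2))
```

High α values like the one reported indicate the items largely measure a common construct, consistent with the two-factor structure extracted in the study.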

  13. Validation of an instrument to measure inter-organisational linkages in general practice.

    PubMed

    Amoroso, Cheryl; Proudfoot, Judith; Bubner, Tanya; Jayasinghe, Upali W; Holton, Christine; Winstanley, Julie; Beilby, Justin; Harris, Mark F

    2007-12-03

    Linkages between general medical practices and external services are important for high quality chronic disease care. The purpose of this research is to describe the development, evaluation and use of a brief tool that measures the comprehensiveness and quality of a general practice's linkages with external providers for the management of patients with chronic disease. In this study, clinical linkages are defined as the communication, support, and referral arrangements between services for the care and assistance of patients with chronic disease. An interview to measure surgery-level (rather than individual clinician-level) clinical linkages was developed, piloted, reviewed, and evaluated with 97 Australian general practices. Two validated survey instruments were posted to patients, and a survey of locally available services was developed and posted to participating Divisions of General Practice (support organisations). Hypotheses regarding internal validity, association with local services, and patient satisfaction were tested using factor analysis, logistic regression and multilevel regression models. The resulting General Practice Clinical Linkages Interview (GP-CLI) is a nine-item tool with three underlying factors: referral and advice linkages, shared care and care planning linkages, and community access and awareness linkages. Local availability of chronic disease services had no effect on the comprehensiveness of services with which practices link; however, the comprehensiveness of clinical linkages was associated with patient assessment of access, receptionist services, and continuity of care in their general practice. The GP-CLI may be useful to researchers examining comparable health care systems for measuring the comprehensiveness and quality of linkages at a general practice level with related services, possessing both internal and external validity.
The tool can be used with large samples exploring the impact, outcomes, and facilitators of high quality clinical linkages in general practice.

  14. [Design and Validation of a Questionnaire on Vaccination in Students of Health Sciences, Spain].

    PubMed

    Fernández-Prada, María; Ramos-Martín, Pedro; Madroñal-Menéndez, Jaime; Martínez-Ortega, Carmen; González-Cabrera, Joaquín

    2016-11-07

    Immunization rates among medicine and nursing students, and among health professionals in general, during hospital training are low. It is necessary to investigate the causes of these low immunization rates. The objective of this study was to design and validate a questionnaire for exploring the attitudes and behaviours of medicine and nursing students toward immunization against vaccine-preventable diseases. An instrument validation study. The sample included 646 nursing and medicine students at the University of Oviedo, Spain, recruited by non-random sampling. After the content validation process, a 24-item questionnaire was designed to assess attitudes and behaviours/behavioural intentions. Reliability (ordinal alpha), internal validity (exploratory factor analysis with parallel analysis), ANOVA and mediational model tests were performed. Exploratory factor analysis yielded two factors which accounted for 48.8% of the total variance. Ordinal alpha for the total score was 0.92. Differences were observed across academic years in the dimensions of attitudes (F(5,447)=3.728) and knowledge (F(5,448)=65.59), but not in behaviours/behavioural intentions (F(5,461)=1.680). Attitudes were shown to be a moderating variable between knowledge and behaviours/behavioural intentions (indirect effect B=0.15; SD=0.3; 95% CI: 0.09-0.19). We developed a questionnaire with sufficient evidence of reliability and internal validity. Scores on attitudes and knowledge increase with the academic year. Attitudes act as a moderating variable between knowledge and behaviours/behavioural intentions.

  15. Quality of life in postmenopausal women: translation and validation of MSkinQOL questionnaire to measure the effect of a skincare product in USA.

    PubMed

    Segot-Chicq, Evelyne; Fanchon, Chantal

    2013-12-01

    The 28-item Menopausal Skin Quality Of Life (MSkinQOL), a previously validated French questionnaire developed to assess psychological features of menopausal women and to measure the benefits of using cosmetic skincare products, was translated and validated to assess a skincare product in the USA. Construct validity, reliability, reproducibility, and responsiveness were assessed with two groups of 100 nonmenopausal (NM) and 100 postmenopausal (PM) women. The group of PM women applied a specially developed skincare product twice daily for 1 month and filled in the same questionnaire after 1 month, as well as a general self-assessment questionnaire about the efficacy and cosmetic properties of the product. No ceiling or floor effects were identified. Construct and internal validity were assessed using a multitrait analysis: questionnaire items proved closely correlated, and each dimension covered a different aspect of the women's response profiles. The three dimensions showed good reliability and stability. Baseline values for social effects of skin appearance, health status, and self-esteem were significantly different between PM and NM volunteers. Values of these three dimensions were significantly improved after 2 weeks of product application, and further improved after 4 weeks. This study shows that a careful translation and a rigorous process of validation lead to a reliable tool, adapted to each country, to explore and measure quality of life in healthy PM women. © 2013 Wiley Periodicals, Inc.

  16. Survey Instrument Validity Part I: Principles of Survey Instrument Development and Validation in Athletic Training Education Research

    ERIC Educational Resources Information Center

    Burton, Laura J.; Mazerolle, Stephanie M.

    2011-01-01

    Context: Instrument validation is an important facet of survey research methods and athletic trainers must be aware of the important underlying principles. Objective: To discuss the process of survey development and validation, specifically the process of construct validation. Background: Athletic training researchers frequently employ the use of…

  17. Ground-water models: Validate or invalidate

    USGS Publications Warehouse

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  18. Development and validation of the Chinese version of the Diabetes Management Self-efficacy Scale.

    PubMed

    Vivienne Wu, Shu-Fang; Courtney, Mary; Edwards, Helen; McDowell, Jan; Shortridge-Baggett, Lillie M; Chang, Pei-Jen

    2008-04-01

    The purpose of this study was to translate the Diabetes Management Self-Efficacy Scale (DMSES) into Chinese and test the validity and reliability of the instrument within a Taiwanese population. A two-stage design was used for this study. Stage I consisted of a multi-stepped process of forward and backward translation, using focus groups and consensus meetings to translate the 20-item Australia/English version DMSES to Chinese and test content validity. Stage II established the psychometric properties of the Chinese version DMSES (C-DMSES) by examining the criterion, convergent and construct validity, internal consistency and stability testing. The sample for Stage II comprised 230 patients with type 2 diabetes aged 30 years or more from a diabetes outpatient clinic in Taiwan. Three items were modified to better reflect Chinese practice. The C-DMSES obtained a total average CVI score of .86. The convergent validity of the C-DMSES correlated well with the validated measure of the General Self-Efficacy Scale in measuring self-efficacy (r=.55; p<.01). Criterion-related validity showed that the C-DMSES was a significant predictor of the Summary of Diabetes Self-Care Activities scores (Beta=.58; t=10.75, p<.01). Factor analysis supported the C-DMSES being composed of four subscales. Good internal consistency (Cronbach's alpha=.77 to .93) and test-retest reliability (Pearson correlation coefficient r=.86, p<.01) were found. The C-DMSES is a brief and psychometrically sound measure for evaluation of self-efficacy towards management of diabetes by persons with type 2 diabetes in Chinese populations.
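
    The content validity index (CVI) quoted above (.86) is, at the scale level, the average proportion of expert raters judging each item relevant. A hedged sketch with a hypothetical expert panel (the rating data below are made up, not from the C-DMSES study):

```python
import numpy as np

# Hypothetical panel: 5 experts rate 20 items on a 1-4 relevance scale.
rng = np.random.default_rng(1)
ratings = rng.integers(2, 5, size=(5, 20))  # values in {2, 3, 4}

# Item-level CVI: share of experts rating the item 3 ("relevant")
# or 4 ("highly relevant").
item_cvi = (ratings >= 3).mean(axis=0)

# Scale-level CVI: average of the item-level indices.
scale_cvi = item_cvi.mean()
print(round(scale_cvi, 2))
```

Items with a low item-level CVI are candidates for revision or removal, which is how instruments like the C-DMSES are refined during content validation.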

  19. Experimental study on combined cold forging process of backward cup extrusion and piercing

    NASA Astrophysics Data System (ADS)

    Henry, Robinson; Liewald, Mathias

    2018-05-01

    A reduction in the material usage of cold forged components while maintaining the functional requirements can be achieved using hollow or tubular preforms. These preforms are used to meet lightweight requirements and to decrease the production costs of cold formed components. To increase production efficiency in common multi-stage cold forming processes, manufacturing of hollow preforms by combining the backward cup extrusion and piercing processes was established and is discussed in this paper. The corresponding investigations and experimental studies are reported in this article. The objectives of the experimental investigations were the detection of significant process parameters, the determination of process limits for the combined processes, and the validation of the numerical investigations. In addition, the general influence of the combined process on the surface quality and diameter tolerance of hollow preforms is discussed. The final goal is to provide a guideline for industrial application and to transfer the knowledge to industry regarding the part geometries required to reduce the number of forming stages as well as tool cost.

  20. Combustion Chemistry of Fuels: Quantitative Speciation Data Obtained from an Atmospheric High-temperature Flow Reactor with Coupled Molecular-beam Mass Spectrometer.

    PubMed

    Köhler, Markus; Oßwald, Patrick; Krueger, Dominik; Whitside, Ryan

    2018-02-19

    This manuscript describes a high-temperature flow reactor experiment coupled to the powerful molecular beam mass spectrometry (MBMS) technique. This flexible tool offers a detailed observation of chemical gas-phase kinetics in reacting flows under well-controlled conditions. The vast range of operating conditions available in a laminar flow reactor enables access to extraordinary combustion applications that are typically not achievable by flame experiments. These include rich conditions at high temperatures relevant for gasification processes, the peroxy chemistry governing the low-temperature oxidation regime, or investigations of complex technical fuels. The presented setup allows measurements of quantitative speciation data for reaction model validation of combustion, gasification and pyrolysis processes, while enabling a systematic general understanding of the reaction chemistry. Validation of kinetic reaction models is generally performed by investigating combustion processes of pure compounds. The flow reactor has been enhanced to be suitable for technical fuels (e.g. multi-component mixtures like Jet A-1) to allow for phenomenological analysis of occurring combustion intermediates like soot precursors or pollutants. The controlled and comparable boundary conditions provided by the experimental design allow for predictions of pollutant formation tendencies. The cold reactants are fed premixed into the reactor, highly diluted (around 99 vol% in Ar) in order to suppress self-sustaining combustion reactions. The laminar flowing reactant mixture passes through a known temperature field, while the gas composition is determined at the reactor's exhaust as a function of the oven temperature. The flow reactor is operated at atmospheric pressure with temperatures up to 1,800 K. The measurements themselves are performed by decreasing the temperature monotonically at a rate of -200 K/h.
With the sensitive MBMS technique, detailed speciation data is acquired and quantified for almost all chemical species in the reactive process, including radical species.

  1. Health risk assessment and the practice of industrial hygiene.

    PubMed

    Paustenbach, D J

    1990-07-01

    It has been claimed that there may be as many as 2000 airborne chemicals to which persons could be exposed in the workplace and in the community. Of these, occupational exposure limits have been set for approximately 700 chemicals, and only about 30 chemicals have limits for the ambient air. It is likely that some type of health risk assessment methodology will be used to establish limits for the remainder. Although these methods have been used for over 10 yr to set environmental limits, each step of the process (hazard identification, dose-response assessment, exposure assessment, and risk characterization) contains a number of traps into which scientists and risk managers can fall. For example, regulatory approaches to the hazard identification step have allowed little discrimination between the various animal carcinogens, even though these chemicals can vary greatly in their potency and mechanisms of action. In general, epidemiology data have been given little weight compared to the results of rodent bioassays. The dose-response extrapolation process, as generally practiced, often does not present the range of equally plausible values. Procedures which acknowledge and quantitatively account for some or all of the different classes of chemical carcinogens have not been widely adopted. For example, physiologically based pharmacokinetic (PB-PK) and biologically based models need to become a part of future risk assessments. The exposure evaluation portion of risk assessments can now be significantly more valid because of better dispersion models, validated exposure parameters, and the use of computers to account for complex environmental factors. Using these procedures, industrial hygienists are now able to quantitatively estimate the risks caused not only by the inhalation of chemicals but also those caused by dermal contact and incidental ingestion. 
The appropriate use of risk assessment methods should allow scientists and risk managers to set scientifically valid environmental and occupational standards for air contaminants.

  2. [Management of medication errors in general medical practice: Study in a pluriprofessional health care center].

    PubMed

    Pourrain, Laure; Serin, Michel; Dautriche, Anne; Jacquetin, Fréderic; Jarny, Christophe; Ballenecker, Isabelle; Bahous, Mickaël; Sgro, Catherine

    2018-06-07

    Medication errors are the most frequent medical care adverse events in France. The management process used in hospitals remains poorly applied in primary ambulatory care. The main objective of our study was to assess medication error management in general ambulatory practice. The secondary objectives were the characterization of the errors and the analysis of their root causes in order to implement corrective measures. The study was performed in a pluriprofessional health care house, applying the stages and tools validated by the French high health authority, which we had previously adapted to ambulatory medical care. During the 3-month study, 4712 medical consultations were performed and we collected 64 medication errors. Most of the affected patients were at the extreme ages of life (9.4% before 9 years and 64% after 70 years). Medication errors occurred at home in 39.1% of cases, at the pluriprofessional health care house (25.0%) or at the pharmacy (17.2%). They led to serious clinical consequences (classified as major, critical or catastrophic) in 17.2% of cases. Drug-induced adverse effects occurred in 5 patients, 3 of whom needed hospitalization (1 patient recovered, 1 displayed sequelae and 1 died). In more than half of the cases, the errors occurred at the prescribing stage. The most frequent types of error were the use of a wrong drug, different from that indicated for the patient (37.5%), and poor treatment adherence (18.75%). The systemic causes reported were a care process dysfunction (in coordination or procedure), the context of the health care action (patient home, unplanned act, professional overwork), and human factors such as the patient's and professional's condition. The professional team's adherence to the study was excellent.
Our study demonstrates, for the first time in France, that medication error management in ambulatory general medical care can be implemented in a pluriprofessional health care house under two conditions: the presence of a trained team coordinator, and the use of validated, adapted and simple processes and tools. This study also shows that medication errors in general practice are specific to the organization of the care process. We identified vulnerable points, such as transfers and communication between home and care facilities (and vice versa), medical coordination, and the involvement of the patient himself in his care. Copyright © 2018 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.

  3. Information Quality in Regulatory Decision Making: Peer Review versus Good Laboratory Practice.

    PubMed

    McCarty, Lynn S; Borgert, Christopher J; Mihaich, Ellen M

    2012-07-01

    There is an ongoing discussion on the provenance of toxicity testing data regarding how best to ensure its validity and credibility. A central argument is whether journal peer-review procedures are superior to Good Laboratory Practice (GLP) standards employed for compliance with regulatory mandates. We sought to evaluate the rationale for regulatory decision making based on peer-review procedures versus GLP standards. We examined pertinent published literature regarding how scientific data quality and validity are evaluated for peer review, GLP compliance, and development of regulations. Some contend that peer review is a coherent, consistent evaluative procedure providing quality control for experimental data generation, analysis, and reporting sufficient to reliably establish relative merit, whereas GLP is seen as merely a tracking process designed to thwart investigator corruption. This view is not supported by published analyses pointing to subjectivity and variability in peer-review processes. Although GLP is not designed to establish relative merit, it is an internationally accepted quality assurance, quality control method for documenting experimental conduct and data. Neither process is completely sufficient for establishing relative scientific soundness. However, changes occurring both in peer-review processes and in regulatory guidance resulting in clearer, more transparent communication of scientific information point to an emerging convergence in ensuring information quality. The solution to determining relative merit lies in developing a well-documented, generally accepted weight-of-evidence scheme to evaluate both peer-reviewed and GLP information used in regulatory decision making where both merit and specific relevance inform the process.

  4. Validation of a pulsed electric field process to pasteurize strawberry puree

    USDA-ARS?s Scientific Manuscript database

    An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...

  5. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. 
Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
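
    To make the capability component of the methodology concrete: a capability index such as Cpk compares the process spread to the specification window, and values comfortably above about 1.33 are commonly taken to support batch release. A minimal sketch with hypothetical specification limits and assay data (not taken from the article):

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: distance from the sample mean to the
    nearer specification limit, in units of three standard deviations."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sd)

# Hypothetical assay results (% of label claim) from one qualification
# batch, against specification limits of 95.0-105.0 % label claim.
assay = [99.8, 100.2, 99.5, 100.4, 100.1, 99.9, 100.3, 99.7]
print(round(cpk(assay, 95.0, 105.0), 2))
```

In the article's scheme the required confidence and coverage, and hence the number of samples and runs behind a calculation like this, would be set by the risk level coming out of the PFMECA.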

  6. Orchestration of Molecular Information through Higher Order Chemical Recognition

    NASA Astrophysics Data System (ADS)

    Frezza, Brian M.

    Broadly defined, higher-order chemical recognition is the process whereby discrete chemical building blocks capable of specifically binding to cognate moieties are covalently linked into oligomeric chains. These chains, or sequences, are then able to recognize and bind to their cognate sequences with a high degree of cooperativity. Principally speaking, DNA and RNA are the most readily obtained examples of this chemical phenomenon, and function via Watson-Crick cognate pairing: guanine pairs with cytosine and adenine with thymine (DNA) or uracil (RNA), in an anti-parallel manner. While the theoretical principles, techniques, and equations derived herein apply generally to any higher-order chemical recognition system, in practice we utilize DNA oligomers as a model building material to experimentally investigate and validate our hypotheses. Historically, general-purpose information processing has been a task limited to semiconductor electronics. Molecular computing, on the other hand, has been limited to ad hoc approaches designed to solve highly specific and unique computation problems, often involving components or techniques that cannot be applied generally in a manner suitable for precise and predictable engineering. Herein, we provide a fundamental framework for harnessing higher-order recognition in a modular and programmable fashion to synthesize molecular information processing networks of arbitrary construction and complexity. This document provides a solid foundation for routinely embedding computational capability into chemical and biological systems where semiconductor electronics are unsuitable for practical application.

  7. Turning Back the Title VII Clock: The Resegregation of the American Work Force through Validity Generalization.

    ERIC Educational Resources Information Center

    Goldstein, Barry L.; Patterson, Patrick O.

    1988-01-01

    Refers to Title VII of the Civil Rights Act of 1964 and the Supreme Court's disparate impact interpretation of Title VII in Griggs versus Duke Power Company. Contends that attacks on the Griggs decision are legally unsound and that claims made by advocates of validity generalization are scientifically unsupported. (Author/NB)

  8. A novel process of viral vector barcoding and library preparation enables high-diversity library generation and recombination-free paired-end sequencing

    PubMed Central

    Davidsson, Marcus; Diaz-Fernandez, Paula; Schwich, Oliver D.; Torroba, Marcos; Wang, Gang; Björklund, Tomas

    2016-01-01

    Detailed characterization and mapping of oligonucleotide function in vivo is generally a very time-consuming effort that only allows for hypothesis-driven subsampling of the full sequence to be analysed. Recent advances in deep sequencing, together with highly efficient parallel oligonucleotide synthesis and cloning techniques, have, however, opened up entirely new ways to map genetic function in vivo. Here we present a novel, optimized protocol for the generation of universally applicable, barcode-labelled plasmid libraries. The libraries are designed to enable the production of viral vector preparations assessing coding or non-coding RNA function in vivo. When generating high-diversity libraries, it is a challenge to achieve efficient cloning, unambiguous barcoding and detailed characterization using low-cost sequencing technologies. With the presented protocol, a diversity of above 3 million uniquely barcoded adeno-associated viral (AAV) plasmids can be achieved in a single reaction, through a process achievable in any molecular biology laboratory. This approach opens up a multitude of in vivo assessments, from the evaluation of enhancer and promoter regions to the optimization of genome editing. The generated plasmid libraries are also useful for validation of sequencing clustering algorithms, and here we validate the newly presented message-passing clustering process named Starcode. PMID:27874090
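
    Whether millions of plasmids can be "uniquely" barcoded is governed by the barcode length: a random N-mer over {A, C, G, T} has 4^N possible sequences, and the expected number of colliding pairs among m random draws follows the birthday approximation m(m-1)/(2·4^N). A rough sketch (the 20-nt barcode length here is an assumption for illustration, not a figure from the paper):

```python
def expected_collisions(m: int, barcode_len: int) -> float:
    """Birthday-problem approximation: expected number of colliding pairs
    among m random barcodes drawn uniformly from 4**barcode_len codes."""
    space = 4 ** barcode_len
    return m * (m - 1) / (2 * space)

# Hypothetical: 3 million plasmids labelled with a 20-nt random barcode.
pairs = expected_collisions(3_000_000, 20)
print(f"{pairs:.1f} expected colliding barcode pairs")
```

A handful of expected collisions out of millions of barcodes is typically acceptable, since colliding barcodes can be detected and discarded during the sequencing-based characterization step.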

  9. Percent Grammatical Responses as a General Outcome Measure: Initial Validity

    ERIC Educational Resources Information Center

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2018-01-01

    Purpose: This report investigated the validity of using percent grammatical responses (PGR) as a measure for assessing grammaticality. To establish construct validity, we computed the correlation of PGR with another measure of grammar skills and with an unrelated skill area. To establish concurrent validity for PGR, we computed the correlation of…

  10. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  11. 45 CFR 153.630 - Data validation requirements when HHS operates risk adjustment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Data validation requirements when HHS operates... Program § 153.630 Data validation requirements when HHS operates risk adjustment. (a) General requirement... performed on its risk adjustment data as described in this section. (b) Initial validation audit. (1) An...

  12. 45 CFR 153.630 - Data validation requirements when HHS operates risk adjustment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Data validation requirements when HHS operates... Program § 153.630 Data validation requirements when HHS operates risk adjustment. (a) General requirement... performed on its risk adjustment data as described in this section. (b) Initial validation audit. (1) An...

  13. 22 CFR 51.4 - Validity of passports.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Validity of passports. 51.4 Section 51.4 Foreign Relations DEPARTMENT OF STATE NATIONALITY AND PASSPORTS PASSPORTS General § 51.4 Validity of passports. (a) Signature of bearer. A passport book is valid only when signed by the bearer in the space...

  14. 22 CFR 51.4 - Validity of passports.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Validity of passports. 51.4 Section 51.4 Foreign Relations DEPARTMENT OF STATE NATIONALITY AND PASSPORTS PASSPORTS General § 51.4 Validity of passports. (a) Signature of bearer. A passport book is valid only when signed by the bearer in the space...

  15. 22 CFR 51.4 - Validity of passports.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Validity of passports. 51.4 Section 51.4 Foreign Relations DEPARTMENT OF STATE NATIONALITY AND PASSPORTS PASSPORTS General § 51.4 Validity of passports. (a) Signature of bearer. A passport book is valid only when signed by the bearer in the space...

  16. 22 CFR 51.4 - Validity of passports.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Validity of passports. 51.4 Section 51.4 Foreign Relations DEPARTMENT OF STATE NATIONALITY AND PASSPORTS PASSPORTS General § 51.4 Validity of passports. (a) Signature of bearer. A passport book is valid only when signed by the bearer in the space...

  17. 22 CFR 51.4 - Validity of passports.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Validity of passports. 51.4 Section 51.4 Foreign Relations DEPARTMENT OF STATE NATIONALITY AND PASSPORTS PASSPORTS General § 51.4 Validity of passports. (a) Signature of bearer. A passport book is valid only when signed by the bearer in the space...

  18. Validity and Reliability in Social Science Research

    ERIC Educational Resources Information Center

    Drost, Ellen A.

    2011-01-01

    In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…

  19. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 4 2011-07-01 2011-07-01 false Use of other validity studies. 1607.7 Section 1607.7 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection...

  20. 20 CFR 404.725 - Evidence of a valid ceremonial marriage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Evidence of a valid ceremonial marriage. 404... DISABILITY INSURANCE (1950- ) Evidence Evidence of Age, Marriage, and Death § 404.725 Evidence of a valid ceremonial marriage. (a) General. A valid ceremonial marriage is one that follows procedures set by law in...

  1. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

In the current manuscript, the process of spacecraft docking, one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. A spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact inevitably occurs during the docking process and strongly affects the motion characteristics of multibody systems, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite to the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms may be modeled by constrained multibody dynamics, this paper presents a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, a new alternative validation approach, based on the constrained multibody problem, is used to evaluate the accuracy of the presented model; this verification approach can also be applied to solve constrained multibody problems indirectly with minimal effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the impact-induced penetration depth are the issues examined in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software is applied to build an impact docking model that validates the computed results, and then to investigate the trajectories of both satellites during a successful capture process.
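The abstract does not give the authors' exact contact law; a common choice for spring-damper impact models of this kind is the Lankarani-Nikravesh variant of the Hertz contact force, sketched below. All parameter values (stiffness k, Hertz exponent n, restitution e, initial impact velocity v0) are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Lankarani-Nikravesh continuous contact force, a common
# spring-damper impact law. The paper's exact formulation is not given in
# the abstract; parameter values here are illustrative assumptions.
def contact_force(delta, delta_dot, k=1.0e6, n=1.5, e=0.8, v0=1.0):
    """Force for penetration depth delta (m) and penetration rate delta_dot (m/s).

    F = k * delta**n * (1 + 3*(1 - e**2)/4 * delta_dot / v0),
    with k the contact stiffness, n the Hertz exponent, e the coefficient
    of restitution, and v0 the initial impact velocity.
    """
    if delta <= 0.0:
        return 0.0  # probe and cone are not in contact
    hysteresis = 1.0 + 3.0 * (1.0 - e**2) / 4.0 * (delta_dot / v0)
    return k * delta**n * max(hysteresis, 0.0)
```

During compression (delta_dot > 0) the damping term adds to the elastic force; during restitution it subtracts, producing the hysteresis loop that dissipates impact energy.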

  2. Creating an open access cal/val repository via the LACO-Wiki online validation platform

    NASA Astrophysics Data System (ADS)

    Perger, Christoph; See, Linda; Dresel, Christopher; Weichselbaum, Juergen; Fritz, Steffen

    2017-04-01

    There is a major gap in the amount of in-situ data available on land cover and land use, either as field-based ground truth information or from image interpretation, both of which are used for the calibration and validation (cal/val) of products derived from Earth Observation. Although map producers generally publish their confusion matrices and the accuracy measures associated with their land cover and land use products, the cal/val data (also referred to as reference data) are rarely shared in an open manner. Although there have been efforts in compiling existing reference datasets and making them openly available, e.g. through the GOFC/GOLD (Global Observation for Forest Cover and Land Dynamics) portal or the European Commission's Copernicus Reference Data Access (CORDA), this represents a tiny fraction of the reference data collected and stored locally around the world. Moreover, the validation of land cover and land use maps is usually undertaken with tools and procedures specific to a particular institute or organization due to the lack of standardized validation procedures; thus, there are currently no incentives to share the reference data more broadly with the land cover and land use community. In an effort to provide a set of standardized, online validation tools and to build an open repository of cal/val data, the LACO-Wiki online validation portal has been developed, which will be presented in this paper. The portal contains transparent, documented and reproducible validation procedures that can be applied to local as well as global products. LACO-Wiki was developed through a user consultation process that resulted in a 4-step wizard-based workflow, which supports the user from uploading the map product for validation, through to the sampling process and the validation of these samples, until the results are processed and a final report is created that includes a range of commonly reported accuracy measures. 
One of the design goals of LACO-Wiki has been to simplify the workflows as much as possible so that the tool can be used both professionally and in an educational or non-expert context. By using the tool for validation, the user agrees to share their validation samples and therefore contribute to an open access cal/val repository. Interest in the use of LACO-Wiki for validation of national land cover or related products has already been expressed, e.g. by national stakeholders under the umbrella of the European Environment Agency (EEA), and for global products by GOFC/GOLD and the Group on Earth Observation (GEO). Thus, LACO-Wiki has the potential to become the focal point around which an international land cover validation community could be built, and could significantly advance the state-of-the-art in land cover cal/val, particularly given recent developments in opening up of the Landsat archive and the open availability of Sentinel imagery. The platform will also offer open access to crowdsourced in-situ data, for example, from the recently developed LACO-Wiki mobile smartphone app, which can be used to collect additional validation information in the field, as well as to validation data collected via its partner platform, Geo-Wiki, where an already established community of citizen scientists collect land cover and land use data for different research applications.
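The accuracy measures such a validation workflow commonly reports can all be derived from a confusion matrix of mapped class versus reference class. A minimal sketch follows; the class names and sample labels are invented for illustration and this is not LACO-Wiki's actual implementation.

```python
from collections import Counter

def confusion_matrix(map_labels, ref_labels, classes):
    """Cross-tabulate mapped class (rows) against reference class (columns)."""
    counts = Counter(zip(map_labels, ref_labels))
    return [[counts[(m, r)] for r in classes] for m in classes]

def accuracy_measures(cm):
    """Overall accuracy plus per-class user's and producer's accuracy."""
    total = sum(sum(row) for row in cm)
    diag = [cm[i][i] for i in range(len(cm))]
    overall = sum(diag) / total
    users = [diag[i] / sum(cm[i]) for i in range(len(cm))]                 # commission
    producers = [diag[i] / sum(r[i] for r in cm) for i in range(len(cm))]  # omission
    return overall, users, producers

# Invented example: four validation samples over three classes.
cm = confusion_matrix(["forest", "forest", "water", "urban"],
                      ["forest", "urban", "water", "urban"],
                      classes=["forest", "water", "urban"])
```

Stratified sampling designs additionally weight these counts by class area, but the unweighted form above is the core of most reported accuracy tables.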

  3. More About Robustness of Coherence

    NASA Astrophysics Data System (ADS)

    Li, Pi-Yu; Liu, Feng; Xu, Yan-Qin; La, Dong-Sheng

    2018-07-01

Quantum coherence is an important physical resource in quantum computation and quantum information processing. In this paper, the distribution of the robustness of coherence in multipartite quantum systems is considered. It is shown that the additivity of the robustness of coherence is not always valid for general quantum states, but the robustness of coherence is decreasing under partial trace for any bipartite quantum system. The ordering of states under the coherence measures RoC, the l1 norm of coherence C_{l1}, and the relative entropy of coherence C_r is also discussed.
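For reference, both measures named above have simple closed forms in a fixed incoherent basis: C_{l1}(rho) is the sum of absolute values of the off-diagonal elements, and C_r(rho) = S(diag(rho)) - S(rho), with S the von Neumann entropy. A numerical sketch (the maximally coherent qubit state is used only as a check):

```python
import numpy as np

def l1_coherence(rho):
    """C_l1 = sum of absolute values of the off-diagonal elements of rho."""
    return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

def relative_entropy_coherence(rho):
    """C_r = S(diag(rho)) - S(rho), with S the von Neumann entropy (base 2)."""
    def entropy(mat):
        evals = np.linalg.eigvalsh(mat)
        evals = evals[evals > 1e-12]  # drop zero eigenvalues (0*log 0 = 0)
        return float(-(evals * np.log2(evals)).sum())
    return entropy(np.diag(np.diag(rho))) - entropy(rho)

# Maximally coherent qubit |+><+|: both measures equal 1.
plus = np.full((2, 2), 0.5)
```

Any diagonal (incoherent) state gives zero for both measures, as required of a coherence monotone.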

  4. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord

    NASA Astrophysics Data System (ADS)

    Piani, Marco

    2016-08-01

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.

  5. Field scale test of multi-dimensional flow and morphodynamic simulations used for restoration design analysis

    USGS Publications Warehouse

    McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
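A common way to score such simulations against repeat surveys is to difference the post-flood and as-built DEMs and compare predicted with observed change cell by cell. The sketch below reports mean bias and RMSE over the valid grid cells; the metric choice and array layout are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def elevation_change_skill(dz_model, dz_obs):
    """Compare simulated vs. surveyed bed-elevation change on the same grid.

    Returns (mean bias, RMSE) over cells that are valid (non-NaN) in both.
    """
    dz_model = np.asarray(dz_model, dtype=float)
    dz_obs = np.asarray(dz_obs, dtype=float)
    valid = ~np.isnan(dz_model) & ~np.isnan(dz_obs)
    err = dz_model[valid] - dz_obs[valid]
    return float(err.mean()), float(np.sqrt((err ** 2).mean()))
```

A positive bias indicates the model deposits (or under-scours) relative to the survey; RMSE summarizes the cell-by-cell magnitude of disagreement, e.g. in the maximum-scour regions noted above.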

  6. [Validation of the Questionnaire of Emotional Maladjustment and Adaptive Resources in Infertility (DERA)].

    PubMed

    Moreno-Rosset, Carmen; Antequera Jurado, Rosario; Jenaro Río, Cristina

    2009-02-01

Validation of the Questionnaire of Emotional Maladjustment and Adaptive Resources in Infertility (DERA). Given the absence of measures to help psychologists working with infertile couples, this paper presents the process of developing a standardized measure to assess emotional maladjustment and adaptive resources in this population. A cross-sectional design was utilized to gather data from the assisted reproduction units of two public hospitals. Preliminary analyses were performed with a sample of 85 infertile patients. Psychometric properties of the measure were tested with a second sample of 490 infertile patients. Concerning reliability, alpha indexes were adequate both for the measure and its factors. Concerning validity, second-order factor analysis yielded a four-factor solution that conjointly explains 56% of the total variance. Additional analyses were performed with a third sample of 50 participants from the general population matched with a sample of 50 infertile participants. In sum, this measure seems to be a useful psychological assessment tool for determining emotional adjustment and individual and interpersonal resources for coping with infertility diagnosis and treatment.

  7. Development and preliminary validation of an index for indicating the risks of the design of working hours to health and wellbeing.

    PubMed

    Schomann, Carsten; Giebel, Ole; Nachreiner, Friedhelm

    2006-01-01

BASS 4, a computer program for the design and evaluation of working hours, is an example of an ergonomics-based software tool that can be used by safety practitioners at the shop floor with regard to legal, ergonomic, and economic criteria. Based on experiences with this computer program, a less sophisticated Working-Hours-Risk Index for assessing the quality of work schedules (including flexible work hours) and indicating risks to health and wellbeing has been developed, to provide a quick and easily applicable tool for legally required risk assessments. The results of a validation study show that this risk index seems to be a promising indicator for predicting risks to health and wellbeing. The purpose of the Risk Index is to simplify the evaluation process at the shop floor and provide more general information about the quality of a work schedule that can be used for triggering preventive interventions. Such a risk index complies with practitioners' expectations and requests for easy, useful, and valid instruments.

  8. Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of noncredible responding: The third generation of embedded performance validity indicators.

    PubMed

    Erdodi, Laszlo A; Abeare, Christopher A; Lichtenstein, Jonathan D; Tyson, Bradley T; Kucharski, Brittany; Zuccato, Brandon G; Roth, Robert M

    2017-02-01

    Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
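The cutoffs reported above compose naturally into a count of failed indicators. The sketch below transcribes them directly from the abstract; the function and argument names are invented, and the direction of the Coding minus Symbol Search difference is an assumption.

```python
def count_failed_evis(psi, coding_ss, symbol_search_ss,
                      coding_raw, symbol_search_raw):
    """Count failed WAIS-IV processing-speed EVIs per the reported cutoffs."""
    ratio = coding_raw / symbol_search_raw
    flags = [
        psi <= 79,                              # Processing Speed Index
        coding_ss <= 5,                         # Coding scaled score
        symbol_search_ss <= 6,                  # Symbol Search scaled score
        coding_ss - symbol_search_ss >= 5,      # CD - SS difference (direction assumed)
        ratio <= 1.41 or ratio >= 3.57,         # two-tailed CD/SS raw ratio
    ]
    return sum(flags)

def likely_noncredible(**scores):
    # Per the abstract, failing >= 3 EVIs stabilized specificity near .9.
    return count_failed_evis(**scores) >= 3
```

As the abstract stresses, no single flag should be used in isolation; the multi-EVI aggregation is what controls the false-positive rate.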

  9. CBT Specific Process in Exposure-Based Treatments: Initial Examination in a Pediatric OCD Sample

    PubMed Central

    Benito, Kristen Grabill; Conelea, Christine; Garcia, Abbe M.; Freeman, Jennifer B.

    2012-01-01

Cognitive-behavioral theory and empirical support suggest that optimal activation of fear is a critical component of successful exposure treatment. Using this theory, we developed a coding methodology for measuring CBT-specific process during exposure. We piloted this methodology in a sample of young children (N = 18) who previously received CBT as part of a randomized controlled trial. Results supported the preliminary reliability and predictive validity of the coding variables with 12-week and 3-month treatment outcome data, generally showing results consistent with CBT theory. However, given our limited and restricted sample, additional testing is warranted. Measurement of CBT-specific process using this methodology may have implications for understanding mechanisms of change in exposure-based treatments and for improving dissemination efforts through identification of therapist behaviors associated with improved outcome. PMID:22523609

  10. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

The objective was to provide the theory and a system design for a new type of photon counting processor for low-level dual-scatter laser velocimeter (LV) signals, capable of both first-order measurements of mean flow and turbulence intensity and second-order time statistics: cross correlation, auto correlation, and related spectra. A general Poisson process model for low-level LV signals and noise, valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise, was used. Computer simulation algorithms and higher-order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate-and-subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.
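For photon counts binned into equal time intervals, the second-order time statistics the processor targets reduce to discrete correlation estimates. A minimal sketch (synthetic counts, not the Poisson LV signal model derived in the report):

```python
def autocorrelation(counts, max_lag):
    """Unnormalized autocorrelation estimate G(tau) = <n(t) * n(t + tau)>
    of photon counts binned into equal time intervals.

    Each lag is averaged over the (n - lag) available bin pairs.
    """
    n = len(counts)
    return [sum(counts[t] * counts[t + lag] for t in range(n - lag)) / (n - lag)
            for lag in range(max_lag + 1)]
```

A cross correlation between two detector channels follows the same pattern with two count sequences; spectra are then obtained by transforming the correlation estimate.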

  11. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    PubMed Central

    2010-01-01

Background: Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion: We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than by the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary: In this era of increasing adoption of evidence-based healthcare models, the rational-choice, utility-maximizing assumptions in EBDM and EBPM must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors.
In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved. PMID:20504357

  12. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    PubMed

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than by the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational-choice, utility-maximizing assumptions in EBDM and EBPM must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors.
In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved.

  13. Client reflections on confirmation and disconfirmation of expectations in cognitive behavioral therapy for generalized anxiety disorder with and without motivational interviewing.

    PubMed

    Button, Melissa L; Norouzian, Nikoo; Westra, Henny A; Constantino, Michael J; Antony, Martin M

    2018-01-22

    Addressing methodological shortcomings of prior work on process expectations, this study examined client process expectations both prospectively and retrospectively following treatment. Differences between clients receiving cognitive behavioral therapy (CBT) versus motivational interviewing integrated with CBT (MI-CBT) were also examined. Grounded theory analysis was used to study narratives of 10 participants (N = 5 CBT, 5 MI-CBT) who completed treatment for severe generalized anxiety disorder as part of a larger randomized controlled trial. Clients in both groups reported and elaborated expectancy disconfirmations more than expectancy confirmations. Compared to CBT clients, MI-CBT clients reported experiencing greater agency in the treatment process than expected (e.g., that they did most of the work) and that therapy provided a corrective experience. Despite nearly all clients achieving recovery status, CBT clients described therapy as not working in some ways (i.e., tasks did not fit, lack of improvement) and that they overcame initial skepticism regarding treatment. Largely converging with MI theory, findings highlight the role of key therapist behaviors (e.g., encouraging client autonomy, validating) in facilitating client experiences of the self as an agentic individual who is actively engaged in the therapy process and capable of effecting change.

  14. Realisation of the guidelines for faculty-internal exams at the Department of General Medicine at the University of Munich: Pushing medical exams one step ahead with IMSm.

    PubMed

    Boeder, Niklas; Holzer, Matthias; Schelling, Jörg

    2012-01-01

Graded exams are prerequisites for admission to the medical state examination. Accordingly, the exams must be of good quality in order to allow benchmarking within the faculty and between different universities. Criteria for good quality, namely objectivity, validity, and reliability, need to be considered. The guidelines for the processing of exams published by the GMA are intended to help maintain those criteria. In 2008, the Department of General Medicine at the University of Munich fulfilled only 14 of 18 items. A review process, appropriate training of the staff, and the introduction of the IMSm software were the main changes that helped to improve the 'GMA-score' to 30 fulfilled items. We see the introduction of the IMSm system as our biggest challenge ahead. IMSm helps to streamline the necessary workflows and improves their quality (e.g., by the detection of cueing and by item analysis). Overall, we evaluate the steps taken to improve the exam process as very positive. We plan to engage co-workers outside the department to assist in the various review processes in the future. Furthermore, we think it might be of value to get into contact with other departments and faculties to benefit from each other's question pools.
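The item analysis mentioned above typically comes down to two classical indices per question: difficulty (proportion correct) and discrimination (correlation of the item with the rest of the test). Since IMSm's internals are not described, the sketch below is a generic implementation with invented names.

```python
def pearson(x, y):
    """Pearson correlation; NaN when either input has zero variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else float("nan")

def item_analysis(responses):
    """responses: one list of 0/1 item scores per examinee.

    Returns (difficulty, discrimination) per item, where discrimination is
    the corrected item-rest correlation (item vs. total score minus item).
    """
    n_items = len(responses[0])
    stats = []
    for i in range(n_items):
        item = [row[i] for row in responses]
        rest = [sum(row) - row[i] for row in responses]
        stats.append((sum(item) / len(item), pearson(item, rest)))
    return stats
```

Items with near-zero or negative discrimination are the usual candidates for the kind of review process the department describes.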

  15. Effective virus inactivation and removal by steps of Biotest Pharmaceuticals IGIV production process

    PubMed Central

    Dichtelmüller, Herbert O.; Flechsig, Eckhard; Sananes, Frank; Kretschmar, Michael; Dougherty, Christopher J.

    2012-01-01

The virus validation of three steps of the Biotest Pharmaceuticals IGIV production process is described here. The steps validated are precipitation and removal of fraction III of the cold ethanol fractionation process, solvent/detergent treatment, and 35 nm virus filtration. Virus validation was performed under combined worst-case conditions. These validated steps achieve sufficient virus inactivation/removal, resulting in a virus-safe product. PMID:24371563

  16. Validation and Comprehension: An Integrated Overview

    ERIC Educational Resources Information Center

    Kendeou, Panayiota

    2014-01-01

    In this article, I review and discuss the work presented in this special issue while focusing on a number of issues that warrant further investigation in validation research. These issues pertain to the nature of the validation processes, the processes and mechanisms that support validation during comprehension, the factors that influence…

  17. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    PubMed

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents, "Act in case of Depression" (AiD), was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs). Before the effect analyses, AiD process data were evaluated on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed), which can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on residents' personal files, interviews with nursing home professionals, and a research database were analyzed according to the following process evaluation components: sampling quality and intervention quality. The setting was nursing homes. The pattern of residents' informed consent rates differed between dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and was satisfied with the program content, individual AiD components may differ in feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, our first-order process data evaluation confirmed the internal and external validity of the AiD trial, and this evaluation enabled further statistical fine-tuning. The importance of evaluating first-order process data before executing statistical effect analyses is thus underlined. Copyright © 2012 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  18. Validity and reliability of temperature measurement by heat flow thermistors, flexible thermocouple probes and thermistors in a stirred water bath.

    PubMed

    Versey, Nathan G; Gore, Christopher J; Halson, Shona L; Plowman, Jamie S; Dawson, Brian T

    2011-09-01

We determined the validity and reliability of heat flow thermistors, flexible thermocouple probes and general purpose thermistors compared with a calibrated reference thermometer in a stirred water bath. Validity (bias) was defined as the difference between the observed and criterion values, and reliability as the repeatability (standard deviation or typical error) of measurement. Data were logged every 5 s for 10 min at water temperatures of 14, 26 and 38 °C for ten heat flow thermistors and 24 general purpose thermistors, and at 35, 38 and 41 °C for eight flexible thermocouple probes. Statistical analyses were conducted using spreadsheets for validity and reliability, where an acceptable bias was set at ±0.1 °C. None of the heat flow thermistors, 17% of the flexible thermocouple probes and 71% of the general purpose thermistors met the validity criterion for temperature. The inter-probe reliabilities were 0.03 °C for heat flow thermistors, 0.04 °C for flexible thermocouple probes and 0.09 °C for general purpose thermistors. The within-trial intra-probe reliability of all three temperature probes was 0.01 °C. The results suggest that these temperature sensors should be calibrated individually before use at relevant temperatures and the raw data corrected using individual linear regression equations.
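The recommended correction, an individual linear regression per probe against the reference thermometer, is a short least-squares fit. The sketch below uses invented probe readings at the three bath temperatures; only the procedure, not the data, comes from the abstract.

```python
def fit_calibration(measured, reference):
    """Least-squares slope and intercept mapping probe readings to reference."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    slope = sxy / sxx
    return slope, my - slope * mx

def correct(reading, slope, intercept):
    """Apply the probe's individual calibration to a raw reading."""
    return slope * reading + intercept

# Invented probe readings at the 14, 26 and 38 degC baths (probe reads 1 degC high).
slope, intercept = fit_calibration([15.0, 27.0, 39.0], [14.0, 26.0, 38.0])
```

Each probe gets its own (slope, intercept) pair, fitted over the temperature range at which it will actually be used.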

  19. Development and validation of a public attitudes toward epilepsy (PATE) scale.

    PubMed

    Lim, Kheng-Seang; Wu, Cathie; Choo, Wan-Yuen; Tan, Chong-Tin

    2012-06-01

A quantitative scale of public attitudes toward epilepsy is essential to determine the magnitude of social stigma against epilepsy. This study aims to develop and validate a cross-culturally applicable scale of public attitudes toward epilepsy. A set of questions was selected from questionnaires identified in a literature review, following which a panel review determined the final version, consisting of 18 items. A 1-5 Likert scale was used for scoring. Additional questions, related to perception of the productivity of people with epilepsy and to a modified epilepsy stigma scale, were added as part of construct validation. One hundred and thirty heterogeneous respondents were recruited, spanning various age groups, ethnicities and occupation status levels. After item and factor analyses, the final version consisted of 14 items. Psychometric properties of the scale were first determined using factor analysis, which revealed a general and a personal domain, with good internal consistency (Cronbach's coefficients 0.868 and 0.633, respectively). Construct validity was demonstrated. The mean score for the personal domain was higher than that for the general domain (2.72±0.56 and 2.09±0.59, respectively). The mean scores of those with tertiary education were significantly lower for the general domain, but not for the personal domain. Age was positively correlated with the mean scores in the personal domain, but not in the general domain. This scale is a reliable and valid scale to assess public attitudes toward epilepsy, in both the general and personal domains. Copyright © 2012 Elsevier Inc. All rights reserved.
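The internal-consistency figures quoted (Cronbach's coefficients of 0.868 and 0.633) follow from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch over Likert responses; the data matrix below is invented for illustration.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha. scores: one list of item ratings (e.g. 1-5 Likert)
    per respondent; all respondents answer the same k items."""
    k = len(scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance([row[i] for row in scores]) for i in range(k))
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)
```

Alpha approaches 1 when items co-vary strongly (as in the general domain here) and can even go negative when items are inconsistently keyed.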

  20. Psychometric properties and validation of the Italian version of the Family Assessment Measure Third Edition - Short Version - in a nonclinical sample.

    PubMed

    Pellerone, Monica; Ramaci, Tiziana; Parrello, Santa; Guariglia, Paola; Giaimo, Flavio

    2017-01-01

    Family functioning plays an important role in developing and maintaining dysfunctional behaviors, especially during adolescence. The lack of indicators of family functioning, as determinants of personal and interpersonal problems, represents an obstacle to the activities aimed at developing preventive and intervention strategies. The Process Model of Family Functioning provides a conceptual framework organizing and integrating various concepts into a comprehensive family assessment; this model underlines that through the process of task accomplishment, each family meets objectives central to its life as a group. The Family Assessment Measure Third Edition (FAM III), based on the Process Model of Family Functioning, is among the most frequently used self-report instruments to measure family functioning. The present study aimed to evaluate the psychometric properties of the Italian version of the Family Assessment Measure Third Edition - Short Version (Brief FAM-III). It consists of three modules: General Scale, which evaluates the family as a system; Dyadic Relationships Scale, which examines how each family member perceives his/her relationship with another member; and Self-Rating Scale, which indicates how each family member is perceived within the nucleus. The Brief FAM-III, together with the Family Assessment Device, was administered to 484 subjects, members of 162 Italian families, comprising 162 fathers aged between 35 and 73 years, 162 mothers aged between 34 and 69 years, and 160 children aged between 12 and 35 years. Correlation, paired-sample t-test, and reliability analyses were carried out. General item analysis shows good reliability, with an overall Cronbach's α coefficient of 0.96. The Brief FAM-III has satisfactory internal consistency, with Cronbach's α equal to 0.90 for the General Scale, 0.94 for the Dyadic Relationships Scale, and 0.88 for the Self-Rating Scale. 
The Brief FAM-III can be a psychometrically reliable and valid measure for the assessment of family strengths and weaknesses within Italian contexts. The instrument can be used to obtain an overall idea of family functioning, for the purposes of preliminary screening, and for monitoring family functioning over time or during treatment.
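Cronbach's α figures like those above are computed from the per-item variances and the variance of the scale totals; a minimal sketch with invented Likert-scale data (not FAM-III responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 respondents x 3 items on a 1-5 Likert scale.
scores = np.array([
    [1, 2, 1],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
    [5, 4, 5],
    [5, 5, 4],
])
alpha = cronbach_alpha(scores)   # high alpha: items co-vary strongly
```

With these made-up scores α ≈ 0.95; values in the 0.88-0.96 range reported above indicate similarly strong co-variation among items within each scale.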

  2. TES Validation Reports

    Atmospheric Science Data Center

    2014-06-30

    ... Reports: TES Data Versions: TES Validation Report Version 6.0 (PDF) R13 processing version; F07_10 file versions TES Validation Report Version 5.0 (PDF) R12 processing version; F06_08, F06_09 file ...

  3. A PetriNet-Based Approach for Supporting Traceability in Cyber-Physical Manufacturing Systems

    PubMed Central

    Huang, Jiwei; Zhu, Yeping; Cheng, Bo; Lin, Chuang; Chen, Junliang

    2016-01-01

    With the growing popularity of complex dynamic activities in manufacturing processes, traceability of the entire life of every product has drawn significant attention especially for food, clinical materials, and similar items. This paper studies the traceability issue in cyber-physical manufacturing systems from a theoretical viewpoint. Petri net models are generalized for formulating dynamic manufacturing processes, based on which a detailed approach for enabling traceability analysis is presented. Models as well as algorithms are carefully designed, which can trace back the lifecycle of a possibly contaminated item. A practical prototype system for supporting traceability is designed, and a real-life case study of a quality control system for bee products is presented to validate the effectiveness of the approach. PMID:26999141
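As a toy illustration of the traceability idea (not the paper's Petri net formalism or algorithms), a firing log of transitions that consume and produce batch identifiers supports a simple backward trace from a possibly contaminated item; the bee-product names are hypothetical:

```python
# Each log entry: (transition name, input batch IDs consumed, output batch IDs produced).
firing_log = [
    ("mix",    {"pollen_A", "honey_B"}, {"blend_1"}),
    ("mix",    {"pollen_C", "honey_B"}, {"blend_2"}),
    ("bottle", {"blend_1"},             {"jar_10"}),
    ("bottle", {"blend_2"},             {"jar_11"}),
]

def trace_back(item, log):
    """Collect every upstream batch that fed into `item`."""
    ancestors = set()
    frontier = {item}
    while frontier:
        nxt = set()
        for _, inputs, outputs in log:
            if frontier & outputs:          # this firing produced something in the frontier
                nxt |= inputs - ancestors   # so all of its inputs are ancestors
        ancestors |= nxt
        frontier = nxt
    return ancestors
```

For a contaminated `jar_10`, the trace returns `blend_1` and its raw inputs while correctly excluding `pollen_C`, which only fed the other jar.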

  5. A model of the human observer and decision maker

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1981-01-01

    The decision process is described in terms of classical sequential decision theory, with the hypothesis that an abnormal condition has occurred considered by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence, which is the output of the perception and information-processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single-variable failure detection tasks resulted in a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
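A minimal sketch of such a sequential test on an innovation sequence, here a Wald-style log-likelihood ratio with a CUSUM-type reset and invented parameters (not the model's fitted values):

```python
# Test N(mu1, 1) "failure" against N(0, 1) "normal" on successive innovations.
def detect_failure(innovations, mu1=1.0, threshold=4.0):
    """Index of the first sample at which the cumulative log-likelihood
    ratio crosses the threshold, or None if it never does."""
    llr = 0.0
    for i, v in enumerate(innovations):
        # log[N(v; mu1, 1) / N(v; 0, 1)] = mu1*v - mu1**2 / 2
        llr += mu1 * v - mu1 ** 2 / 2.0
        llr = max(llr, 0.0)   # reset keeps the statistic from drifting downward
        if llr >= threshold:
            return i
    return None

quiet = [0.1, -0.2, 0.0, 0.15, -0.1] * 20          # 100 normal innovations
shifted = quiet + [1.1, 0.8, 1.2, 0.9, 1.0] * 10   # mean shift from sample 100
```

On the shifted sequence the alarm fires a few samples after the change, while the quiet sequence raises no alarm; raising the threshold slows detection but reduces false alarms, which is the speed/accuracy trade-off the model captures.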

  6. Exercise barriers self-efficacy: development and validation of a subscale for individuals with cancer-related lymphedema.

    PubMed

    Buchan, Jena; Janda, Monika; Box, Robyn; Rogers, Laura; Hayes, Sandi

    2015-03-18

    No tool exists to measure self-efficacy for overcoming lymphedema-related exercise barriers in individuals with cancer-related lymphedema. However, an existing scale measures confidence to overcome general exercise barriers in cancer survivors. Therefore, the purpose of this study was to develop, validate and assess the reliability of a subscale, to be used in conjunction with the general barriers scale, for determining exercise barriers self-efficacy in individuals facing lymphedema-related exercise barriers. A lymphedema-specific exercise barriers self-efficacy subscale was developed and validated using a cohort of 106 cancer survivors with cancer-related lymphedema, from Brisbane, Australia. An initial ten-item lymphedema-specific barrier subscale was developed and tested, with participant feedback and principal components analysis results used to guide development of the final version. Validity and test-retest reliability analyses were conducted on the final subscale. The final lymphedema-specific subscale contained five items. Principal components analysis revealed these items loaded highly (>0.75) on a separate factor when tested with a well-established nine-item general barriers scale. The final five-item subscale demonstrated good construct and criterion validity, high internal consistency (Cronbach's alpha = 0.93) and test-retest reliability (ICC = 0.67, p < 0.01). A valid and reliable lymphedema-specific subscale has been developed to assess exercise barriers self-efficacy in individuals with cancer-related lymphedema. This scale can be used in conjunction with an existing general exercise barriers scale to enhance exercise adherence in this understudied patient group.
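An ICC of the kind reported for test-retest reliability is computed from a subjects-by-trials score matrix; a sketch using the one-way random-effects form ICC(1,1) with invented scores (the abstract does not state which ICC form was used):

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """ICC(1,1) from an (n_subjects x k_trials) matrix, one-way random model."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)         # between-subjects MS
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within MS
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative test-retest data: 5 subjects scored on two occasions.
scores = np.array([[4.0, 4.2], [3.1, 3.0], [5.0, 4.8], [2.2, 2.5], [4.4, 4.1]])
icc = icc_oneway(scores)
```

With these made-up scores the ICC is close to 1 because between-subject variance dwarfs the small test-retest differences; a value of 0.67 as reported above reflects proportionally larger within-subject variation.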

  7. Assessing Cognitive Performance in Badminton Players: A Reproducibility and Validity Study

    PubMed Central

    van de Water, Tanja; Huijgen, Barbara; Faber, Irene; Elferink-Gemser, Marije

    2017-01-01

    Fast reaction and good inhibitory control are associated with elite sports performance. To evaluate the reproducibility and validity of a newly developed Badminton Reaction Inhibition Test (BRIT), fifteen elite (25 ± 4 years) and nine non-elite (24 ± 4 years) Dutch male badminton players participated in the study. The BRIT measured four components: domain-general reaction time, badminton-specific reaction time, domain-general inhibitory control and badminton-specific inhibitory control. Five participants were retested within three weeks on the badminton-specific components. Reproducibility was acceptable for badminton-specific reaction time (ICC = 0.626, CV = 6%) and for badminton-specific inhibitory control (ICC = 0.317, CV = 13%). Good construct validity was shown for badminton-specific reaction time, which discriminated between elite and non-elite players (F = 6.650, p < 0.05). Elite players did not outscore non-elite players on domain-general reaction time or on either component of inhibitory control (p > 0.05). Concurrent validity for domain-general reaction time was good, as it was associated with national ranking for both elite (ρ = 0.70, p < 0.01) and non-elite (ρ = 0.70, p < 0.05) players. No relationship was found between national ranking and badminton-specific reaction time or either component of inhibitory control (p > 0.05). In conclusion, the reproducibility and validity of the inhibitory control assessment were not confirmed; however, the BRIT appears to be a reproducible and valid measure of reaction time in badminton players. Reaction time measured with the BRIT may provide input for training programs aiming to improve badminton players’ performance. PMID:28210347

  9. Validity and Reliability of a General Nutrition Knowledge Questionnaire for Japanese Adults.

    PubMed

    Matsumoto, Mai; Tanaka, Rie; Ikemoto, Shinji

    2017-01-01

    Nutrition knowledge is necessary for individuals to adopt appropriate dietary habits, and needs to be evaluated before nutrition education is provided. However, there is no tool to assess the general nutrition knowledge of adults in Japan. Our aims were to determine the validity and reliability of a general nutrition knowledge questionnaire for Japanese adults. We developed a pilot version of the Japanese general nutrition knowledge questionnaire (JGNKQ) and administered it to 1,182 Japanese adults aged 18-64 y to assess content validity and internal reliability. The JGNKQ was then revised based on the pilot study; the final version consisted of 5 sections and 147 items. To assess construct validity and test-retest reliability, the JGNKQ was administered twice in 2015 to female Japanese undergraduate students in their senior year. Ninety-six students majoring in nutrition and 44 students in other majors at the same university completed the first questionnaire; seventy-five students completed it twice. Responses from the first administration were used to assess construct validity, and responses from both administrations to assess test-retest reliability. Nutrition majors scored significantly higher than students in other majors on all sections of the questionnaire (p < 0.001), indicating good construct validity. Test-retest correlation coefficients were 0.75 overall, and 0.67, 0.67, 0.68 and 0.61 for the individual sections other than "The use of dietary information to make dietary choices". We suggest that the JGNKQ is an effective tool for assessing the nutrition knowledge of Japanese adults.

  10. Validity in the hiring and evaluation process.

    PubMed

    Gregg, Robert E

    2006-01-01

    Validity means "based on sound principles." Hiring decisions, discharges, and layoffs are often challenged in court. Unfortunately the employer's defenses are too often found "invalid." The Americans With Disabilities Act requires the employer to show a "validated" hiring process. Defense of discharges or layoffs often focuses on validity of the employer's decision. This article explains the elements of validity needed for sound and defendable employment decisions.

  11. Development and Validation of an Assessment Instrument for Course Experience in a General Education Integrated Science Course

    ERIC Educational Resources Information Center

    Liu, Juhong Christie; St. John, Kristen; Courtier, Anna M. Bishop

    2017-01-01

    Identifying instruments and surveys to address geoscience education research (GER) questions is among the high-ranked needs in a 2016 survey of the GER community (St. John et al., 2016). The purpose of this study was to develop and validate a student-centered assessment instrument to measure course experience in a general education integrated…

  12. Aptitude Tests and Successful College Students: The Predictive Validity of the General Aptitude Test (GAT) in Saudi Arabia

    ERIC Educational Resources Information Center

    Alnahdi, Ghaleb Hamad

    2015-01-01

    Aptitude tests should predict student success at the university level. This study examined the predictive validity of the General Aptitude Test (GAT) in Saudi Arabia. Data for 27420 students enrolled at Prince Sattam bin Abdulaziz University were analyzed. Of these students, 17565 were male students, and 9855 were female students. Multiple…

  13. The Factorial Validity of The Maslach Burnout Inventory--General Survey in Representative Samples of Eight Different Occupational Groups

    ERIC Educational Resources Information Center

    Langballe, Ellen Melbye; Falkum, Erik; Innstrand, Siw Tone; Aasland, Olaf Gjerlow

    2006-01-01

    The Maslach Burnout Inventory--General Survey (MBI-GS) is designed to measure the three subdimensions (exhaustion, cynicism, and professional efficacy) of burnout in a wide range of occupations. This article examines the factorial validity of the MBI-GS across eight different occupational groups in Norway: lawyers, physicians, nurses, teachers,…

  14. Adaptation of General Belongingness Scale into Turkish for Adolescents: Validity and Reliability Studies

    ERIC Educational Resources Information Center

    Yildiz, Mehmet Ali

    2017-01-01

    The current research aims to adapt the General Belongingness Scale (GBS), developed by Malone, Pillow, and Osman (2012), into Turkish for adolescents and to conduct the validity and reliability studies for it. Ages of the participants, a total of 567 adolescents including 274 males (48.3%) and 293 females (51.7%) ranged between 14 and 18 (average…

  15. Can Findings from Randomized Controlled Trials of Social Skills Training in Autism Spectrum Disorder Be Generalized? The Neglected Dimension of External Validity

    ERIC Educational Resources Information Center

    Jonsson, Ulf; Olsson, Nora Choque; Bölte, Sven

    2016-01-01

    Systematic reviews have traditionally focused on internal validity, while external validity often has been overlooked. In this study, we systematically reviewed determinants of external validity in the accumulated randomized controlled trials of social skills group interventions for children and adolescents with autism spectrum disorder. We…

  16. Onboard Processing and Autonomous Operations on the IPEX Cubesat

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Flatley, Tom; Crum, Gary; Geist, Alessandro; Lin, Michael; Williams, Austin; Bellardo, John; Puig-Suari, Jordi

    2012-01-01

    IPEX is a 1U CubeSat sponsored by the NASA Earth Science Technology Office (ESTO), the goals of which are to: (1) flight-validate high-performance flight computing; (2) flight-validate onboard instrument data processing and product generation software; (3) flight-validate autonomous operations for instrument processing; and (4) enhance NASA outreach and university ties.

  17. Reinstatement of contextual conditioned anxiety in virtual reality and the effects of transcutaneous vagus nerve stimulation in humans.

    PubMed

    Genheimer, Hannah; Andreatta, Marta; Asan, Esther; Pauli, Paul

    2017-12-20

    Since exposure therapy for anxiety disorders incorporates extinction of contextual anxiety, relapses may be due to reinstatement processes. Animal research has demonstrated more stable extinction memory and less anxiety relapse with vagus nerve stimulation (VNS). We report a valid human three-day context conditioning, extinction and return-of-anxiety protocol, which we used to examine the effects of transcutaneous VNS (tVNS). Seventy-five healthy participants received electric stimuli (unconditioned stimuli, US) during acquisition (Day 1) when guided through one virtual office (anxiety context, CTX+) but never in another (safety context, CTX-). During extinction (Day 2), participants received tVNS, sham, or no stimulation and revisited both contexts without US delivery. On Day 3, participants received three USs for reinstatement, followed by a test phase. Successful acquisition, i.e. startle potentiation, lower valence, higher arousal, anxiety and contingency ratings in CTX+ versus CTX-, the disappearance of these effects during extinction, and successful reinstatement indicate the validity of this paradigm. Interestingly, we found generalized reinstatement in startle responses and differential reinstatement in valence ratings. Altogether, our protocol serves as a valid conditioning paradigm. Reinstatement effects indicate different anxiety networks underlying physiological versus verbal responses. However, tVNS affected neither extinction nor reinstatement, which calls for validation and improvement of the stimulation protocol.

  18. The application and value of information sources in clinical practice: an examination of the perspective of naturopaths.

    PubMed

    Steel, Amie; Adams, Jon

    2011-06-01

    The approach of evidence-based medicine (EBM), providing a paradigm to validate information sources and a process for critiquing their value, is an important platform for guiding practice. Researchers have explored the application and value of information sources in clinical practice with regard to a range of health professions; however, naturopathic practice has been overlooked. An exploratory study of naturopaths' perspectives on the application and value of information sources was undertaken. Semi-structured interviews were conducted with 12 naturopaths in current clinical practice, concerning the information sources they use and their perceptions of those sources. Thematic analysis identified differences in how the various information sources were applied, depending upon their perceived validity. Internet databases were viewed as highly valid. Textbooks, formal education and interpersonal interactions were judged based upon a variety of factors, whilst validation of general internet sites and manufacturers' information was required prior to use. The findings of this study provide preliminary aid to those responsible for supporting naturopaths' information use and access. In particular, they may assist publishers, medical librarians and professional associations in developing strategies to expand the clinically useful information sources available to naturopaths.

  19. Development and Validation of the Agency for Healthcare Research and Quality Measures of Potentially Preventable Emergency Department (ED) Visits: The ED Prevention Quality Indicators for General Health Conditions.

    PubMed

    Davies, Sheryl; Schultz, Ellen; Raven, Maria; Wang, Nancy Ewen; Stocks, Carol L; Delgado, Mucio Kit; McDonald, Kathryn M

    2017-10-01

    The objective was to develop and validate rates of potentially preventable emergency department (ED) visits as indicators of community health, using the Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project 2008-2010 State Inpatient Databases and State Emergency Department Databases. The study design combined empirical analyses and structured panel reviews. Panels of 14-17 clinicians and end users evaluated a set of ED Prevention Quality Indicators (PQIs) using a Modified Delphi process. Empirical analyses included assessing variation in ED PQI rates across counties and the sensitivity of those rates to county-level poverty, uninsurance, and density of primary care physicians (PCPs). ED PQI rates varied widely across U.S. communities. Indicator rates were significantly associated with county-level poverty, median income, Medicaid insurance, and levels of uninsurance. A few indicators were significantly associated with PCP density, with higher rates in areas of greater density. A clinical and an end-user panel separately rated the indicators as having strong face validity for most uses evaluated. The ED PQIs have thus undergone initial validation as indicators of community health, with potential for use in public reporting, population health improvement, and research.

  20. Validation of a program for supercritical power plant calculations

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian

    2011-12-01

    This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated against the in-house code of the Institute of Power Engineering and Turbomachinery, which has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers, and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the Institute's in-house code. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, and heat provided to and taken from the steam cycle were compared. The last step of the analysis was calculation of the relative errors of the compared values; the method used for these calculations is presented in the paper. The computed relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that the GateCycle software is suitable for calculations of supercritical power plants.
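The cross-check step described above reduces to computing relative errors between the two models' results and testing them against the 0.1% bound; a sketch with illustrative values (not the article's data):

```python
# Hypothetical results from the two models for a few compared quantities.
gatecycle = {"net_power_MW": 900.2, "efficiency_pct": 45.03, "live_steam_T_C": 600.0}
inhouse   = {"net_power_MW": 900.0, "efficiency_pct": 45.00, "live_steam_T_C": 600.0}

# Relative error in percent, taking the in-house code as the reference.
rel_errors = {
    key: abs(gatecycle[key] - inhouse[key]) / abs(inhouse[key]) * 100.0
    for key in gatecycle
}
all_within = all(err <= 0.1 for err in rel_errors.values())  # 0.1% bound
```

With these values every quantity agrees to well under 0.1%, mirroring the level of agreement the article reports.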

  1. Development and validation of the positive affect and well-being scale for the neurology quality of life (Neuro-QOL) measurement system.

    PubMed

    Salsman, John M; Victorson, David; Choi, Seung W; Peterman, Amy H; Heinemann, Allen W; Nowinski, Cindy; Cella, David

    2013-11-01

    To develop and validate an item-response theory-based patient-reported outcomes assessment tool of positive affect and well-being (PAW). This is part of a larger NINDS-funded study to develop a health-related quality of life measurement system across major neurological disorders, called Neuro-QOL. Informed by a literature review and qualitative input from clinicians and patients, item pools were created to assess PAW concepts. Items were administered to a general population sample (N = 513) and a group of individuals with a variety of neurologic conditions (N = 581) for calibration and validation purposes, respectively. A 23-item calibrated bank and a 9-item short form of PAW was developed, reflecting components of positive affect, life satisfaction, or an overall sense of purpose and meaning. The Neuro-QOL PAW measure demonstrated sufficient unidimensionality and displayed good internal consistency, test-retest reliability, model fit, convergent and discriminant validity, and responsiveness. The Neuro-QOL PAW measure was designed to aid clinicians and researchers to better evaluate and understand the potential role of positive health processes for individuals with chronic neurological conditions. Further psychometric testing within and between neurological conditions, as well as testing in non-neurologic chronic diseases, will help evaluate the generalizability of this new tool.
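For context, IRT-calibrated banks of this kind typically model each item with a logistic item response function; a sketch of the common two-parameter logistic (2PL) form with illustrative parameters (Neuro-QOL's actual model and calibrations are not given here):

```python
import math

def p_endorse(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of endorsing an item,
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the endorsement probability is exactly 0.5; larger a
# makes the curve steeper around that point.
p_mid = p_endorse(0.0, a=1.5, b=0.0)
p_high = p_endorse(2.0, a=1.5, b=0.0)
```

Calibrating a 23-item bank amounts to estimating (a, b)-type parameters for every item, which is what allows short forms and adaptive tests to be scored on a common metric.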

  2. Spanish validation of the Exercise Therapy Burden Questionnaire (ETBQ) for the assessment of barriers associated with doing physical therapy for the treatment of chronic illness.

    PubMed

    Navarro-Albarracín, César; Poiraudeau, Serge; Chico-Matallana, Noelia; Vergara-Martín, Jesús; Martin, William; Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A

    2018-06-08

    To validate the Spanish version of the Exercise Therapy Burden Questionnaire (ETBQ) for the assessment of barriers associated with doing physical therapy for the treatment of chronic ailments. A sample of 177 patients, 55.93% men and 44.07% women, with an average age of 51.03±14.91 years, was recruited. The reliability of the questionnaire was tested with Cronbach's alpha coefficient, and the validity of the instrument was assessed through divergent validation and factor analysis. The factor structure differed from that of the original questionnaire, which comprises a single dimension; here, three dimensions were identified: (1) general limitations for doing physical exercise; (2) physical limitations for doing physical exercise; and (3) limitations caused by the patients' predisposition toward their exercises. Test-retest reliability was measured with the intraclass correlation coefficient (ICC) and the Bland-Altman plot. Cronbach's alpha was 0.8715 for the total ETBQ. The test-retest ICC was 0.745 and the Bland-Altman plot showed no systematic trend. A translated and validated Spanish version of the ETBQ questionnaire was thus obtained.

  3. Validation of the VERITAS-Pro treatment adherence scale in a Spanish sample population with hemophilia.

    PubMed

    Cuesta-Barriuso, Rubén; Torres-Ortuño, Ana; Galindo-Piñana, Pilar; Nieto-Munuera, Joaquín; Duncan, Natalie; López-Pina, José Antonio

    2017-01-01

    We aimed to validate a Spanish version of the Validated Hemophilia Regimen Treatment Adherence Scale - Prophylaxis (VERITAS-Pro) questionnaire for use in patients with hemophilia under prophylactic treatment. The VERITAS-Pro scale was adapted through a back-translation process: a bilingual native Spanish translator translated the scale from English to Spanish, and a bilingual native English translator then translated it back from Spanish to English. Disagreements were resolved by consensus between the research team and the translators. Seventy-three patients with hemophilia, aged 13-62 years, were enrolled in the study. The scale was applied twice, 2 months apart, to evaluate test-retest reliability. Internal consistency reliability was lower on the Spanish VERITAS-Pro than on the English version, whereas test-retest reliability was high, ranging from 0.83 to 0.92. No significant differences (P > 0.05) were found between test and retest scores on any VERITAS-Pro subscale. In general, Spanish patients showed higher rates of nonadherence than American patients on all subscales. The Spanish version of the VERITAS-Pro has high levels of consistency and empirical validity, and can be administered to assess adherence to prophylactic treatment in patients with hemophilia.

  4. Construction and preliminary validation of the Barcelona Immigration Stress Scale.

    PubMed

    Tomás-Sábado, Joaquín; Qureshi, Adil; Antonin, Montserrat; Collazos, Francisco

    2007-06-01

    In the study of mental health and migration, an increasing number of researchers have shifted the focus away from the concept of acculturation towards the stress present in the migratory experience. The bulk of research on acculturative stress has been carried out in the United States, and thus the definition and measurement of the construct have been predicated on that cultural and demographic context, which is of dubious applicability in Europe in general, and Spain in particular. Further, some scales have focused on international students, which downplays the importance of the migratory process, because it deals with a special subset of people who are not formally immigrating. The Barcelona Immigration Stress Scale was developed, using expert and focus group review, to measure acculturative stress appropriate to immigrants in Spain, and has 42 items. The scale shows acceptable internal validity and, consistent with other scales, suggests that immigration stress is a complex construct.

  5. Rapid hybridization of nucleic acids using isotachophoresis

    PubMed Central

    Bercovici, Moran; Han, Crystal M.; Liao, Joseph C.; Santiago, Juan G.

    2012-01-01

    We use isotachophoresis (ITP) to control and increase the rate of nucleic acid hybridization reactions in free solution. We present a new physical model, validation experiments, and demonstrations of this assay. We studied the coupled physicochemical processes of preconcentration, mixing, and chemical reaction kinetics under ITP. Our experimentally validated model enables a closed form solution for ITP-aided reaction kinetics, and reveals a new characteristic time scale which correctly predicts order 10,000-fold speed-up of chemical reaction rate for order 100 pM reactants, and greater enhancement at lower concentrations. At 500 pM concentration, we measured a reaction time which is 14,000-fold lower than that predicted for standard second-order hybridization. The model and method are generally applicable to acceleration of reactions involving nucleic acids, and may be applicable to a wide range of reactions involving ionic reactants. PMID:22733732
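
The reported speed-up follows directly from second-order kinetics: the characteristic hybridization time scales as 1/(k·c0), so preconcentrating reactants by a factor p shortens it p-fold. A back-of-the-envelope sketch (the rate constant below is an assumed, illustrative value, not one from the paper):

```python
# Why preconcentration accelerates second-order hybridization: the
# characteristic time is t ~ 1/(k*c0), so concentrating reactants by a
# factor p shortens it by the same factor. k is an assumed value.
k = 1e6        # assumed hybridization rate constant, 1/(M*s)
c0 = 500e-12   # 500 pM initial concentration (from the abstract)
p = 14_000     # fold-preconcentration matching the reported speed-up

t_standard = 1 / (k * c0)       # standard second-order time scale, s
t_itp = 1 / (k * (p * c0))      # time scale with ITP preconcentration
speedup = t_standard / t_itp    # equals p for this scaling
```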

  6. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  7. Best Practices: How to Evaluate Psychological Science for Use by Organizations.

    PubMed

    Fiske, Susan T; Borgida, Eugene

    2011-01-01

    We discuss how organizations can evaluate psychological science for its potential usefulness to their own purposes. Common sense is often the default but inadequate alternative, and benchmarking supplies only collective hunches instead of validated principles. External validity is an empirical process of identifying moderator variables, not a simple yes-no judgment about whether lab results replicate in the field. Hence, convincing criteria must specify what constitutes high-quality empirical evidence for organizational use. First, we illustrate some theories and science that have potential use. Then we describe generally accepted criteria for scientific quality and consensus, starting with peer review for quality, and scientific agreement in forms ranging from surveys of experts to meta-analyses to National Research Council consensus reports. Linkages of basic science to organizations entail communicating expert scientific consensus, motivating managerial interest, and translating broad principles to specific contexts. We close with parting advice to both sides of the researcher-practitioner divide.

  8. A realistic host-vector transmission model for describing malaria prevalence pattern.

    PubMed

    Mandal, Sandip; Sinha, Somdatta; Sarkar, Ram Rup

    2013-12-01

    Malaria continues to be a major public health concern all over the world even after effective control policies have been employed, and considerable understanding of the disease biology has been attained, from both the experimental and modelling perspectives. Interactions between different general and local processes, such as dependence on age and immunity of the human host, variations of temperature and rainfall in tropical and sub-tropical areas, and the continued presence of asymptomatic infections, regulate the host-vector interactions and are responsible for the continuing disease prevalence pattern. In this paper, a general mathematical model of malaria transmission is developed considering short- and long-term age-dependent immunity of the human host and its interaction with the pathogen-infected mosquito vector. The model is studied analytically and numerically to understand the role of different parameters related to mosquitoes and humans. To validate the model against the disease prevalence pattern in a particular region, real epidemiological data from the north-eastern part of India were used, and the effect of seasonal variation in mosquito density was modelled based on local climatic data. The model, developed from general features of host-vector interactions and modified simply by incorporating local environmental factors with minimal changes, can successfully explain the disease transmission process in the region. This provides a general approach toward modelling malaria that can be adapted to control future outbreaks.
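
The paper's age- and immunity-structured model is not reproduced here, but the underlying host-vector coupling can be sketched with the classical Ross-Macdonald equations (all parameter values below are illustrative assumptions, not fitted to the Indian data):

```python
# Minimal Ross-Macdonald host-vector sketch, integrated with Euler steps.
# x, y are the infected fractions of humans and mosquitoes; all parameter
# values are illustrative assumptions.
a, b, c = 0.3, 0.5, 0.5   # biting rate; mosquito-to-human and
                          # human-to-mosquito transmission probabilities
m = 10.0                  # mosquitoes per human
r, g = 0.01, 0.1          # human recovery rate, mosquito death rate

x, y = 0.01, 0.0          # initial infected fractions
dt = 0.1
for _ in range(100_000):  # integrate to (approximate) equilibrium
    dx = m * a * b * y * (1 - x) - r * x
    dy = a * c * x * (1 - y) - g * y
    x += dt * dx
    y += dt * dy
```

With these parameters the system settles to an endemic equilibrium with most humans and roughly 60% of mosquitoes infected; lowering m or a below threshold instead drives both fractions to zero.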

  9. Quantitative analysis of packed and compacted granular systems by x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Fu, Xiaowei; Milroy, Georgina E.; Dutt, Meenakshi; Bentham, A. Craig; Hancock, Bruno C.; Elliott, James A.

    2005-04-01

    The packing and compaction of powders are general processes in the pharmaceutical, food, ceramic and powder metallurgy industries. Understanding how particles pack in a confined space and how powders behave during compaction is crucial for producing high quality products. This paper outlines a new technique, based on modern desktop X-ray tomography and image processing, to quantitatively investigate the packing of particles during powder compaction and provide insight into how powders densify during compaction, relating materials properties and processing conditions to tablet manufacture. A variety of powder systems were considered, including glass, sugar and NaCl, with a typical particle size of 200-300 μm, as well as binary mixtures of NaCl and glass spheres. The results are new and have been validated by SEM observation and numerical simulations using discrete element methods (DEM). The research demonstrates that the XMT technique has potential for further investigation of pharmaceutical processing and even for verifying other physical models of complex packing.
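
A typical post-processing step for segmented tomography volumes is computing the packing (solid) fraction from voxel counts. A toy sketch on a synthetic binary volume (real XMT volumes are simply much larger 3-D arrays):

```python
# Packing fraction and porosity from a segmented voxel volume.
# The tiny binary volume below is synthetic, for illustration only.
volume = [
    [[1, 1, 0], [1, 0, 0], [1, 1, 1]],
    [[0, 1, 1], [1, 1, 0], [0, 0, 1]],
]  # 1 = solid voxel, 0 = void voxel

solid = sum(v for plane in volume for row in plane for v in row)
total = sum(len(row) for plane in volume for row in plane)
packing_fraction = solid / total
porosity = 1 - packing_fraction
```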

  10. Basic processes in reading aloud and colour naming: towards a better understanding of the role of spatial attention.

    PubMed

    Robidoux, Serje; Rauwerda, Derek; Besner, Derek

    2014-05-01

    Whether or not lexical access from print requires spatial attention has been debated intensively for the last 30 years. Studies involving colour naming generally find evidence that "unattended" words are processed. In contrast, reading-based experiments do not find evidence of distractor processing. One theory ascribes the discrepancy to weaker attentional demands for colour identification. If colour naming does not capture all of a subject's attention, the remaining attentional resources can be deployed to process the distractor word. The present study combined exogenous spatial cueing with colour naming and reading aloud separately and found that colour naming is less sensitive to the validity of a spatial cue than is reading words aloud. Based on these results, we argue that colour naming studies do not effectively control attention so that no conclusions about unattended distractor processing can be drawn from them. Thus we reiterate the consistent conclusion drawn from reading aloud and lexical decision studies: There is no word identification without (spatial) attention.

  11. Major hydrogeochemical processes in an acid mine drainage affected estuary.

    PubMed

    Asta, Maria P; Calleja, Maria Ll; Pérez-López, Rafael; Auqué, Luis F

    2015-02-15

    This study provides geochemical data with the aim of identifying and quantifying the main processes occurring in an Acid Mine Drainage (AMD) affected estuary. With that purpose, water samples of the Huelva estuary were collected during a tidal half-cycle and ion-ion plots and geochemical modeling were performed to obtain a general conceptual model. Modeling results indicated that the main processes responsible for the hydrochemical evolution of the waters are: (i) the mixing of acid fluvial water with alkaline ocean water; (ii) precipitation of Fe oxyhydroxysulfates (schwertmannite) and hydroxides (ferrihydrite); (iii) precipitation of Al hydroxysulfates (jurbanite) and hydroxides (amorphous Al(OH)3); (iv) dissolution of calcite; and (v) dissolution of gypsum. All these processes, thermodynamically feasible in the light of their calculated saturation states, were quantified by mass-balance calculations and validated by reaction-path calculations. In addition, sorption processes were deduced by the non-conservative behavior of some elements (e.g., Cu and Zn). Copyright © 2014 Elsevier Ltd. All rights reserved.
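
The mixing of river and ocean water identified in (i) is commonly quantified with a two-endmember conservative mixing balance. A minimal sketch with illustrative concentrations (not measurements from the Huelva study):

```python
# Two-endmember conservative mixing: the fraction of river water in a
# sample follows from a conservative tracer concentration. Values are
# illustrative, not data from this study.
def mixing_fraction(c_mix, c_river, c_ocean):
    """c_mix = f*c_river + (1 - f)*c_ocean
    =>  f = (c_mix - c_ocean) / (c_river - c_ocean)"""
    return (c_mix - c_ocean) / (c_river - c_ocean)

# e.g. chloride (mg/L): low in river water, high in seawater
f = mixing_fraction(c_mix=12_000.0, c_river=200.0, c_ocean=19_000.0)
```

Deviations of a non-conservative element from the concentration predicted by f are the signature of the precipitation, dissolution and sorption processes listed above.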

  12. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of emotion with gender discrimination task revealed increased activation of inferior parietal lobule, which highlights the involvement of parietal areas in processing of high level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  13. A real-world approach to Evidence-Based Medicine in general practice: a competency framework derived from a systematic review and Delphi process.

    PubMed

    Galbraith, Kevin; Ward, Alison; Heneghan, Carl

    2017-05-03

    Evidence-Based Medicine (EBM) skills have been included in general practice curricula and competency frameworks. However, GPs experience numerous barriers to developing and maintaining EBM skills, and some GPs feel the EBM movement misunderstands, and threatens, their traditional role. We therefore need a new approach that acknowledges the constraints encountered in real-world general practice. The aim of this study was to synthesise from empirical research a real-world EBM competency framework for general practice, which could be applied in training, in the individual pursuit of continuing professional development, and in routine care. We sought to integrate evidence from the literature with evidence derived from the opinions of experts in the fields of general practice and EBM. We synthesised two sets of themes describing the meaning of EBM in general practice. One set of themes was derived from a mixed-methods systematic review of the literature; the other set was derived from the further development of those themes using a Delphi process among a panel of EBM and general practice experts. From these two sets of themes we constructed a real-world EBM competency framework for general practice. A simple competency framework was constructed that acknowledges the constraints of real-world general practice: (1) mindfulness - in one's approach towards EBM itself, and to the influences on decision-making; (2) pragmatism - in one's approach to finding and evaluating evidence; and (3) knowledge of the patient - as the most useful resource in effective communication of evidence. We present a clinical scenario to illustrate how a GP might demonstrate these competencies in their routine daily work. We have proposed a real-world EBM competency framework for general practice, derived from empirical research, which acknowledges the constraints encountered in modern general practice. Further validation of these competencies is required, both as an educational resource and as a strategy for actual practice.

  14. You don't have to believe everything you read: background knowledge permits fast and efficient validation of information.

    PubMed

    Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta

    2009-03-01

    In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.

  15. Validation of a General and Sport Nutrition Knowledge Questionnaire in Adolescents and Young Adults: GeSNK.

    PubMed

    Calella, Patrizia; Iacullo, Vittorio Maria; Valerio, Giuliana

    2017-04-29

    Good knowledge of nutrition is widely thought to be an important aspect of maintaining a balanced and healthy diet. The aim of this study was to develop and validate a new reliable tool to measure general and sport nutrition knowledge (GeSNK) in people who practice sports at different levels. The development of the GeSNK was carried out in six phases: (1) item development and selection by a panel of experts; (2) a pilot study to assess item difficulty and item discrimination; (3) measurement of internal consistency; (4) reliability assessment with a 2-week test-retest analysis; (5) concurrent validity testing by administering the questionnaire along with two other similar tools; and (6) construct validity testing by administering the questionnaire to three groups of young adults with different levels of general and sport nutrition knowledge. The final questionnaire consisted of 62 of the original 183 items. It is a consistent, valid, and suitable instrument that can be applied over time, making it a promising tool for examining the relationship between nutrition knowledge, demographic characteristics, and dietary behavior in adolescents and young adults.

  16. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, showing how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  17. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Fritz, Steffen

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
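
The accuracy reports generated in the final step of such a workflow reduce to confusion-matrix arithmetic: overall accuracy plus per-class user's and producer's accuracy. A sketch with synthetic counts:

```python
# Land cover accuracy-report arithmetic from a confusion matrix.
# Rows = map class, columns = reference (validation) class.
# The counts are synthetic, for illustration only.
classes = ["forest", "crop", "urban"]
confusion = [
    [50, 3, 2],
    [4, 40, 1],
    [1, 2, 30],
]

total = sum(sum(row) for row in confusion)
overall = sum(confusion[i][i] for i in range(len(classes))) / total
# user's accuracy: correct / row total (commission errors)
users = [confusion[i][i] / sum(confusion[i]) for i in range(len(classes))]
# producer's accuracy: correct / column total (omission errors)
producers = [confusion[i][i] / sum(row[i] for row in confusion)
             for i in range(len(classes))]
```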

  18. Entropy bounds, acceleration radiation, and the generalized second law

    NASA Astrophysics Data System (ADS)

    Unruh, William G.; Wald, Robert M.

    1983-05-01

    We calculate the net change in generalized entropy occurring when one attempts to empty the contents of a thin box into a black hole in the manner proposed recently by Bekenstein. The case of a "thick" box also is treated. It is shown that, as in our previous analysis, the effects of acceleration radiation prevent a violation of the generalized second law of thermodynamics. Thus, in this example, the validity of the generalized second law is shown to rest only on the validity of the ordinary second law and the existence of acceleration radiation. No additional assumptions concerning entropy bounds on the contents of the box need to be made.

  19. 38 CFR 3.14 - Validity of enlistments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Validity of enlistments. 3.14 Section 3.14 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.14 Validity of...

  20. 38 CFR 3.14 - Validity of enlistments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Validity of enlistments. 3.14 Section 3.14 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.14 Validity of...

  1. 38 CFR 3.14 - Validity of enlistments.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Validity of enlistments. 3.14 Section 3.14 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.14 Validity of...

  2. 38 CFR 3.14 - Validity of enlistments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Validity of enlistments. 3.14 Section 3.14 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.14 Validity of...

  3. 38 CFR 3.14 - Validity of enlistments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Validity of enlistments. 3.14 Section 3.14 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.14 Validity of...

  4. Natural language processing in pathology: a scoping review.

    PubMed

    Burger, Gerard; Abu-Hanna, Ameen; de Keizer, Nicolette; Cornet, Ronald

    2016-07-22

    Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text. We reviewed and assessed the use of NLP (natural language processing) for encoding pathology documents. Papers addressing NLP in pathology were retrieved from PubMed, the Association for Computing Machinery (ACM) Digital Library and the Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; the NLP methods used and their validation; software implementations; performance on the datasets used; and any reported use in practice. The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset. Four papers did not specify the method(s) used. Eighteen of the 26 studies that reported F-measure, recall or precision reported values over 0.9. Proprietary software was the most frequently mentioned category (14 studies); the General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical system use was reported in four papers. Most papers used expert annotation for validation. Different methods are used in NLP research in pathology, and good performance (that is, high precision and recall, and high retrieval/removal rates) is reported for all of them. A lack of validation and of shared datasets precludes performance comparison. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
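
The precision, recall and F-measure values compared across the reviewed papers come from simple arithmetic over classification counts; the counts below are invented for illustration:

```python
# Precision, recall and F1 (the balanced F-measure) from classification
# counts. The counts are synthetic, for illustration only.
tp, fp, fn = 90, 8, 12   # true positives, false positives, false negatives

precision = tp / (tp + fp)                       # correctness of extractions
recall = tp / (tp + fn)                          # completeness of extractions
f1 = 2 * precision * recall / (precision + recall)
```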

  5. Using ontologies to improve semantic interoperability in health data.

    PubMed

    Liyanage, Harshana; Krause, Paul; De Lusignan, Simon

    2015-07-10

    The present-day health data ecosystem comprises a wide array of complex heterogeneous data sources. A wide range of clinical, health care, social and other clinically relevant information is stored in these data sources. These data exist either as structured data or as free text. These data are generally individual person-based records, but social care data are generally case based, and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or be coded using one of many coding, classification or terminology systems that have often evolved in isolation and were designed to meet the needs of the context in which they were developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held on these different systems challenging. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating rapid building of ontologies for time-critical research studies.

  6. A New Scheme for Considering Soil Water-Heat Transport Coupling Based on Community Land Model: Model Description and Preliminary Validation

    NASA Astrophysics Data System (ADS)

    Wang, Chenghai; Yang, Kai

    2018-04-01

    Land surface models (LSMs) have developed significantly over the past few decades, with the result that most LSMs can generally reproduce the characteristics of the land surface. However, LSMs fail to reproduce some details of soil water and heat transport during seasonal transition periods because they neglect the effects of interactions between water movement and heat transfer in the soil. Such effects are critical for a complete understanding of water-heat transport within a soil thermohydraulic regime. In this study, a fully coupled water-heat transport scheme (FCS) is incorporated into the Community Land Model (version 4.5) to replace its original isothermal scheme; the new scheme is theoretically more complete. Observational data from five sites are used to validate the performance of the FCS. The simulation results at both single-point and global scales show that the FCS improved the simulation of soil moisture and temperature. The FCS better reproduced the characteristics of drier and colder surface layers in arid regions by considering the diffusion of soil water vapor, which is a non-negligible process in soil, especially for soil surface layers, whereas its effects in cold regions are generally the inverse. It also accounted for the sensible heat fluxes caused by liquid water flow, which can contribute to heat transfer in both surface and deep layers. The FCS affects the estimation of surface sensible heat (SH) and latent heat (LH) and provides details of soil heat and water transport, which aids understanding of the internal physical processes of soil water-heat migration.

  7. The General Assessment of Personality Disorder (GAPD): factor structure, incremental validity of self-pathology, and relations to DSM-IV personality disorders.

    PubMed

    Hentschel, Annett G; Livesley, W John

    2013-01-01

    Recent developments in the classification of personality disorder, especially moves toward more dimensional systems, create the need to assess general personality disorder apart from individual differences in personality pathology. The General Assessment of Personality Disorder (GAPD) is a self-report questionnaire designed to evaluate general personality disorder. The measure evaluates 2 major components of disordered personality: self or identity problems and interpersonal dysfunction. This study explores whether there is a single factor reflecting general personality pathology as proposed by the Diagnostic and Statistical Manual of Mental Disorders (5th ed.), whether self-pathology has incremental validity over interpersonal pathology as measured by GAPD, and whether GAPD scales relate significantly to Diagnostic and Statistical Manual of Mental Disorders (4th ed. [DSM-IV]) personality disorders. Based on responses from a German psychiatric sample of 149 participants, parallel analysis yielded a 1-factor model. Self Pathology scales of the GAPD increased the predictive validity of the Interpersonal Pathology scales of the GAPD. The GAPD scales showed a moderate to high correlation for 9 of 12 DSM-IV personality disorders.

  8. Psychometric Properties of the Bermond-Vorst Alexithymia Questionnaire (BVAQ) in the General Population and a Clinical Population.

    PubMed

    de Vroege, Lars; Emons, Wilco H M; Sijtsma, Klaas; van der Feltz-Cornelis, Christina M

    2018-01-01

    The Bermond-Vorst Alexithymia Questionnaire (BVAQ) has been validated in student samples and small clinical samples, but not in the general population; thus, representative general-population norms are lacking. We examined the factor structure of the BVAQ in Longitudinal Internet Studies for the Social Sciences panel data from the Dutch general population (N = 974). Factor analyses revealed a first-order five-factor model and a second-order two-factor model. However, in the second-order model, the factor interpreted as analyzing ability loaded on both the affective factor and the cognitive factor. Further analyses showed that the first-order test scores are more reliable than the second-order test scores. External and construct validity were addressed by comparing BVAQ scores with a clinical sample of patients suffering from somatic symptom and related disorder (SSRD) (N = 235). BVAQ scores differed significantly between the general population and patients suffering from SSRD, suggesting acceptable construct validity. Age was positively associated with alexithymia. Males showed higher levels of alexithymia. The BVAQ is a reliable alternative measure for measuring alexithymia.

  9. Hydrological and water quality processes simulation by the integrated MOHID model

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Eduardo, Jauch; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Different modelling approaches have been used in recent decades to study the water quality degradation caused by non-point source pollution. In this study, the MOHID fully distributed and physics-based model has been employed to simulate hydrological processes and nitrogen dynamics in a nitrate vulnerable zone: the Alegria River watershed (Basque Country, Northern Spain). The results of this study indicate that the MOHID code is suitable for simulating hydrological processes at the watershed scale, as the model performs satisfactorily in simulating discharge (NSE: 0.74 and 0.76 during the calibration and validation periods, respectively). The agronomical component of the code allowed the simulation of agricultural practices, which led to adequate crop yield simulation in the model. Furthermore, nitrogen exportation also shows satisfactory performance (NSE: 0.64 and 0.69 during the calibration and validation periods, respectively). While the lack of field measurements does not allow in-depth evaluation of the nutrient cycling processes, it was observed that the MOHID model simulates annual denitrification within the general ranges established for agricultural watersheds (in this study, 9 kg N ha-1 year-1). In addition, the model coherently simulated the spatial distribution of the denitrification process, which is directly linked to the simulated hydrological conditions: the highest rates were localized near the discharge zone of the aquifer and where the aquifer thickness is low. These results demonstrate the strength of this model for simulating watershed-scale hydrological processes as well as crop production and the water quality degradation derived from agricultural activity (considering both nutrient exportation and nutrient cycling processes).
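
The NSE values quoted above are Nash-Sutcliffe efficiencies, which compare model error against the variance of the observations. A minimal sketch with synthetic series (1.0 is a perfect fit; values at or below 0 indicate no skill over simply predicting the observed mean):

```python
# Nash-Sutcliffe efficiency (NSE), the skill score quoted for the MOHID
# discharge and nitrogen simulations. The series below are synthetic.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

obs = [2.0, 3.5, 5.0, 4.0, 2.5, 2.0]   # e.g. observed discharge, m3/s
sim = [2.2, 3.0, 4.6, 4.2, 2.8, 1.9]   # simulated discharge
score = nse(obs, sim)
```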

  10. Validation, Edits, and Application Processing System Report: Phase I.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    Findings of phase 1 of a study of the 1979-1980 Basic Educational Opportunity Grants validation, edits, and application processing system are presented. The study was designed to: assess the impact of the validation effort and processing system edits on the correct award of Basic Grants; and assess the characteristics of students most likely to…

  11. Validating workplace performance assessments in health sciences students: a case study from speech pathology.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy

    2013-01-01

    Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately for certifying students as fit to enter professional practice. Current practice in performance assessment in the health sciences has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and the learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, workplace-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.

  12. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    NASA Astrophysics Data System (ADS)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not previously been undertaken. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010), to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicate significant errors in the translation of exploratory factor analysis best practices into current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education research community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and should carefully review the available evidence. Finally, editors and reviewers are encouraged to recognize this gap in best practices and to promote instrument development research that is more consistent through the peer-review process.

  13. Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.

    PubMed

    Martin, Guillaume

    2014-05-01

    Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results have implications for the validity of Fisher's model's assumptions and for which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.

  14. Development of a patient-administered self-assessment tool (SATp) for follow-up of colorectal cancer patients in general practice.

    PubMed

    Ngune, Irene; Jiwa, Moyez; McManus, Alexandra; Hughes, Jeff; Parsons, Richard; Hodder, Rupert; Entriken, Fiona

    2014-01-01

    Treatment for colorectal cancer (CRC) may result in physical, social, and psychological needs that affect patients' quality of life post-treatment. A comprehensive assessment should be conducted to identify these needs in CRC patients post-treatment; however, there is a lack of tools and processes available in general practice. This study aimed to develop a patient-completed needs screening tool that identifies potentially unmet physical, psychological, and social needs in CRC and facilitates consultation with a general practitioner (GP) to address these needs. The development of the self-assessment tool for patients (SATp) included a review of the literature; face and content validation with reference to an expert panel; psychometric testing, including readability, internal consistency, and test-retest reliability; and usability testing in clinical practice. The SATp contains 25 questions. The tool showed internal consistency (Cronbach's alpha 0.70-0.97), readability (reading ease 82.5%), and test-retest reliability (kappa 0.689-1.000). A total of 66 patients piloted the SATp. Participants were on average 69.2 (SD 9.9) years old and had a median follow-up period of 26.7 months. The SATp identified a total of 547 needs (median 7 needs per patient; IQR 3-12.25). Needs were categorised into social (175 [32%]), psychological (175 [32%]), and physical (197 [36%]) domains. The SATp is a reliable self-assessment tool useful for identifying CRC patient needs. Further testing of this tool for validity and usability is underway.
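    Internal-consistency figures like the Cronbach's alpha range quoted above come from a simple variance decomposition across items: alpha rises as item scores covary. A minimal sketch, assuming a person-by-item score matrix (the data below are invented for illustration, not from the SATp study):

    ```python
    from statistics import variance

    def cronbach_alpha(scores):
        """Cronbach's alpha for a list of per-person item-score lists."""
        k = len(scores[0])                      # number of items
        items = list(zip(*scores))              # one column per item
        item_vars = sum(variance(col) for col in items)
        total_var = variance([sum(row) for row in scores])
        return k / (k - 1) * (1 - item_vars / total_var)

    # Two perfectly correlated items give the maximum alpha of 1.0
    print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
    ```

    Values around 0.7 or above, as reported for the SATp, are conventionally taken to indicate acceptable internal consistency.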

  15. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  16. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  17. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  18. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  19. 20 CFR 404.346 - Your relationship as wife, husband, widow, or widower based upon a deemed valid marriage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... widower based upon a deemed valid marriage. 404.346 Section 404.346 Employees' Benefits SOCIAL SECURITY... relationship as wife, husband, widow, or widower based upon a deemed valid marriage. (a) General. If your... explained in § 404.345, you may be eligible for benefits based upon a deemed valid marriage. You will be...

  20. The Reliability and Validity of the Social Responsiveness Scale in a UK General Child Population

    ERIC Educational Resources Information Center

    Wigham, Sarah; McConachie, Helen; Tandos, Jonathan; Le Couteur, Ann S.

    2012-01-01

    This is the first UK study to report the reliability, validity, and factor structure of the Social Responsiveness Scale (SRS) in a general population sample. Parents of 500 children (aged 5-8 years) in North East England completed the SRS. Profiles of scores were similar to USA norms, and a single factor structure was identified. Good construct…

  1. Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?

    PubMed

    Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J

    2014-02-01

    Virtual reality simulation for laparoscopic colectomy has been used for the training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups on 21 metrics (14 procedural; 7 intraoperative errors) to determine which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 considered significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01), favoring experts, were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. In summary, the simulator demonstrated construct validity for 8 procedure-specific metrics, but its intraoperative error metrics did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
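    The Mann-Whitney U statistic used in the comparison above counts, across all expert/novice pairs, how often a score from one group exceeds a score from the other. A minimal pure-Python sketch of that pairwise-counting definition (the scores are invented, not taken from the study):

    ```python
    def mann_whitney_u(group_a, group_b):
        """U for group_a: number of (a, b) pairs where a > b, ties counting 0.5."""
        u = 0.0
        for a in group_a:
            for b in group_b:
                if a > b:
                    u += 1.0
                elif a == b:
                    u += 0.5
        return u

    # Hypothetical metric scores: a completely separated pair of groups
    experts = [1, 2, 3]
    novices = [4, 5, 6]
    print(mann_whitney_u(experts, novices))  # → 0.0
    print(mann_whitney_u(novices, experts))  # → 9.0
    ```

    The two U values always sum to n1 × n2 (here 3 × 3 = 9); significance is then read from tables of the U distribution or a normal approximation, which is what a statistics package does internally.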

  2. Validation of the F-18 high alpha research vehicle flight control and avionics systems modifications

    NASA Technical Reports Server (NTRS)

    Chacon, Vince; Pahle, Joseph W.; Regenie, Victoria A.

    1990-01-01

    The verification and validation process is a critical portion of the development of a flight system. Verification, the steps taken to assure that the system meets the design specification, has become a reasonably understood and straightforward process. Validation is the method used to ensure that the system design meets the needs of the project. As systems become more integrated and more critical in their functions, the validation process becomes more complex and important. The tests, tools, and techniques used for the validation of the high alpha research vehicle (HARV) turning vane control system (TVCS) are discussed, and the problems and their solutions are documented. The emphasis of this paper is on the validation of integrated systems.

  4. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  5. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  6. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  7. Validity of the Microcomputer Evaluation Screening and Assessment Aptitude Scores.

    ERIC Educational Resources Information Center

    Janikowski, Timothy P.; And Others

    1991-01-01

    Examined validity of Microcomputer Evaluation Screening and Assessment (MESA) aptitude scores relative to General Aptitude Test Battery (GATB) using multitrait-multimethod correlational analyses. Findings from 54 rehabilitation clients and 29 displaced workers revealed no evidence to support the construct validity of the MESA. (Author/NB)

  8. Longitudinal Models of Reliability and Validity: A Latent Curve Approach.

    ERIC Educational Resources Information Center

    Tisak, John; Tisak, Marie S.

    1996-01-01

    Dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis, are discussed. A latent curve model formulated to depict change is incorporated into the classical definitions of reliability and validity. The approach is illustrated with sociological and psychological…

  9. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  10. Development and Validation of an Instrument to Evaluate Perceived Wellbeing Associated with the Ingestion of Water: The Water Ingestion-Related Wellbeing Instrument (WIRWI)

    PubMed Central

    Espinosa-Montero, Juan; Monterrubio-Flores, Eric A.; Sanchez-Estrada, Marcela; Buendia-Jimenez, Inmaculada; Lieberman, Harris R.; Allaert, François-Andre; Barquera, Simon

    2016-01-01

    Background Ingestion of water has been associated with general wellbeing. When water intake is insufficient, symptoms such as thirst, fatigue, and impaired memory result. Currently there are no instruments to assess water consumption associated with wellbeing. The objective of our study was to develop and validate such an instrument in an urban, low-socioeconomic adult Mexican population. Methods To construct the Water Ingestion-Related Wellbeing Instrument (WIRWI), a qualitative study was conducted in which wellbeing related to everyday practices and experiences of water consumption was investigated. To validate the WIRWI, a formal five-step procedure was used. Face and content validation were addressed; consistency was assessed by exploratory and confirmatory psychometric factor analyses; and repeatability, reproducibility, and concurrent validity were assessed by conducting correlation tests with other measures of wellbeing, such as a quality-of-life instrument (the SF-36), and with objective parameters such as urine osmolality, 24-hour urine total volume, and others. Results The final WIRWI is composed of 17 items assessing physical and mental dimensions. Items were selected based on their content and face validity. Exploratory and confirmatory factor analyses yielded Cronbach's alphas of 0.87 and 0.86, respectively. The final confirmatory factor analysis demonstrated that the model estimates were satisfactory for the constructs. Statistically significant correlations with the SF-36, total liquid consumption, and plain water consumption were observed. Conclusion The resulting WIRWI is a reliable tool for assessing wellbeing associated with consumption of plain water in Mexican adults and could be useful for similar groups. PMID:27388902

  11. A Self-Validation Method for High-Temperature Thermocouples Under Oxidizing Atmospheres

    NASA Astrophysics Data System (ADS)

    Mokdad, S.; Failleau, G.; Deuzé, T.; Briaudeau, S.; Kozlova, O.; Sadli, M.

    2015-08-01

    Thermocouples are prone to significant drift in use, particularly when they are exposed to high temperatures. Indeed, high-temperature exposure can progressively affect the response of a thermocouple by changing the structure of the thermoelements and inducing inhomogeneities. Moreover, an oxidizing atmosphere contributes to thermocouple drift by changing the chemical nature of the metallic wires through oxidation. In general, severe uncontrolled drift of thermocouples results from these combined influences. A periodic recalibration of the thermocouple can be performed, but sometimes it is not possible to remove the sensor from the process. Self-validation methods for thermocouples provide a solution to this drawback, but there are currently no high-temperature contact thermometers with self-validation capability at temperatures up to . LNE-Cnam has developed fixed-point devices integrated into the thermocouples, consisting of machined alumina-based devices for operation under oxidizing atmospheres. These devices require small amounts of pure metals (typically less than 2 g). They are suitable for self-validation of high-temperature thermocouples up to . In this paper, the construction and the characterization of these integrated fixed-point devices are described. The phase-transition plateaus of gold, nickel, and palladium, which enable coverage of the temperature range between and , are assessed with this self-validation technique. Results of measurements performed at LNE-Cnam with the integrated self-validation module at several temperature levels are presented. The performance of the devices is assessed and discussed in terms of robustness and metrological characteristics. Uncertainty budgets are also proposed and detailed.

  12. Development and Validation of an Instrument to Evaluate Perceived Wellbeing Associated with the Ingestion of Water: The Water Ingestion-Related Wellbeing Instrument (WIRWI).

    PubMed

    Espinosa-Montero, Juan; Monterrubio-Flores, Eric A; Sanchez-Estrada, Marcela; Buendia-Jimenez, Inmaculada; Lieberman, Harris R; Allaert, François-Andre; Barquera, Simon

    2016-01-01

    Ingestion of water has been associated with general wellbeing. When water intake is insufficient, symptoms such as thirst, fatigue, and impaired memory result. Currently there are no instruments to assess water consumption associated with wellbeing. The objective of our study was to develop and validate such an instrument in an urban, low-socioeconomic adult Mexican population. To construct the Water Ingestion-Related Wellbeing Instrument (WIRWI), a qualitative study was conducted in which wellbeing related to everyday practices and experiences of water consumption was investigated. To validate the WIRWI, a formal five-step procedure was used. Face and content validation were addressed; consistency was assessed by exploratory and confirmatory psychometric factor analyses; and repeatability, reproducibility, and concurrent validity were assessed by conducting correlation tests with other measures of wellbeing, such as a quality-of-life instrument (the SF-36), and with objective parameters such as urine osmolality, 24-hour urine total volume, and others. The final WIRWI is composed of 17 items assessing physical and mental dimensions. Items were selected based on their content and face validity. Exploratory and confirmatory factor analyses yielded Cronbach's alphas of 0.87 and 0.86, respectively. The final confirmatory factor analysis demonstrated that the model estimates were satisfactory for the constructs. Statistically significant correlations with the SF-36, total liquid consumption, and plain water consumption were observed. The resulting WIRWI is a reliable tool for assessing wellbeing associated with consumption of plain water in Mexican adults and could be useful for similar groups.

  13. The Validity of Peer Review in a General Medicine Journal

    PubMed Central

    Jackson, Jeffrey L.; Srinivasan, Malathi; Rea, Joanna; Fletcher, Kathlyn E.; Kravitz, Richard L.

    2011-01-01

    All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs. Background Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal. Methods Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles, and calculated an impact factor for published articles (Rw) by dividing the 3-year citation rate by the average for this group of papers; an Rw > 1 indicates a greater-than-average impact. Results Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 with review), and 48 were either not resubmitted after a revision was requested or were withdrawn by the author. Of the 331 rejections, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. There was no quality-rating cutpoint that accurately distinguished high- from low-impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review, or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p < 0.0005). However, there was low agreement between reviewers on quality ratings and publication recommendations. The editorial publication decision accurately discriminated high- and low-impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers. Conclusions The peer review process largely succeeds in selecting high-impact articles and dispatching lower-impact ones, but the process is far from perfect. While the inter-rater reliability between individual reviewers is low, the accuracy of sorting improves with a greater number of reviewers. PMID:21799867

  14. Validation study and routine control monitoring of moist heat sterilization procedures.

    PubMed

    Shintani, Hideharu

    2012-06-01

    The proposed approach to validation of steam sterilization in autoclaves follows the basic life-cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles that carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product, and of components and parts that come into direct contact with sterilized product, is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between that definition and the sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ, and PQ is described in concrete terms.
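    Moist-heat cycle qualification of this kind is commonly summarized by the accumulated lethality F0, which converts a measured temperature profile into equivalent minutes at the 121.1 °C reference, assuming a z-value of 10 °C. A minimal sketch of that standard calculation (the temperature readings below are invented, not from the paper):

    ```python
    def f0_lethality(temps_c, interval_min=1.0, t_ref=121.1, z=10.0):
        """Accumulated lethality F0: equivalent minutes at t_ref for a profile
        of temperature readings taken at fixed intervals."""
        return sum(interval_min * 10 ** ((t - t_ref) / z) for t in temps_c)

    # A flat 15-minute hold at the 121.1 °C reference delivers F0 = 15
    print(f0_lethality([121.1] * 15))  # → 15.0
    ```

    Because of the exponential term, each 10 °C above the reference multiplies the lethal rate tenfold, which is why short excursions during come-up and cool-down can contribute meaningfully to the delivered F0.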

  15. Using eye tracking to identify faking attempts during penile plethysmography assessment.

    PubMed

    Trottier, Dominique; Rouleau, Joanne-Lucine; Renaud, Patrice; Goyette, Mathieu

    2014-01-01

    Penile plethysmography (PPG) is considered the most rigorous method for sexual interest assessment. Nevertheless, it is subject to faking attempts by participants, which compromises the internal validity of the instrument. To date, various attempts have been made to limit voluntary control of sexual response during PPG assessments, without satisfactory results. This exploratory research examined eye-tracking technologies' ability to identify the presence of cognitive strategies responsible for erectile inhibition during PPG assessment. Eye movements and penile responses for 20 subjects were recorded while exploring animated human-like computer-generated stimuli in a virtual environment under three distinct viewing conditions: (a) the free visual exploration of a preferred sexual stimulus without erectile inhibition; (b) the viewing of a preferred sexual stimulus with erectile inhibition; and (c) the free visual exploration of a non-preferred sexual stimulus. Results suggest that attempts to control erectile responses generate specific eye-movement variations, characterized by a general deceleration of the exploration process and limited exploration of the erogenous zone. Findings indicate that recording eye movements can provide significant information on the presence of competing covert processes responsible for erectile inhibition. The use of eye-tracking technologies during PPG could therefore lead to improved internal validity of the plethysmographic procedure.

  16. Development and initial validation of a computer-administered health literacy assessment in Spanish and English: FLIGHT/VIDAS.

    PubMed

    Ownby, Raymond L; Acevedo, Amarilis; Waldrop-Valverde, Drenna; Jacobs, Robin J; Caballero, Joshua; Davenport, Rosemary; Homs, Ana-Maria; Czaja, Sara J; Loewenstein, David

    2013-01-01

    Current measures of health literacy have been criticized on a number of grounds, including use of a limited range of content, development on small and atypical patient groups, and poor psychometric characteristics. In this paper, we report the development and preliminary validation of a new computer-administered and -scored health literacy measure addressing these limitations. Items in the measure reflect a wide range of content related to health promotion and maintenance as well as care for diseases. The development process has focused on creating a measure that will be useful in both Spanish and English, while not requiring substantial time for clinician training and individual administration and scoring. The items incorporate several formats, including questions based on brief videos, which allow for the assessment of listening comprehension and the skills related to obtaining information on the Internet. In this paper, we report the interim analyses detailing the initial development and pilot testing of the items (phase 1 of the project) in groups of Spanish and English speakers. We then describe phase 2, which included a second round of testing of the items, in new groups of Spanish and English speakers, and evaluation of the new measure's reliability and validity in relation to other measures. Data are presented that show that four scales (general health literacy, numeracy, conceptual knowledge, and listening comprehension), developed through a process of item and factor analyses, have significant relations to existing measures of health literacy.

  17. Adaptation and evaluation of the measurement properties of the Brazilian version of the Self-efficacy for Appropriate Medication Adherence Scale

    PubMed Central

    Pedrosa, Rafaela Batista dos Santos; Rodrigues, Roberta Cunha Matheus

    2016-01-01

    Objectives: to undertake the cultural adaptation of, and to evaluate the measurement properties of, the Brazilian version of the Self-efficacy for Appropriate Medication Adherence Scale in coronary heart disease (CHD) patients under outpatient monitoring at a teaching hospital. Method: the cultural adaptation was undertaken in accordance with the international literature. The data were obtained from 147 CHD patients through the application of a sociodemographic/clinical characterization instrument and of the Brazilian versions of the Morisky Self-Reported Measure of Medication Adherence Scale, the General Perceived Self-Efficacy Scale, and the Self-efficacy for Appropriate Medication Adherence Scale. Results: the Brazilian version of the Self-efficacy for Appropriate Medication Adherence Scale showed evidence of semantic-idiomatic, conceptual and cultural equivalence, with high acceptability and practicality. A floor effect was evidenced for the total score and for the domains of the scale. The findings supported the measure's reliability. The domains of the Brazilian version presented significant inverse correlations of moderate to strong magnitude with the scores of the Morisky scale, indicating convergent validity, although correlations with the measure of general self-efficacy were not evidenced. Known-groups validity was supported, as the scale discriminated between "adherents" and "non-adherents" to the medications, as well as between "sufficient dose" and "insufficient dose" groups. Conclusion: the Brazilian version of the Self-efficacy for Appropriate Medication Adherence Scale presented evidence of reliability and validity in coronary heart disease outpatients. PMID:27192417
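
    The two validity checks this abstract reports, convergent validity (correlation with an established measure) and known-groups validity (discrimination between groups expected to differ), are commonly operationalized with nonparametric statistics. A minimal sketch, with illustrative data rather than the study's dataset:

```python
from scipy import stats

def convergent_validity(scale_a, scale_b):
    """Spearman rank correlation between two scale totals.

    A moderate-to-strong correlation (inverse here, as reported between
    the SEAMS and Morisky scores) supports convergent validity."""
    rho, p = stats.spearmanr(scale_a, scale_b)
    return rho, p

def known_groups_validity(group1_scores, group2_scores):
    """Mann-Whitney U test: does the scale discriminate between known
    groups (e.g. medication 'adherents' vs 'non-adherents')?"""
    u, p = stats.mannwhitneyu(group1_scores, group2_scores,
                              alternative="two-sided")
    return u, p
```

    Spearman's rho and the Mann-Whitney test are standard choices when scale totals are ordinal or show floor effects, as reported for this instrument.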

  18. Predictive validity of callous-unemotional traits measured in early adolescence with respect to multiple antisocial outcomes.

    PubMed

    McMahon, Robert J; Witkiewitz, Katie; Kotler, Julie S

    2010-11-01

    This study investigated the predictive validity of youth callous-unemotional (CU) traits, as measured in early adolescence (Grade 7) by the Antisocial Process Screening Device (APSD; Frick & Hare, 2001), in a longitudinal sample (N = 754). Antisocial outcomes, assessed in adolescence and early adulthood, included self-reported general delinquency from 7th grade through 2 years post-high school, self-reported serious crimes through 2 years post-high school, juvenile and adult arrest records through 1 year post-high school, and antisocial personality disorder symptoms and diagnosis at 2 years post-high school. CU traits measured in 7th grade were highly predictive of 5 of the 6 antisocial outcomes (general delinquency, juvenile and adult arrests, and early adult antisocial personality disorder criterion count and diagnosis), over and above prior and concurrent conduct problem behavior (i.e., criterion counts of oppositional defiant disorder and conduct disorder) and attention-deficit/hyperactivity disorder (criterion count). Incorporating a CU traits specifier for those with a diagnosis of conduct disorder improved the positive prediction of antisocial outcomes, with a very low false-positive rate. There was minimal evidence of moderation by sex, race, or urban/rural status; the one exception was that urban residence was associated with stronger relations between CU traits and adult arrests. These findings clearly support the inclusion of CU traits as a specifier for the diagnosis of conduct disorder, at least with respect to predictive validity.
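
    Prediction "over and above" baseline covariates is the logic of incremental validity: fit a model with the covariates alone, then add the new predictor and check whether discrimination improves. A hedged sketch of that comparison (variable names are illustrative and not from this study, which used its own statistical models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def incremental_auc(baseline_X, new_predictor, y):
    """Compare in-sample AUCs of a baseline-covariates model and a model
    that also includes the candidate predictor (e.g. a CU-traits score).

    baseline_X    : (n, p) array of baseline covariates
    new_predictor : (n,) array, the predictor being tested
    y             : (n,) binary outcome (e.g. arrested vs not)
    """
    base = LogisticRegression().fit(baseline_X, y)
    auc_base = roc_auc_score(y, base.predict_proba(baseline_X)[:, 1])

    full_X = np.column_stack([baseline_X, new_predictor])
    full = LogisticRegression().fit(full_X, y)
    auc_full = roc_auc_score(y, full.predict_proba(full_X)[:, 1])
    return auc_base, auc_full
```

    A meaningful gap between the two AUCs (ideally confirmed on held-out data or with a likelihood-ratio test) is the kind of evidence "over and above" claims rest on.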

  19. Psychological distress screening in cancer patients: psychometric properties of tools available in Italy.

    PubMed

    Muzzatti, Barbara; Annunziata, Maria Antonietta

    2012-01-01

    The main national and international organizations recommend continuous monitoring of psychological distress in cancer patients throughout the disease trajectory. The reasons for this concern are the high prevalence of psychological distress in cancer patients and its association with a worse quality of life, poor adherence to treatment, and greater assistance needs. Most screening tools for psychological distress were developed in English-speaking countries; to be fit for use in different cultural contexts (such as the Italian one), they need to undergo accurate translation and specific validation. In the present work we summarize the validation studies, conducted for the Italian context, of the psychological distress screening tools most widely employed internationally, with the aim of helping clinicians choose an adequate instrument. With knowledge of the properties of the corresponding Italian versions, researchers will be better able to identify the instruments that deserve further investigation. We carried out a systematic review of the literature. Results: twenty-nine studies of eight different instruments (five relating to psychological distress, three to its depressive component) were identified. Ten of these studies involved cancer patients and 19 referred to the general population or to non-cancer, non-psychiatric subjects. For seven of the eight tools, data on concurrent and discriminant validity were available; for five instruments, data on criterion validity; for four, data on construct validity; and for one tool, data on divergent and cross-cultural validity. For six of the eight tools the literature provided data on reliability (mostly internal consistency). Since none of the eight instruments for which we found validation studies relative to the Italian context has undergone a complete and systematic validation process, their use in the clinical context must be cautious. Italian researchers should be proactive and make a valid and reliable screening tool available for Italian patients.

  20. The Role of General Anaesthesia in Special Care & Paediatric Dentistry; Inclusion Criteria and Clinical Indications.

    PubMed

    Dziedzic, Arkadiusz

    Dental practitioners dealing with children and individuals with special needs can be supported by the provision of general anaesthesia for the most challenging patients in situations where other options are insufficient. The availability of general anaesthesia furthers the aim of extending access to the widest range of dental care to the greatest number of patients, regardless of disability, age or phobia. The objective is to ensure that patients have a pain-free and healthy mouth and receive any necessary treatment in the setting most appropriate to their specific needs. A strictly individual and holistic approach is required when weighing the risks and benefits of proceeding with general anaesthesia for the delivery of dental treatment, particularly for children and individuals with special needs. It is vitally important to consider and address all relevant factors specific to this group of patients, including assessment of capacity, validity of consent, and any specific medical, social and behavioural issues. Other sedation modalities must always be taken into consideration. This article emphasises the crucial decision-making role of dentists in the referral process for dental treatment under general anaesthesia and the need for multidisciplinary co-operation between dental practitioners, community services and hospital services.
