A comprehensive review on the quasi-induced exposure technique.
Jiang, Xinguo; Lyles, Richard W; Guo, Runhua
2014-04-01
The goal is to comprehensively examine the state-of-the-art applications and methodological development of quasi-induced exposure and consequently pinpoint future research directions in terms of implementation guidelines, limitations, and validity tests. The paper conducts a comprehensive review of approximately 45 published papers relevant to quasi-induced exposure regarding four key topics of interest: applications, responsibility assignment, validation of assumptions, and methodological development. Specific findings include that: (1) there is no systematic data screening procedure in place, and how the eliminated crash data impact the responsibility assignment is generally unknown; (2) there is a lack of effort to assess the validity of the assumptions prior to application, and validation efforts are mostly restricted to aggregated levels due to the limited availability of true exposure data; and (3) there is a deficiency of quantitative analyses evaluating the magnitude and direction of bias resulting from injury risks and crash avoidance ability. The paper points out future research directions and insights in terms of validity tests and implementation guidelines. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H
2018-07-01
Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.
The Spiritual Dimensions of Psychopolitical Validity: The Case of the Clergy Sexual Abuse Crisis
ERIC Educational Resources Information Center
Jones, Diana L.; Dokecki, Paul R.
2008-01-01
In this article, the authors explore the spiritual dimensions of psychopolitical validity and use it as a lens to analyze clergy sexual abuse. The psychopolitical approach suggests a comprehensive human science methodology that invites exploration of phenomena such as spirituality and religious experience and the use of methods from a wide variety…
Determination of Trace Elements in Uranium by HPLC-ID-ICP-MS: NTNFC Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manard, Benjamin Thomas; Wylie, Ernest Miller II; Xu, Ning
This report covers the FY16 effort on the HPLC-ID-ICP-MS methodology: (1) sub-method validation for the Group I and II elements, (2) sub-method stand-up and validation for the rare earth elements (REEs), (3) sub-method development for the transition elements, and (4) completion of a comprehensive SOP covering the three families of elements.
ERIC Educational Resources Information Center
Qian, Gaoyin
Some methodological issues in the study of levels of knowledge are reviewed, and needs for further research are explored, drawing on an analysis of 12 studies reported since the late 1970s. In the 12 studies, 16 quantitative experiments were conducted. These were assessed for internal and external validity. Analysis revealed some shortcomings in…
Validating agent oriented methodology (AOM) for netlogo modelling and simulation
NASA Astrophysics Data System (ADS)
WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan
2017-10-01
AOM (Agent Oriented Modeling) is a comprehensive and unified methodology for agent-oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation and guidelines for agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are few case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. It demonstrates how AOM is useful for epidemiology and ecology studies, and hence further validates AOM in a qualitative manner.
MVP-CA Methodology for the Expert System Advocate's Advisor (ESAA)
DOT National Transportation Integrated Search
1997-11-01
The Multi-Viewpoint Clustering Analysis (MVP-CA) tool is a semi-automated tool to provide a valuable aid for comprehension, verification, validation, maintenance, integration, and evolution of complex knowledge-based software systems. In this report,...
A Comprehensive Validation Approach Using The RAVEN Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J
2015-06-01
The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data; the RAVEN code addresses both of these gaps. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral-effect experiment representing natural circulation, based on activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
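The two ingredients highlighted in this abstract, sampled exploration of the uncertain input space and multiple comparison metrics, can be illustrated outside RAVEN. A minimal sketch with a toy stand-in for the system code and invented input uncertainties; nothing below is RAVEN's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_code(power, flow):
    """Toy stand-in for an expensive system-code run (e.g., a thermal-hydraulic
    calculation): returns a scalar figure of merit such as peak temperature."""
    return 300.0 + 0.8 * power / flow + rng.normal(0.0, 2.0)

# Monte Carlo exploration of the uncertain input space (assumed distributions)
powers = rng.normal(100.0, 5.0, 1000)   # kW
flows = rng.normal(10.0, 0.5, 1000)     # kg/s
predictions = np.array([system_code(p, f) for p, f in zip(powers, flows)])

experiment = 309.0  # a single measured value from the integral-effect test

# Two simple comparison metrics over the sampled responses
z_score = (experiment - predictions.mean()) / predictions.std(ddof=1)
coverage = np.mean(np.abs(predictions - experiment) <= 2.0 * predictions.std(ddof=1))
print(f"mean prediction {predictions.mean():.1f} K, z = {z_score:.2f}, "
      f"samples within 2 sigma of the data: {coverage:.0%}")
```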
Training effectiveness assessment: Methodological problems and issues
NASA Technical Reports Server (NTRS)
Cross, Kenneth D.
1992-01-01
The U.S. military uses a large number of simulators to train and sustain the flying skills of helicopter pilots. Despite the enormous resources required to purchase, maintain, and use those simulators, little effort has been expended in assessing their training effectiveness. One reason for this is the lack of an evaluation methodology that yields comprehensive and valid data at a practical cost. Some of the methodological problems and issues that arise in assessing simulator training effectiveness, as well as problems with the classical transfer-of-learning paradigm, are discussed.
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
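The two metrics named in this abstract can be pictured as follows, assuming relative uncertainty is defined as |model - experiment| / experiment (an assumption; the paper's exact definitions may differ) and using synthetic stand-ins for the cross-section database.

```python
import numpy as np

rng = np.random.default_rng(1)
experimental = rng.uniform(10.0, 200.0, 500)        # synthetic cross sections (mb)
model = experimental * rng.normal(1.0, 0.15, 500)   # model with ~15% scatter

rel_err = np.abs(model - experimental) / experimental

# Cumulative metric: fraction of comparisons with relative uncertainty below x
x = 0.25
cumulative = np.mean(rel_err <= x)

# Median metric: a robust single-number summary, usable on small parameter subsets
median_err = np.median(rel_err)

print(f"{cumulative:.0%} of points within {x:.0%}; median relative error {median_err:.1%}")
```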
On Selecting Commercial Information Systems
Möhr, J.R.; Sawinski, R.; Kluge, A.; Alle, W.
1984-01-01
As more commercial information systems become available, the methodology for their selection gains importance. An instance is described in which the method employed for the selection of laboratory information systems was multilevel assessment. The method is described, and the experience gained in the project is summarized and discussed. Evidence is provided that the employed method is comprehensive, reproducible, valid, and economical.
Vending machine assessment methodology. A systematic review.
Matthews, Melissa A; Horacek, Tanya M
2015-07-01
The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met the inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or the vending environment. Product healthfulness criteria and complexity for snack and beverage products were also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
Validation of sterilizing grade filtration.
Jornitz, M W; Meltzer, T H
2003-01-01
Validation considerations for sterilizing grade filters, namely 0.2 micron, changed when the FDA voiced concerns about the validity of bacterial challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially bacterial challenge testing with the actual drug product under process conditions. To do so, viability testing is necessary to determine the bacterial challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption, and chemical compatibility tests should be considered. PDA Technical Report #26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for the validation of liquid filters used in the biopharmaceutical industry and sets the cornerstones of validation requirements and other filtration considerations.
Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K
2015-01-01
Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (eg, variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708
Development of the Comprehensive Cervical Dystonia Rating Scale: Methodology
Comella, Cynthia L.; Fox, Susan H.; Bhatia, Kailash P.; Perlmutter, Joel S.; Jinnah, Hyder A.; Zurowski, Mateusz; McDonald, William M.; Marsh, Laura; Rosen, Ami R.; Waliczek, Tracy; Wright, Laura J.; Galpern, Wendy R.; Stebbins, Glenn T.
2016-01-01
We present the methodology utilized for development and clinimetric testing of the Comprehensive Cervical Dystonia (CD) Rating scale, or CCDRS. The CCDRS includes a revision of the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS-2), a newly developed psychiatric screening tool (TWSTRS-PSYCH), and the previously validated Cervical Dystonia Impact Profile (CDIP-58). For the revision of the TWSTRS, the original TWSTRS was examined by a committee of dystonia experts at a dystonia rating scales workshop organized by the Dystonia Medical Research Foundation. During this workshop, deficiencies in the standard TWSTRS were identified and recommendations for revision of the severity and pain subscales were incorporated into the TWSTRS-2. Given that no scale currently evaluates the psychiatric features of cervical dystonia (CD), we used a modified Delphi methodology and a reiterative process of item selection to develop the TWSTRS-PSYCH. We also included the CDIP-58 to capture the impact of CD on quality of life. The three scales (TWSTRS2, TWSTRS-PSYCH, and CDIP-58) were combined to construct the CCDRS. Clinimetric testing of reliability and validity of the CCDRS are described. The CCDRS was designed to be used in a modular fashion that can measure the full spectrum of CD. This scale will provide rigorous assessment for studies of natural history as well as novel symptom-based or disease-modifying therapies. PMID:27088112
NASA Technical Reports Server (NTRS)
Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina
2004-01-01
A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.
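Aggregating several experts' distributions into one is often done with a weighted mixture (a linear opinion pool), with weights informed by calibration performance. A minimal sketch with hypothetical judgments and weights; the report's actual ten-phase procedure and calibration scheme are not reproduced here.

```python
import numpy as np

# Each expert gives a distribution for an uncertain design input (here: normal
# mean/std for engine specific impulse); weights would come from calibration scores.
experts = [(452.0, 4.0), (448.0, 6.0), (455.0, 3.0)]   # hypothetical judgments (s)
weights = np.array([0.5, 0.2, 0.3])                    # hypothetical calibration weights

rng = np.random.default_rng(2)
n = 100_000
choice = rng.choice(len(experts), size=n, p=weights)   # sample the mixture
means = np.array([m for m, s in experts])
stds = np.array([s for m, s in experts])
samples = rng.normal(means[choice], stds[choice])

print(f"pooled mean {samples.mean():.1f} s, pooled std {samples.std(ddof=1):.1f} s")
```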
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2014-01-01
Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
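Of the POD methodologies named above, logistic regression on hit/miss data is the most common and is easy to sketch: POD is modeled as a logistic function of log flaw size and fit by maximum likelihood. A minimal numpy sketch with synthetic inspections; the confidence bounds that credible POD estimates require (e.g., a90/95) are omitted.

```python
import numpy as np

# Hit/miss POD from a logistic model in log flaw size:
# POD(a) = 1 / (1 + exp(-(b0 + b1 * ln a)))
rng = np.random.default_rng(3)
a = rng.uniform(0.5, 5.0, 200)                    # flaw sizes (mm), synthetic
true_pod = 1.0 / (1.0 + np.exp(-(-2.0 + 3.0 * np.log(a))))
hit = rng.uniform(size=a.size) < true_pod         # 1 = detected, 0 = missed

X = np.column_stack([np.ones_like(a), np.log(a)])
beta = np.zeros(2)
for _ in range(25):                               # Newton-Raphson for the MLE
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (hit - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

a90 = np.exp((np.log(0.9 / 0.1) - beta[0]) / beta[1])   # size with POD = 90%
print(f"fitted b0={beta[0]:.2f}, b1={beta[1]:.2f}, a90 ≈ {a90:.2f} mm")
```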
Assessing validity of observational intervention studies - the Benchmarking Controlled Trials.
Malmivaara, Antti
2016-09-01
Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. The aims were to create and pilot test a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies.
Kilimnik, Chelsea D; Pulverman, Carey S; Meston, Cindy M
2018-04-01
Childhood sexual abuse (CSA) has been a topic of interest in sexual health research for decades, yet literature on the sexual health correlates of CSA has been hindered by methodologic inconsistencies that have resulted in discrepant samples and mixed results. To review the major methodologic inconsistencies in the field, explore the scientific and clinical impact of these inconsistencies, and propose methodologic approaches to increase consistency and generalizability to the general population of women with CSA histories. A comprehensive literature review was conducted to assess the methodologic practices used in examining CSA and sexual health outcomes. Methodologic decisions of researchers examining sexual health outcomes of CSA. There are a number of inconsistencies in the methods used to examine CSA in sexual health research across the domains of CSA operationalization, recruitment language, and measurement approaches to CSA experiences. The examination of CSA and sexual health correlates is an important research endeavor that needs rigorous methodologic approaches. We propose recommendations to increase the utility of CSA research in sexual health. We recommend the use of a developmentally informed operationalization of childhood and adolescence, rather than age cutoffs. Researchers are encouraged to use a broad operationalization of sexual abuse such that different abuse characteristics can be measured, reported, and examined in the role of sexual health outcomes. We recommend inclusive recruitment approaches to capture the full range of CSA experiences and transparency in reporting these methods. The field also could benefit from the validation of existing self-report instruments for assessing CSA and detailed reporting of the instruments used in research studies. The use of more consistent research practices could improve the state of knowledge on the relation between CSA and sexual health. Kilimnik CD, Pulverman CS, Meston CM. Methodologic Considerations for the Study of Childhood Sexual Abuse in Sexual Health Outcome Research: A Comprehensive Review. Sex Med Rev 2018;6:176-187. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Delavigne, V; Carretier, J; Brusco, Sylvie; Leichtnam-Dugarin, L; Déchelette, M; Philip, Thierry
2007-04-01
In response to the evolution of the information-seeking behaviour of patients and to concerns from health professionals regarding cancer patient information, the French National Federation of Comprehensive Cancer Centres (FNCLCC) introduced, in 1998, an information and education program dedicated to patients and relatives: the SOR SAVOIR PATIENT program. The Lexonco project is a dictionary of oncology adapted for patients and relatives and validated by medical experts and cancer patients. This paper describes the methodological aspects, which take into account patients' and experts' perspectives, of producing the definitions.
Pediatric Cancer Survivorship Research: Experience of the Childhood Cancer Survivor Study
Leisenring, Wendy M.; Mertens, Ann C.; Armstrong, Gregory T.; Stovall, Marilyn A.; Neglia, Joseph P.; Lanctot, Jennifer Q.; Boice, John D.; Whitton, John A.; Yasui, Yutaka
2009-01-01
The Childhood Cancer Survivor Study (CCSS) is a comprehensive multicenter study designed to quantify and better understand the effects of pediatric cancer and its treatment on later health, including behavioral and sociodemographic outcomes. The CCSS investigators have published more than 100 articles in the scientific literature related to the study. As with any large cohort study, high standards for methodologic approaches are imperative for valid and generalizable results. In this article we describe methodological issues of study design, exposure assessment, outcome validation, and statistical analysis. Methods for handling missing data, intrafamily correlation, and competing risks analysis are addressed; each with particular relevance to pediatric cancer survivorship research. Our goal in this article is to provide a resource and reference for other researchers working in the area of long-term cancer survivorship. PMID:19364957
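Of the methods listed in this abstract, competing-risks analysis lends itself to a compact illustration: the cumulative incidence of one event type must account for the occurrence of the others. A minimal nonparametric sketch with synthetic follow-up data; this is not the CCSS analysis code.

```python
import numpy as np

# Nonparametric cumulative incidence with competing risks (Aalen-Johansen idea):
# CIF_k(t) = sum over event times u <= t of S(u-) * d_k(u) / n(u)
def cumulative_incidence(time, cause, k):
    """cause: 0 = censored, 1, 2, ... = event types; returns times and CIF for cause k."""
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    n_at_risk = len(time)
    surv, cif = 1.0, 0.0
    out_t, out_c = [], []
    for t in np.unique(time):
        at_t = time == t
        d_all = np.sum(at_t & (cause > 0))      # events of any cause at t
        d_k = np.sum(at_t & (cause == k))       # events of cause k at t
        cif += surv * d_k / n_at_risk
        surv *= 1.0 - d_all / n_at_risk         # all-cause survival just after t
        n_at_risk -= np.sum(at_t)
        out_t.append(t)
        out_c.append(cif)
    return np.array(out_t), np.array(out_c)

rng = np.random.default_rng(4)
t = rng.exponential(10.0, 300)                          # follow-up times (years)
c = rng.choice([0, 1, 2], 300, p=[0.3, 0.4, 0.3])       # censoring + two competing events
times, cif1 = cumulative_incidence(t, c, k=1)
print(f"estimated 10-year cumulative incidence of event 1: {np.interp(10.0, times, cif1):.2f}")
```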
Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard
2017-04-01
Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, covering January 2000 to May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity together with inter- and intra-rater reliability, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible, with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.
Violent Crime in Post-Civil War Guatemala: Causes and Policy Implications
2015-03-01
on field research and case studies in Honduras, Bolivia, and Argentina. Bailey's Security Trap theory is comprehensive in nature and derived from... research question. The second phase uses empirical data and comparative case studies to validate or challenge selected arguments that potentially... [Figure 2: Sample research methodology]
Mrazek, Michael D.; Phillips, Dawa T.; Franklin, Michael S.; Broadway, James M.; Schooler, Jonathan W.
2013-01-01
Mind-wandering is the focus of extensive investigation, yet until recently there has been no validated scale to directly measure trait levels of task-unrelated thought. Scales commonly used to assess mind-wandering lack face validity, measuring related constructs such as daydreaming or behavioral errors. Here we report four studies validating a Mind-Wandering Questionnaire (MWQ) across college, high school, and middle school samples. The 5-item scale showed high internal consistency, as well as convergent validity with existing measures of mind-wandering and related constructs. Trait levels of mind-wandering, as measured by the MWQ, were correlated with task-unrelated thought measured by thought sampling during a test of reading comprehension. In both middle school and high school samples, mind-wandering during testing was associated with worse reading comprehension. By contrast, elevated trait levels of mind-wandering predicted worse mood, less life-satisfaction, greater stress, and lower self-esteem. By extending the use of thought sampling to measure mind-wandering among adolescents, our findings also validate the use of this methodology with younger populations. Both the MWQ and thought sampling indicate that mind-wandering is a pervasive—and problematic—influence on the performance and well-being of adolescents. PMID:23986739
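The "high internal consistency" reported for the 5-item scale is conventionally quantified with Cronbach's alpha. A minimal sketch with simulated responses; the actual MWQ items and data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(5)
trait = rng.normal(size=(400, 1))                        # latent mind-wandering tendency
responses = trait + rng.normal(0.0, 0.8, size=(400, 5))  # five correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```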
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Farooq, Mohammad U.
1986-01-01
The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
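The "common structure based on common generic elements" that this abstract assigns to interaction events can be pictured as a small data model spanning the three layers. A speculative sketch; the field names and routing logic are illustrative guesses, not the report's notation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class InteractionEvent:
    """Generic elements assumed for a dialogue event: what the user entered,
    how the dialogue manager interpreted it, and what the application returned."""
    raw_input: str                      # interactive layer: command text as typed
    command: str                        # dialogue manager layer: parsed command name
    arguments: Dict[str, str] = field(default_factory=dict)
    response: str = ""                  # application interface layer: result shown

def run_dialogue(line: str, handlers: Dict[str, Callable[[Dict[str, str]], str]]) -> InteractionEvent:
    """Route one user input through the three layers (tokens are name=value pairs)."""
    name, *pairs = line.split()
    args = dict(p.split("=", 1) for p in pairs)
    event = InteractionEvent(raw_input=line, command=name, arguments=args)
    event.response = handlers.get(name, lambda a: "unknown command")(args)
    return event

# Usage: one command-language interaction, logged as a structured event
handlers = {"search": lambda a: f"searching for {a.get('term', '?')}"}
print(run_dialogue("search term=validity", handlers))
```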
González-Melado, F J; Di Ciommo, V M; Di Pietro, M; Chiarini Testa, M B; Cutrera, R
2013-10-01
The purpose of this research was to describe the translation and linguistic validation of the PedsQL™ 4.0 Generic Core Infant Scales Parents Report for Infants (ages 13-24 months) from its original English version into Italian. The linguistic validation consists of three steps: a) different forward translations from the original US English instrument into Italian, including the drafting of a "reconciliation" version (version 1); b) backward translations from the Italian reconciliation version into US English; c) patient testing: the second version of the questionnaire (obtained after the backward translations) has to be tested on a panel of a minimum of 5 respondents, through cognitive interviewing methodology, in order to obtain the final Italian version of the PedsQL™ Parents Report for Infants (ages 13-24 months). In this report we summarize the third step of this process. To study the content validity, applicability and comprehensibility of our questionnaire translation, we tested it through a qualitative methodology in a sample of parents whose children were hospitalized in the Bambino Gesù Children's Hospital, using two different kinds of interview: 4 parents responded to the questions posed through a "think-aloud" interview, and 3 parents responded to the questionnaire and to a "respondent debriefing" interview. We modified the main question of each section, and also one of the possible answers, in order to remain consistent with the Italian translations appearing in other PedsQL™ instruments. We did not modify the questions within each section because respondents indicated that they are clearly comprehensible and easy to understand.
Mihura, Joni L; Meyer, Gregory J; Dumitrascu, Nicolae; Bombel, George
2016-01-01
We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and reporting only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but did not describe their methodology nor provide evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would have actually reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.
Knowledge and skills of the lamaze certified childbirth educator: results of a job task analysis.
Budin, Wendy C; Gross, Leon; Lothian, Judith A; Mendelson, Jeanne
2014-01-01
Content validity of certification examinations is demonstrated over time with comprehensive job analyses conducted and analyzed by experts, with data gathered from stakeholders. In November 2011, the Lamaze International Certification Council conducted a job analysis update of the 2002 job analysis survey. This article presents the background, methodology, and findings of the job analysis. Changes in the test blueprint based on these findings are presented.
Mertz, Marcel; Sofaer, Neema; Strech, Daniel
2014-09-27
The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?
Zucchetti, Giulia; Rossi, Francesca; Chamorro Vina, Carolina; Bertorello, Nicoletta; Fagioli, Franca
2018-05-01
An exercise program (EP) during cancer treatment seems to be a valid strategy against physiological and quality-of-life impairments, but scientific evidence of benefits among pediatric patients is still limited. This review summarizes the literature focused on randomized controlled trials of EP offered to patients during leukemia and lymphoma treatment. Studies published up to June 2017 were selected from multiple databases and assessed by three independent reviewers for methodological validity. The review identified eight studies, but several types of bias have to be avoided to provide evidence-based recommendations accessible to patients, families, and professionals. © 2018 Wiley Periodicals, Inc.
Lavoie Smith, Ellen M.; Haupt, Rylie; Kelly, James P.; Lee, Deborah; Kanzawa-Lee, Grace; Knoerl, Robert; Bridges, Celia; Alberti, Paola; Prasertsri, Nusara; Donohoe, Clare
2018-01-01
Purpose/Objectives To test the content validity of a 16-item version of the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire–Chemotherapy-Induced Peripheral Neuropathy (QLQ-CIPN20). Research Approach Cross-sectional, prospective, qualitative design. Setting Six outpatient oncology clinics within the University of Michigan Health System’s comprehensive cancer center in Ann Arbor. Participants 25 adults with multiple myeloma or breast, gynecologic, gastrointestinal, or head and neck malignancies experiencing peripheral neuropathy caused by neurotoxic chemotherapy. Methodologic Approach Cognitive interviewing methodology was used to evaluate the content validity of a 16-item version of the QLQ-CIPN20 instrument. Findings Minor changes were made to three questions to enhance readability. Twelve questions were revised to define unfamiliar terminology, clarify the location of neuropathy, and emphasize important aspects. One question was deleted because of clinical and conceptual redundancy with other items, as well as concerns regarding generalizability and social desirability. Interpretation Cognitive interviewing methodology revealed inconsistencies between patients’ understanding and researchers’ intent, along with points that required clarification to avoid misunderstanding. Implications for Nursing Patients’ interpretations of the instrument’s items were inconsistent with the intended meanings of the questions. One item was dropped and others were revised, resulting in greater consistency in how patients, clinicians, and researchers interpreted the items’ meanings and improving the instrument’s content validity. Following additional revision and psychometric testing, the QLQ-CIPN20 could evolve into a gold-standard CIPN patient-reported outcome measure. PMID:28820525
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-09-01
Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched MEDLINE (PubMed interface), Web of Science, Science Direct, PsycINFO, and evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. We also manually studied the reference lists of all finally included articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were as follows: studies that only used ToM tests and single tasks (false belief tasks) for ToM assessment and/or had no description of the structure, validity or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In the primary search, we found 1237 articles across the databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were few valid, reliable and comprehensive ToM tests for normal preschool children; however, we had limitations concerning the included articles. The identified ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, timing and other variables, and they had varying validities and reliabilities. It is therefore recommended that researchers and clinicians select ToM tests according to their psychometric characteristics, validity and reliability.
Pyrolysis Model Development for a Multilayer Floor Covering
McKinnon, Mark B.; Stoliarov, Stanislav I.
2015-01-01
Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite was validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m⁻², which is within the mean experimental uncertainty. PMID:28793556
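The inverse-analysis idea, adjusting model parameters until the simulated mass-loss history matches the measured one, can be sketched in a deliberately simplified form: a one-step first-order Arrhenius reaction under linear heating stands in for ThermaKin's multilayer description, and all values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

R, beta, T0 = 8.314, 0.5, 300.0   # gas constant, heating rate (K/s), initial T (K)

def mass_curve(logA, E_kJ, t):
    """Normalized mass history for one first-order Arrhenius reaction under
    linear heating; a stand-in for the full multilayer pyrolysis model."""
    E = E_kJ * 1e3
    rhs = lambda t_, m: -np.exp(logA) * np.exp(-E / (R * (T0 + beta * t_))) * m
    return solve_ivp(rhs, (t[0], t[-1]), [1.0], t_eval=t, rtol=1e-8).y[0]

t = np.linspace(0.0, 600.0, 200)
observed = mass_curve(np.log(1e12), 150.0, t)                  # synthetic "experiment"
observed = observed + np.random.default_rng(6).normal(0.0, 0.002, t.size)

# Inverse analysis: recover (log A, E) by least squares against the observed curve
loss = lambda p: np.sum((mass_curve(p[0], p[1], t) - observed) ** 2)
fit = minimize(loss, x0=[np.log(1e11), 120.0], method="Nelder-Mead")
print(f"fitted log A = {fit.x[0]:.1f} (true {np.log(1e12):.1f}), "
      f"E = {fit.x[1]:.0f} kJ/mol (true 150)")
```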
Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O
2010-01-01
The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas > or = 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k > or = 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.
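The kappa values quoted above are chance-corrected agreement statistics (Cohen's kappa). A minimal sketch with made-up diagnostic calls from two examiners; the Validation Project's actual data are not reproduced.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over categorical diagnoses."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.unique(np.concatenate([r1, r2]))
    po = np.mean(r1 == r2)                                       # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Usage with hypothetical calls: disc displacement (DD), osteoarthrosis (OA), or none
examiner_a = ["DD", "OA", "none", "DD", "OA", "none", "DD", "none"]
examiner_b = ["DD", "OA", "none", "DD", "none", "none", "DD", "none"]
print(f"kappa = {cohens_kappa(examiner_a, examiner_b):.2f}")
```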
Lami, Francesca; Egberts, Kristine; Ure, Alexandra; Conroy, Rowena; Williams, Katrina
2018-03-01
To systematically review the measurement properties of instruments assessing participation in young people with autism spectrum disorder (ASD). A search was performed in MEDLINE, PsycINFO, and PubMed combining three constructs ('ASD', 'test of participation', 'measurement properties'). Results were restricted to articles including people aged 6 to 29 years. The 2539 identified articles were independently screened by two reviewers. For the included articles, data were extracted using standard forms and their risk of bias was assessed. Nine studies (8 cross-sectional) met the inclusion criteria, providing information on seven different instruments. The total sample included 634 participants, with sex available for 600 (males=494; females=106) and age available for 570, with mean age for these participants 140.58 months (SD=9.11; range=36-624). Included instruments were the School Function Assessment, the Vocational Index, the Children's Assessment of Participation and Enjoyment/Preferences for Activities of Children, the Experience Sampling Method, the Pediatric Evaluation of Disability Inventory Computer Adaptive Test, the Adolescent and Young Adult Activity Card Sort, and the Patient-Reported Outcomes Measurement Information System parent-proxy peer relationships measure. Seven studies assessed reliability and validity; good properties were reported for half of the instruments considered. Most studies (n=6) had a high risk of bias. Overall, the quality of the evidence for each tool was limited. Validation of these instruments, or of others that comprehensively assess participation, is needed. Future studies should follow recommended methodological standards. Seven instruments have been used to assess participation in young people with autism. One instrument, with excellent measurement properties in one study, does not comprehensively assess participation. Studies of three instruments that incorporate a more comprehensive assessment of participation have methodological limitations. Overall, limited evidence exists regarding measurement properties of participation assessments for young people with autism. © 2017 Mac Keith Press.
Validation of highly reliable, real-time knowledge-based systems
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1988-01-01
Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.
Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.
Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M
2018-06-13
This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.
Solar Dynamics Observatory (SDO) HGAS Induced Jitter
NASA Technical Reports Server (NTRS)
Liu, Alice; Blaurock, Carl; Liu, Kuo-Chia; Mule, Peter
2008-01-01
This paper presents the results of a comprehensive assessment of High Gain Antenna System induced jitter on the Solar Dynamics Observatory. The jitter prediction is created using a coupled model of the structural dynamics, optical response, control systems, and stepper motor actuator electromechanical dynamics. The paper gives an overview of the model components, presents the verification processes used to evaluate the models, describes validation and calibration tests and model-to-measurement comparison results, and presents the jitter analysis methodology and results.
The National Criminal Justice Treatment Practices survey: Multilevel survey methods and procedures⋆
Taxman, Faye S.; Young, Douglas W.; Wiersema, Brian; Rhodes, Anne; Mitchell, Suzanne
2007-01-01
The National Criminal Justice Treatment Practices (NCJTP) survey provides a comprehensive inquiry into the nature of programs and services provided to adult and juvenile offenders involved in the justice system in the United States. The multilevel survey design covers topics such as the mission and goals of correctional and treatment programs; organizational climate and culture for providing services; organizational capacity and needs; opinions of administrators and staff regarding rehabilitation, punishment, and services provided to offenders; treatment policies and procedures; and working relationships between correctional and other agencies. The methodology generates national estimates of the availability of programs and services for offenders. This article details the methodology and sampling frame for the NCJTP survey, response rates, and survey procedures. Prevalence estimates of juvenile and adult offenders under correctional control are provided with externally validated comparisons to illustrate the veracity of the methodology. Limitations of the survey methods are also discussed. PMID:17383548
Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.
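To make the increment build-up concrete, the sketch below adds an empirical-style flap increment to a clean section's maximum lift, in the spirit of the methodology described above; the effectiveness factor, deflection, chord-extension ratio, and clean Clmax values are invented placeholders, not the paper's actual correlations.

```python
# Illustrative sketch (not the paper's correlations): a flapped-section
# lift build-up where an empirical flap increment is added to the clean section.
import math

def flap_increment(eta, delta_deg, chord_ratio):
    """Thin-airfoil-style increment: effectiveness x deflection, scaled by
    the extended-chord ratio c'/c. All inputs here are assumed values."""
    return eta * math.radians(delta_deg) * 2.0 * math.pi * chord_ratio

def flapped_clmax(clmax_clean, dcl, k_planform=1.0):
    """Section maximum lift with the flap increment applied."""
    return clmax_clean + k_planform * dcl

dcl = flap_increment(eta=0.30, delta_deg=30.0, chord_ratio=1.10)      # assumed flap
print(f"assumed flap increment dCl = {dcl:.2f}")                      # ~1.09
print(f"flapped section Clmax     = {flapped_clmax(1.45, dcl):.2f}")  # ~2.54
```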
CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Davis, David O.
2015-01-01
Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions and the consequent inconsistent results; (2) undocumented 3D effects (centerline-only measurements); and (3) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence-model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.
[Cancer pain management: Systematic review and critical appraisal of clinical practice guidelines].
Martínez-Nicolás, I; Ángel-García, D; Saturno, P J; López-Soriano, F
2016-01-01
Although several clinical practice guidelines have been developed in recent decades, cancer pain management is still deficient. The purpose of this work was to carry out a comprehensive and systematic literature review of current clinical practice guidelines on cancer pain management, and to critically appraise their methodology and content in order to evaluate their quality and validity for coping with this public health issue. A systematic review was performed in the main databases, covering publications in English, French and Spanish from 2008 to 2013. Reporting and methodological quality was rated with the Appraisal of Guidelines, Research and Evaluation II (AGREE-II) tool, including an inter-rater reliability analysis. Guideline recommendations were extracted and classified into several categories and levels of evidence, aiming to analyse guideline variability and the comprehensiveness of their evidence-based content. Six guidelines were included. Wide variability was found in both the reporting and methodological quality of the guidelines, as well as in the content and the level of evidence of their recommendations. The Scottish Intercollegiate Guidelines Network guideline was the best rated using AGREE-II, while the Sociedad Española de Oncología Médica guideline was the worst rated. The Ministry of Health Malaysia guideline was the most comprehensive, and the Scottish Intercollegiate Guidelines Network guideline was the second. The current guidelines on cancer pain management have limited quality and content. We recommend the Ministry of Health Malaysia and Scottish Intercollegiate Guidelines Network guidelines, whilst the Sociedad Española de Oncología Médica guideline still needs to improve. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
Mohammad Abdulghani, Hamza; G Ponnamperuma, Gominda; Ahmad, Farah; Amin, Zubair
2014-03-01
To evaluate the assessment system of the 'Research Methodology Course' using utility criteria (i.e. validity, reliability, acceptability, educational impact, and cost-effectiveness). This study demonstrates a comprehensive evaluation of an assessment system and suggests a framework for similar courses. Qualitative and quantitative methods were used to evaluate the course assessment components (50 multiple-choice questions (MCQs), 3 short-answer questions (SAQs), and a research project) against the utility criteria. Results of multiple evaluation methods for all the assessment components were collected and interpreted together to arrive at holistic judgments, rather than judgments based on individual methods or individual assessments. Face validity, evaluated using a self-administered questionnaire (response rate 88.7%), disclosed that the students perceived an imbalance in the contents covered by the assessment. This was confirmed by the assessment blueprint. Construct validity was affected by the low correlation between MCQ and SAQ scores (r=0.326). There was a higher correlation between the project and MCQ (r=0.466)/SAQ (r=0.463) scores. Construct validity was also affected by the presence of recall-type MCQs (70%; 35/50), item construction flaws and non-functioning distractors. High discrimination indices (>0.35) were found in MCQs with moderate difficulty indices (0.3-0.7). Reliability of the MCQs was 0.75, which could be improved up to 0.8 by increasing the number of MCQs to at least 70. A positive educational impact was found in the form of the research project assessment driving students to present/publish their work in conferences/peer-reviewed journals. The cost per student to complete the course was US$164.50. The multi-modal evaluation of an assessment system is feasible and provides thorough and diagnostic information. The utility of the assessment system could be further improved by modifying the psychometrically inappropriate assessment items.
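The reported reliability projection (0.75 with 50 MCQs, about 0.8 with at least 70) is consistent with the standard Spearman-Brown prophecy formula; the following quick check simply applies that formula to the abstract's numbers.

```python
# Spearman-Brown prophecy check of the reported figures: reliability 0.75 with
# 50 MCQs, target 0.80 by lengthening the test.

def spearman_brown(rho, k):
    """Predicted reliability when a test is lengthened by factor k."""
    return k * rho / (1.0 + (k - 1.0) * rho)

def length_factor(rho, rho_target):
    """Lengthening factor needed to reach a target reliability."""
    return rho_target * (1.0 - rho) / (rho * (1.0 - rho_target))

k = length_factor(0.75, 0.80)
print(f"items needed: {50 * k:.0f}")    # ~67, consistent with 'at least 70'
print(f"reliability at 70 items: {spearman_brown(0.75, 70 / 50):.3f}")  # ~0.808
```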
Instruments Measuring Integrated Care: A Systematic Review of Measurement Properties.
Bautista, Mary Ann C; Nurjono, Milawaty; Lim, Yee Wei; Dessers, Ezra; Vrijhoef, Hubertus Jm
2016-12-01
Policy Points: Investigations on systematic methodologies for measuring integrated care should coincide with the growing interest in this field of research. A systematic review of instruments provides insights into integrated care measurement, including setting the research agenda for validating available instruments and informing the decision to develop new ones. This study is the first systematic review of instruments measuring integrated care with an evidence synthesis of the measurement properties. We found 209 index instruments measuring different constructs related to integrated care; the strength of evidence on the adequacy of the majority of their measurement properties remained largely unassessed. Integrated care is an important strategy for increasing health system performance. Despite its growing significance, detailed evidence on the measurement properties of integrated care instruments remains vague and limited. Our systematic review aims to provide evidence on the state of the art in measuring integrated care. Our comprehensive systematic review framework builds on the Rainbow Model for Integrated Care (RMIC). We searched MEDLINE/PubMed for published articles on the measurement properties of instruments measuring integrated care and identified eligible articles using a standard set of selection criteria. We assessed the methodological quality of every validation study reported using the COSMIN checklist and extracted data on study and instrument characteristics. We also evaluated the measurement properties of each examined instrument per validation study and provided a best evidence synthesis on the adequacy of measurement properties of the index instruments. From the 300 eligible articles, we assessed the methodological quality of 379 validation studies from which we identified 209 index instruments measuring integrated care constructs. The majority of studies reported on instruments measuring constructs related to care integration (33%) and patient-centered care (49%); fewer studies measured care continuity/comprehensive care (15%) and care coordination/case management (3%). We mapped 84% of the measured constructs to the clinical integration domain of the RMIC, with fewer constructs related to the domains of professional (3.7%), organizational (3.4%), and functional (0.5%) integration. Only 8% of the instruments were mapped to a combination of domains; none were mapped exclusively to the system or normative integration domains. The majority of instruments were administered to either patients (60%) or health care providers (20%). Of the measurement properties, responsiveness (4%), measurement error (7%), and criterion (12%) and cross-cultural validity (14%) were less commonly reported. We found <50% of the validation studies to be of good or excellent quality for any of the measurement properties. Only a minority of index instruments showed strong evidence of positive findings for internal consistency (15%), content validity (19%), and structural validity (7%); with moderate evidence of positive findings for internal consistency (14%) and construct validity (14%). Our results suggest that the quality of measurement properties of instruments measuring integrated care is in need of improvement with the less-studied constructs and domains to become part of newly developed instruments. © 2016 Milbank Memorial Fund.
2014-01-01
Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review as representing a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed systematic review of reasons are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532
Carpenter, Timothy S.; McNerney, M. Windy; Be, Nicholas A.; ...
2016-02-16
Membrane permeability is a key property to consider in drug design, especially when the drugs in question need to cross the blood-brain barrier (BBB). A comprehensive in vivo assessment of the BBB permeability of a drug takes considerable time and financial resources. A current, simplified in vitro model to investigate drug permeability is the Parallel Artificial Membrane Permeability Assay (PAMPA), which generally provides higher throughput and initial quantification of a drug's passive permeability. Computational methods can also be used to predict drug permeability. Our methods are highly advantageous as they do not require the synthesis of the desired drug, and can be implemented rapidly using high-performance computing. In this study, we have used umbrella sampling Molecular Dynamics (MD) methods to assess the passive permeability of a range of compounds through a lipid bilayer. Furthermore, the permeability of these compounds was comprehensively quantified using the PAMPA assay to calibrate and validate the MD methodology. After demonstrating a firm correlation between the two approaches, we then implemented our MD method to quantitatively predict the most permeable potential drug from a series of potential scaffolds. This permeability was then confirmed by the in vitro PAMPA methodology. Therefore, in this work we have illustrated the potential that these computational methods hold as useful tools to help predict a drug's permeability in a faster and more cost-effective manner. Release number: LLNL-ABS-677757.
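For context, a common route from umbrella-sampling output to a permeability coefficient is the inhomogeneous solubility-diffusion integral, 1/P = ∫ exp(ΔG(z)/kT)/D(z) dz; the sketch below evaluates it for a toy Gaussian barrier with constant diffusivity, both assumed for illustration rather than taken from the study.

```python
# Toy application of the inhomogeneous solubility-diffusion model often used to
# turn umbrella-sampling free-energy profiles into a passive permeability:
#   1/P = integral over z of exp(dG(z)/kT) / D(z).
# The Gaussian barrier and constant diffusivity are assumed, not from the study.
import numpy as np

kT = 0.593                                   # kcal/mol at ~298 K
z_ang = np.linspace(-20.0, 20.0, 801)        # membrane normal, Angstrom
dG = 5.0 * np.exp(-z_ang**2 / (2 * 6.0**2))  # assumed 5 kcal/mol central barrier
D = np.full_like(z_ang, 1.0e-5)              # assumed diffusivity, cm^2/s

z_cm = z_ang * 1.0e-8                        # Angstrom -> cm
resistance = np.trapz(np.exp(dG / kT) / D, z_cm)
print(f"toy permeability: {1.0 / resistance:.1e} cm/s")
```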
Validation of a Three-Dimensional Method for Counting and Sizing Podocytes in Whole Glomeruli
van der Wolde, James W.; Schulze, Keith E.; Short, Kieran M.; Wong, Milagros N.; Bensley, Jonathan G.; Cullen-McEwen, Luise A.; Caruana, Georgina; Hokke, Stacey N.; Li, Jinhua; Firth, Stephen D.; Harper, Ian S.; Nikolic-Paterson, David J.; Bertram, John F.
2016-01-01
Podocyte depletion is sufficient for the development of numerous glomerular diseases and can be absolute (loss of podocytes) or relative (reduced number of podocytes per volume of glomerulus). Commonly used methods to quantify podocyte depletion introduce bias, whereas gold standard stereologic methodologies are time consuming and impractical. We developed a novel approach for assessing podocyte depletion in whole glomeruli that combines immunofluorescence, optical clearing, confocal microscopy, and three-dimensional analysis. We validated this method in a transgenic mouse model of selective podocyte depletion, in which we determined dose-dependent alterations in several quantitative indices of podocyte depletion. This new approach provides a quantitative tool for the comprehensive and time-efficient analysis of podocyte depletion in whole glomeruli. PMID:26975438
Knowledge-based system verification and validation
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1990-01-01
The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.
Children's comprehension of an unfamiliar speaker accent: a review.
Harte, Jennifer; Oliveira, Ana; Frizelle, Pauline; Gibbon, Fiona
2016-05-01
The effect of speaker accent on listeners' comprehension has become a key focus of research given the increasing cultural diversity of society and the increased likelihood of an individual encountering a clinician with an unfamiliar accent. To review the studies exploring the effect of an unfamiliar accent on language comprehension in typically developing (TD) children and in children with speech and language difficulties. This review provides a methodological analysis of the relevant studies by exploring the challenges facing this field of research and highlighting the current gaps in the literature. A total of nine studies were identified using a systematic search and organized under studies investigating the effect of speaker accent on language comprehension in (1) TD children and (2) children with speech and/or language difficulties. This review synthesizes the evidence that an unfamiliar speaker accent may lead to a breakdown in language comprehension in TD children and in children with speech difficulties. Moreover, it exposes the inconsistencies found in this field of research and highlights the lack of studies investigating the effect of speaker accent in children with language deficits. Overall, research points towards a developmental trend in children's ability to comprehend accent-related variations in speech. Vocabulary size, language exposure, exposure to different accents and adequate processing resources (e.g. attention) seem to play a key role in children's ability to understand unfamiliar accents. This review uncovered some inconsistencies in the literature that highlight the methodological issues that must be considered when conducting research in this field. It explores how such issues may be controlled in order to increase the validity and reliability of future research. Key clinical implications are also discussed. © 2016 Royal College of Speech and Language Therapists.
Lessons Learned From Methodological Validation Research in E-Epidemiology.
Kesse-Guyot, Emmanuelle; Assmann, Karen; Andreeva, Valentina; Castetbon, Katia; Méjean, Caroline; Touvier, Mathilde; Salanave, Benoît; Deschamps, Valérie; Péneau, Sandrine; Fezeu, Léopold; Julia, Chantal; Allès, Benjamin; Galan, Pilar; Hercberg, Serge
2016-10-18
Traditional epidemiological research methods exhibit limitations leading to high logistic, human, and financial burden. The continued development of innovative digital tools has the potential to overcome many of the existing methodological issues. Nonetheless, Web-based studies remain relatively uncommon, partly due to persistent concerns about validity and generalizability. The objective of this viewpoint is to summarize findings from methodological studies carried out in the NutriNet-Santé study, a French Web-based cohort study. On the basis of previous findings from the NutriNet-Santé e-cohort (>150,000 participants are currently included), we synthesized e-epidemiological knowledge on sample representativeness, advantageous recruitment strategies, and data quality. Overall, the reported findings support the usefulness of Web-based studies in overcoming common methodological deficiencies in epidemiological research, in particular with regard to data quality (eg, the concordance for body mass index [BMI] classification was 93%), reduced social desirability bias, and access to a wide range of participant profiles, including hard-to-reach subgroups such as young (<25 years; 12.30% [15,118/122,912]) and older (≥65 years; 6.60% [8112/122,912]) people, the unemployed or homemakers (12.60% [15,487/122,912]), and people with low education (38.50% [47,312/122,912]). However, some selection bias remained (78.00% [95,871/122,912] of the participants were women, and 61.50% [75,590/122,912] had postsecondary education), which is an inherent aspect of cohort study inclusion; other specific types of bias may also have occurred. Given the rapidly growing access to the Internet across social strata, the recruitment of participants with diverse socioeconomic profiles and health risk exposures was highly feasible. Continued efforts concerning the identification of specific biases in e-cohorts and the collection of comprehensive and valid data are still needed. This summary of methodological findings from the NutriNet-Santé cohort may help researchers in the development of the next generation of high-quality Web-based epidemiological studies.
Heuer, Sabine; Hallowell, Brooke
2015-01-01
Numerous authors report that people with aphasia have greater difficulty allocating attention than people without neurological disorders. Studying how attention deficits contribute to language deficits is important. However, existing methods for indexing attention allocation in people with aphasia pose serious methodological challenges. Eye-tracking methods have great potential to address such challenges. We developed and assessed the validity of a new dual-task method incorporating eye tracking to assess attention allocation. Twenty-six adults with aphasia and 33 control participants completed auditory sentence comprehension and visual search tasks. To test whether the new method validly indexes well-documented patterns in attention allocation, demands were manipulated by varying task complexity in single- and dual-task conditions. Differences in attention allocation were indexed via eye-tracking measures. For all participants significant increases in attention allocation demands were observed from single- to dual-task conditions and from simple to complex stimuli. Individuals with aphasia had greater difficulty allocating attention with greater task demands. Relationships between eye-tracking indices of comprehension during single and dual tasks and standardized testing were examined. Results support the validity of the novel eye-tracking method for assessing attention allocation in people with and without aphasia. Clinical and research implications are discussed. PMID:25913549
The Development and Validation of a Rapid Assessment Tool of Primary Care in China
Mei, Jie; Liang, Yuan; Shi, LeiYu; Zhao, JingGe; Wang, YuTan; Kuang, Li
2016-01-01
Introduction. With Chinese health care reform increasingly emphasizing the importance of primary care, the need for a tool to evaluate primary care performance and service delivery is clear. This study presents a methodology for a rapid assessment of primary care organizations and service delivery in China. Methods. The study translated and adapted the Primary Care Assessment Tool-Adult Edition (PCAT-AE) into a Chinese version to measure core dimensions of primary care, namely, first contact, continuity, comprehensiveness, and coordination. A cross-sectional survey was conducted to assess the validity and reliability of the Chinese Rapid Primary Care Assessment Tool (CR-PCAT). Eight community health centers in Guangdong province have been selected to participate in the survey. Results. A total of 1465 effective samples were included for data analysis. Eight items were eliminated following principal component analysis and reliability testing. The principal component analysis extracted five multiple-item scales (first contact utilization, first contact accessibility, ongoing care, comprehensiveness, and coordination). The tests of scaling assumptions were basically met. Conclusion. The standard psychometric evaluation indicates that the scales have achieved relatively good reliability and validity. The CR-PCAT provides a rapid and reliable measure of four core dimensions of primary care, which could be applied in various scenarios. PMID:26885509
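As an illustration of the item-reduction step described above (principal component analysis followed by reliability testing), the sketch below flags items whose largest loading on the retained components falls below a cutoff; the simulated data, five-component choice, and 0.4 cutoff are assumptions for illustration, not the CR-PCAT analysis itself.

```python
# Illustrative item reduction: PCA on a respondent-by-item matrix, flagging
# items with weak loadings on the retained components. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
items = rng.normal(size=(1465, 20))              # toy respondent-by-item matrix
items[:, :15] += rng.normal(size=(1465, 5)) @ rng.normal(size=(5, 15))  # 5 factors

X = (items - items.mean(0)) / items.std(0)       # standardize items
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
loadings = eigvec[:, -5:] * np.sqrt(eigval[-5:]) # loadings on top 5 components

weak = np.where(np.abs(loadings).max(axis=1) < 0.4)[0]  # assumed 0.4 cutoff
print("candidate items to drop:", weak)
```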
Pu, Xia; Ye, Yuanqing; Wu, Xifeng
2014-01-01
Despite the advances made in cancer management over the past few decades, improvements in cancer diagnosis and prognosis are still poor, highlighting the need for individualized strategies. Toward this goal, risk prediction models and molecular diagnostic tools have been developed, tailoring each step of risk assessment from diagnosis to treatment and clinical outcomes based on the individual's clinical, epidemiological, and molecular profiles. These approaches hold increasing promise for delivering a new paradigm to maximize the efficiency of cancer surveillance and efficacy of treatment. However, they require stringent study design, methodology development, comprehensive assessment of biomarkers and risk factors, and extensive validation to ensure their overall usefulness for clinical translation. In the current study, the authors conducted a systematic review using breast cancer as an example and provide general guidelines for risk prediction models and molecular diagnostic tools, including development, assessment, and validation. © 2013 American Cancer Society.
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
Zhang, Xin; Wu, Yuxia; Ren, Pengwei; Liu, Xueting; Kang, Deying
2015-10-30
To explore the relationship between the external validity and the internal validity of hypertension RCTs conducted in China. Comprehensive literature searches with advanced search strategies were performed in Medline, Embase, the Cochrane Central Register of Controlled Trials (CCTR), CBMdisc (Chinese biomedical literature database), CNKI (China National Knowledge Infrastructure/China Academic Journals Full-text Database) and VIP (Chinese scientific journals database) to locate hypertension RCTs. The risk of bias in the RCTs was assessed with a modified Jadad scale, and studies with scores of 3 or more were included for the evaluation of external validity. A data extraction form including 4 domains and 25 items was used to explore the relationship between external and internal validity. Statistical analyses were performed using SPSS software, version 21.0 (SPSS, Chicago, IL). 226 hypertension RCTs were included in the final analysis. RCTs conducted in university-affiliated hospitals (P < 0.001) or secondary/tertiary hospitals (P < 0.001) scored higher on internal validity. Multi-center studies (median = 4.0, IQR = 2.0) scored higher on internal validity than single-center studies (median = 3.0, IQR = 1.0) (P < 0.001). Funding-supported trials had better methodological quality (P < 0.001). In addition, reporting of inclusion criteria was also associated with better internal validity (P = 0.004). Multivariate regression indicated that sample size, industry funding, quality of life (QOL) taken as an outcome measure, and a university-affiliated hospital as the trial setting were statistically significant (P < 0.001, P < 0.001, P = 0.001, P = 0.006 respectively). Several components related to the external validity of RCTs are associated with internal validity, although the two do not stand in a simple relationship to each other. Given the generally poor reporting, other possible links between the two need to be traced in future methodological research.
Methodological quality of meta-analyses of single-case experimental studies.
Jamshidi, Laleh; Heyvaert, Mieke; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M; Moeyaert, Mariola; Beretvas, S Natasha; Onghena, Patrick; Van den Noortgate, Wim
2017-12-28
Methodological rigor is a fundamental factor in the validity and credibility of the results of a meta-analysis. Following an increasing interest in single-case experimental design (SCED) meta-analyses, the current study investigates the methodological quality of SCED meta-analyses. We assessed the methodological quality of 178 SCED meta-analyses published between 1985 and 2015 through the modified Revised-Assessment of Multiple Systematic Reviews (R-AMSTAR) checklist. The main finding of the current review is that the methodological quality of the SCED meta-analyses has increased over time, but is still low according to the R-AMSTAR checklist. A remarkable percentage of the studies (93.80% of the included SCED meta-analyses) did not even reach the midpoint score (22, on a scale of 0-44). The mean and median methodological quality scores were 15.57 and 16, respectively. Relatively high scores were observed for "providing the characteristics of the included studies" and "doing comprehensive literature search". The key areas of deficiency were "reporting an assessment of the likelihood of publication bias" and "using the methods appropriately to combine the findings of studies". Although the results of the current review reveal that the methodological quality of the SCED meta-analyses has increased over time, still more efforts are needed to improve their methodological quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2017-02-01
Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular for detecting random CS components, which are typical symptoms, for instance, of rolling element bearing faults. Recent research has shifted towards the extension of existing CS tools - originally devised for constant-speed conditions - to the case of variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal objective of this paper is to organize this dispersed research into a structured, comprehensive framework. Three original contributions are provided. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods, namely, the improved synchronous average, the cepstrum prewhitening, and the generalized synchronous average, used for suppressing the deterministic part. Also, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are experimentally validated on simulated and real-world vibration signals.
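For readers new to the central tool, a minimal squared envelope spectrum computation on a synthetic constant-speed signal looks as follows; the fault rate, carrier resonance, and noise level are illustrative choices, and the variable-speed order-tracking step discussed in the paper is deliberately omitted.

```python
# Minimal squared envelope spectrum (SES) on a synthetic constant-speed signal.
# Fault frequency, carrier resonance and noise level are illustrative choices.
import numpy as np
from scipy.signal import hilbert

fs = 20_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_fault, f_res = 97.0, 3_000.0                # assumed fault rate and resonance
bursts = (np.sin(2 * np.pi * f_fault * t) > 0.99).astype(float)  # impulse train
x = bursts * np.sin(2 * np.pi * f_res * t) + 0.1 * np.random.randn(t.size)

env2 = np.abs(hilbert(x)) ** 2                # squared envelope
env2 -= env2.mean()                           # remove DC before the spectrum
ses = np.abs(np.fft.rfft(env2)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[1:][np.argmax(ses[1:])]          # skip the zero bin
print(f"dominant envelope component: {peak:.0f} Hz (fault rate or a harmonic)")
```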
Modern data science for analytical chemical data - A comprehensive review.
Szymańska, Ewa
2018-10-22
Efficient and reliable analysis of chemical analytical data is a great challenge due to the increase in data size, variety and velocity. New methodologies, approaches and methods are being proposed not only by the chemometrics community but also by other data science communities to extract relevant information from big datasets and deliver its value to different applications. Besides the common goal of big data analysis, different perspectives on and terms for big data are being discussed in the scientific literature and public media. The aim of this comprehensive review is to present common trends in the analysis of chemical analytical data across different data science fields, together with their data type-specific and generic challenges. Firstly, common data science terms used in different data science fields are summarized and discussed. Secondly, systematic methodologies to plan and run big data analysis projects are presented together with their steps. Moreover, different analysis aspects such as assessing data quality, selecting data pre-processing strategies, data visualization and model validation are considered in more detail. Finally, an overview of standard and new data analysis methods is provided and their suitability for big analytical chemical datasets is shortly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Evolving Relevance of Neuroproteomics in Alzheimer's Disease.
Lista, Simone; Zetterberg, Henrik; O'Bryant, Sid E; Blennow, Kaj; Hampel, Harald
2017-01-01
Substantial progress in the understanding of the biology of Alzheimer's disease (AD) has been achieved over the past decades. The early detection and diagnosis of AD and other age-related neurodegenerative diseases, however, remain a challenging scientific frontier. Therefore, the comprehensive discovery (relating to all individual, converging or diverging biochemical disease mechanisms), development, validation, and qualification of standardized biological markers with diagnostic and prognostic functions and a precise performance profile regarding specificity, sensitivity, and positive and negative predictive value are warranted. Methodological innovations in the area of exploratory high-throughput technologies, such as sequencing, microarrays, and mass spectrometry-based analyses of proteins/peptides, have led to the generation of large global molecular datasets from a multiplicity of biological systems, such as biological fluids, cells, tissues, and organs. Such methodological progress has shifted attention to the execution of hypothesis-independent comprehensive exploratory analyses (as opposed to the classical hypothesis-driven candidate approach), with the aim of fully understanding biological systems in physiology and disease as a whole. The systems biology paradigm integrates experimental biology with accurate and rigorous computational modelling to describe and predict the dynamic features of biological systems. The use of dynamically evolving technological platforms, including mass spectrometry, in the area of proteomics has made it possible to accelerate the process of biomarker discovery and validation and thereby significantly refine the diagnosis of AD. Currently, proteomics - which is part of the systems biology paradigm - is regarded as one of the dominant mature sciences needed for the effective exploratory discovery of prospective biomarker candidates expected to play an effective role in aiding the early detection, diagnosis, prognosis, and therapy development in AD.
Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli
2018-04-24
Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. ©2018 AACR. ©2018 American Association for Cancer Research.
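The prevalence-adjusted positive predictive value reported above follows directly from Bayes' rule; a back-of-envelope version with assumed sensitivity, specificity, and variant prevalence (not the study's data) is shown below.

```python
# Bayes'-rule PPV with assumed inputs; none of these numbers come from the study.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# e.g. a highly specific assay applied where 30% of samples carry the variant
print(f"PPV = {ppv(0.999, 0.9999, 0.30):.4f}")   # ~0.9998, i.e. >98%
```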
King, Wade; Ahmed, Shihab U; Baisden, Jamie; Patel, Nileshkumar; Kennedy, David J; Duszynski, Belinda; MacVicar, John
2015-02-01
To assess the evidence on the validity of sacral lateral branch blocks and the effectiveness of sacral lateral branch thermal radiofrequency neurotomy in managing sacroiliac complex pain. Systematic review with comprehensive analysis of all published data. Six reviewers searched the literature on sacral lateral branch interventions. Each assessed the methodologies of studies found and the quality of the evidence presented. The outcomes assessed were diagnostic validity and effectiveness of treatment for sacroiliac complex pain. The evidence found was appraised in accordance with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) system of evaluating scientific evidence. The searches yielded two primary publications on sacral lateral branch blocks and 15 studies of the effectiveness of sacral lateral branch thermal radiofrequency neurotomy. One study showed multisite, multidepth sacral lateral branch blocks can anesthetize the posterior sacroiliac ligaments. Therapeutic studies show sacral lateral branch thermal radiofrequency neurotomy can relieve sacroiliac complex pain to some extent. The evidence of the validity of these blocks and the effectiveness of this treatment were rated as moderate in accordance with the GRADE system. The literature on sacral lateral branch interventions is sparse. One study demonstrates the face validity of multisite, multidepth sacral lateral branch blocks for diagnosis of posterior sacroiliac complex pain. Some evidence of moderate quality exists on therapeutic procedures, but it is insufficient to determine the indications and effectiveness of sacral lateral branch thermal radiofrequency neurotomy, and more research is required. Wiley Periodicals, Inc.
Validity of contents of a paediatric critical comfort scale using mixed methodology.
Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A
Critical illness in paediatric patients includes acute conditions in a healthy child as well as exacerbations of chronic disease, and these situations must therefore be clinically managed in critical care units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients. To that end, instruments are required that correctly assess critical comfort. To describe the process of validating the content of a paediatric critical comfort scale using mixed-methods research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish was carried out using the translation and back-translation method. Its content was then evaluated using mixed-methods research. This second step was divided into a quantitative stage, in which an ad hoc questionnaire was used to assess the relevance and wording of each scale item, and a qualitative stage comprising two meetings with health professionals, patients and a family member, following the Delphi method recommendations. All scale items obtained a content validity index >0.80, except physical movement, whose relevance scored 0.76. Global content validity of the scale was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. The use of a mixed-methods research methodology during the scale content validation phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously in waste treatment and disposal and hinders the further development of the industry. It has therefore become an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while production competitiveness is maintained. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time the electroplating industry can (i) systematically use available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
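As a minimal sketch of the kind of first-principles rinsing model the dissertation describes, consider a well-mixed rinse-tank mass balance driven by drag-out from the plating bath; every parameter value below is assumed for illustration.

```python
# Well-mixed rinse-tank mass balance, V*dC/dt = q_d*C_bath - (Q_w + q_d)*C,
# of the kind used to analyze rinsing operations. All parameters are assumed.
V = 200.0       # rinse tank volume, L
Q_w = 8.0       # fresh-water feed, L/min
q_d = 0.05      # drag-out carried in on parts, L/min
C_bath = 100.0  # plating bath concentration, g/L

C, t, dt = 0.0, 0.0, 0.1
while t < 240.0:                                     # four hours of operation
    C += dt * (q_d * C_bath - (Q_w + q_d) * C) / V   # explicit Euler step
    t += dt
steady = q_d * C_bath / (Q_w + q_d)
print(f"rinse concentration after 4 h: {C:.3f} g/L (steady state {steady:.3f})")
```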
A methodological analysis of chaplaincy research: 2000-2009.
Galek, Kathleen; Flannelly, Kevin J; Jankowski, Katherine R B; Handzo, George F
2011-01-01
The present article presents a comprehensive review and analysis of quantitative research conducted in the United States on chaplaincy and closely related topics published between 2000 and 2009. A combined search strategy identified 49 quantitative studies in 13 journals. The analysis focuses on the methodological sophistication of the studies, compared to earlier research on chaplaincy and pastoral care. Cross-sectional surveys of convenience samples still dominate the field, but sample sizes have increased somewhat over the past three decades. Reporting of the validity and reliability of measures continues to be low, although reporting of response rates has improved. Improvements in the use of inferential statistics and statistical controls were also observed, compared to previous research. The authors conclude that more experimental research is needed on chaplaincy, along with an increased use of hypothesis testing, regardless of the research designs that are used.
Timmer, M A; Gouw, S C; Feldman, B M; Zwagemaker, A; de Kleijn, P; Pisters, M F; Schutgens, R E G; Blanchette, V; Srivastava, A; David, J A; Fischer, K; van der Net, J
2018-03-01
Monitoring clinical outcome in persons with haemophilia (PWH) is essential in order to provide optimal treatment for individual patients and compare the effectiveness of treatment strategies. Experience with measurement of activities and participation in haemophilia is limited and consensus on preferred tools is lacking. The aim of this study was to give a comprehensive overview of the measurement properties of a selection of commonly used tools developed to assess activities and participation in PWH. Electronic databases were searched for articles that reported on reliability, validity or responsiveness of predetermined measurement tools (5 self-reported and 4 performance-based measurement tools). Methodological quality of the studies was assessed according to the COSMIN checklist. Best evidence synthesis was used to summarize evidence on the measurement properties. The search resulted in 3453 unique hits. Forty-two articles were included. The self-reported Haemophilia Activity List (HAL), Pediatric HAL (PedHAL) and the performance-based Functional Independence Score in Haemophilia (FISH) were studied most extensively. Methodological quality of the studies was limited. Measurement error, cross-cultural validity and responsiveness have been insufficiently evaluated. Albeit based on limited evidence, the measurement properties of the PedHAL, HAL and FISH are currently considered most satisfactory. Further research needs to focus on measurement error, responsiveness, interpretability and cross-cultural validity of the self-reported tools, and on the validity of performance-based tools able to assess limitations in sports and leisure activities. © 2018 The Authors. Haemophilia Published by John Wiley & Sons Ltd.
Methodological triangulation: an approach to understanding data.
Bekhet, Abir K; Zauszniewski, Jaclene A
2012-01-01
To describe the use of methodological triangulation in a study of how people who had moved to retirement communities were adjusting. Methodological triangulation involves using more than one kind of method to study a phenomenon. It has been found to be beneficial in providing confirmation of findings, more comprehensive data, increased validity and enhanced understanding of studied phenomena. While many researchers have used this well-established technique, there are few published examples of its use. The authors used methodological triangulation in their study of people who had moved to retirement communities in Ohio, US. A blended qualitative and quantitative approach was used. The collected qualitative data complemented and clarified the quantitative findings by helping to identify common themes. Qualitative data also helped in understanding interventions for promoting 'pulling' factors and for overcoming 'pushing' factors of participants. The authors used focused research questions to reflect the research's purpose and four evaluative criteria--'truth value', 'applicability', 'consistency' and 'neutrality'--to ensure rigour. This paper provides an example of how methodological triangulation can be used in nursing research. It identifies challenges associated with methodological triangulation, recommends strategies for overcoming them, provides a rationale for using triangulation and explains how to maintain rigour. Methodological triangulation can be used to enhance the analysis and the interpretation of findings. As data are drawn from multiple sources, it broadens the researcher's insight into the different issues underlying the phenomena being studied.
Evaluation of the National Solar Radiation Database (NSRDB) Using Ground-Based Measurements
NASA Astrophysics Data System (ADS)
Xie, Y.; Sengupta, M.; Habte, A.; Lopez, A.
2017-12-01
Solar resource information is essential for a wide spectrum of applications including renewable energy, climate studies, and solar forecasting. It can be obtained from ground-based measurement stations and/or from modeled data sets. While measurements provide data for the development and validation of solar resource models and other applications, modeled data extend coverage and address the need for increased accuracy and spatial and temporal resolution. The National Renewable Energy Laboratory (NREL) has developed, and regularly updates, modeled solar resource data through the National Solar Radiation Database (NSRDB). The recent NSRDB dataset was developed using the physics-based Physical Solar Model (PSM) and provides gridded solar irradiance (global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance) at a 4-km by 4-km spatial and half-hourly temporal resolution covering the 18 years from 1998 to 2015. A comprehensive validation of the performance of the NSRDB (1998-2015) was conducted to quantify the accuracy of the spatial and temporal variability of the solar radiation data. Further, the study assessed the ability of the NSRDB (1998-2015) to accurately capture inter-annual variability, which is essential information for solar energy conversion projects and grid integration studies. Comparisons of the NSRDB (1998-2015) with nine selected ground-measurement stations were conducted under both clear- and cloudy-sky conditions. These locations provide high-quality data covering a variety of geographical locations and climates. The comparison of the NSRDB to the ground-based data demonstrated that biases were within +/-5% for GHI and +/-10% for DNI. A comprehensive uncertainty estimation methodology was established to analyze the performance of the gridded NSRDB; it includes all sources of uncertainty at various time-averaged periods, a method that is not often used in model evaluation. Further, the study analyzed the inter-annual variability and mean anomalies of the 18 years of solar radiation data. This presentation will outline the validation methodology and provide detailed results of the comparison.
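As a minimal illustration of the bias quantification described in this abstract, the sketch below computes percent mean bias error and percent RMSE for paired modeled and measured irradiance series; the function names and the synthetic half-hourly GHI data are illustrative assumptions, not part of the NSRDB validation code.

```python
import numpy as np

def percent_mbe(modeled, measured):
    """Percent mean bias error; positive values mean the model over-predicts."""
    return 100.0 * np.mean(modeled - measured) / np.mean(measured)

def percent_rmse(modeled, measured):
    """Percent root-mean-square error, normalized by the measured mean."""
    return 100.0 * np.sqrt(np.mean((modeled - measured) ** 2)) / np.mean(measured)

# Illustrative use with synthetic half-hourly GHI values (W/m^2):
rng = np.random.default_rng(0)
measured_ghi = rng.uniform(100.0, 900.0, size=17520)  # one year, half-hourly
modeled_ghi = measured_ghi * 1.03 + rng.normal(0.0, 40.0, size=measured_ghi.size)
print(f"GHI MBE:  {percent_mbe(modeled_ghi, measured_ghi):+.1f}%")  # ~ +3%
print(f"GHI RMSE: {percent_rmse(modeled_ghi, measured_ghi):.1f}%")
```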
Enviroplan—a summary methodology for comprehensive environmental planning and design
Robert Allen Jr.; George Nez; Fred Nicholson; Larry Sutphin
1979-01-01
This paper will discuss a comprehensive environmental assessment methodology that includes a numerical method for visual management and analysis. This methodology employs resource and human activity units as a means to produce a visual form unit which is the fundamental unit of the perceptual environment. The resource unit is based on the ecosystem as the fundamental...
Kim, Hee-Ju; Abraham, Ivo
2017-01-01
Evidence is needed on the clinicometric properties of single-item or short measures as alternatives to comprehensive measures. We examined whether two single-item fatigue measures (i.e., Likert scale, numeric rating scale) or a short fatigue measure were comparable to a comprehensive measure in reliability (i.e., internal consistency and test-retest reliability) and validity (i.e., convergent, concurrent, and predictive validity) in Korean young adults. For this quantitative study, we selected the Functional Assessment of Chronic Illness Therapy-Fatigue for the comprehensive measure and the Profile of Mood States-Brief, Fatigue subscale for the short measure; and constructed two single-item measures. A total of 368 students from four nursing colleges in South Korea participated. We used Cronbach's alpha and item-total correlation for internal consistency reliability and intraclass correlation coefficient for test-retest reliability. We assessed Pearson's correlation with a comprehensive measure for convergent validity, with perceived stress level and sleep quality for concurrent validity and the receiver operating characteristic curve for predictive validity. The short measure was comparable to the comprehensive measure in internal consistency reliability (Cronbach's alpha=0.81 vs. 0.88); test-retest reliability (intraclass correlation coefficient=0.66 vs. 0.61); convergent validity (r with comprehensive measure=0.79); concurrent validity (r with perceived stress=0.55, r with sleep quality=0.39) and predictive validity (area under curve=0.88). Single-item measures were not comparable to the comprehensive measure. A short fatigue measure exhibited similar levels of reliability and validity to the comprehensive measure in Korean young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
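The reliability comparison reported above rests on standard statistics such as Cronbach's alpha. The following is a minimal sketch of that computation for an (n respondents × k items) score matrix; the synthetic 5-item Likert data are an assumption for demonstration only, not the study's instruments.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative: 368 respondents answering a 5-item fatigue scale (1-5 Likert).
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(368, 1))
scores = np.clip(np.round(latent + rng.normal(0, 0.7, size=(368, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```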
Burns, C
1991-01-01
Pediatric nurse practitioners (PNPs) need an integrated, comprehensive classification that includes nursing, disease, and developmental diagnoses to effectively describe their practice. No such classification exists. Further, methodologic studies to help evaluate the content validity of any nursing taxonomy are unavailable. A conceptual framework was derived. Then 178 diagnoses were selected from the North American Nursing Diagnosis Association (NANDA) 1986 list, the International Classification of Diseases, the Diagnostic and Statistical Manual, Third Revision, and other sources. This framework identified and listed, with definitions, three domains of diagnoses: Developmental Problems, Diseases, and Daily Living Problems. The diagnoses were rated on a 4-point scale (4 = highly related to 1 = not related) by a panel of eight expert pediatric nurses and placed into the three domains. Diagnoses assigned to the Daily Living Problems domain were then sorted into the 11 Functional Health Patterns described by Gordon (1987). Reliability was measured using proportions of agreement and Kappas. Content validity of the groups created was measured using indices of content validity and average congruency percentages. The experts used a new method to sort the diagnoses in a new way that decreased overlaps among the domains. The Developmental and Disease domains were judged reliable and valid. The Daily Living domain of nursing diagnoses showed marginally acceptable validity with acceptable reliability. Six Functional Health Patterns were judged reliable and valid, mixed results were obtained for four categories, and the Coping/Stress Tolerance category was judged reliable but not valid using either test. There were considerable differences between the panel's, Gordon's (1987), and NANDA's clustering of NANDA diagnoses. This study defines the diagnostic practice of nurses from a holistic, patient-centered perspective. It is the first study to use quantitative methods to test a diagnostic classification system for nursing. The classification model could also be adapted for other nurse specialties.
Multi-viewpoint clustering analysis
NASA Technical Reports Server (NTRS)
Mehrotra, Mala; Wild, Chris
1993-01-01
In this paper, we address the feasibility of partitioning rule-based systems into a number of meaningful units to enhance the comprehensibility, maintainability and reliability of expert systems software. Preliminary results have shown that no single structuring principle or abstraction hierarchy is sufficient to understand complex knowledge bases. We therefore propose the Multi View Point - Clustering Analysis (MVP-CA) methodology to provide multiple views of the same expert system. We present the results of using this approach to partition a deployed knowledge-based system that navigates the Space Shuttle's entry. We also discuss the impact of this approach on verification and validation of knowledge-based systems.
Kristman, Vicki L; Borg, Jörgen; Godbolt, Alison K; Salmi, L Rachid; Cancelliere, Carol; Carroll, Linda J; Holm, Lena W; Nygren-de Boussard, Catharina; Hartvigsen, Jan; Abara, Uko; Donovan, James; Cassidy, J David
2014-03-01
The International Collaboration on Mild Traumatic Brain Injury (MTBI) Prognosis performed a comprehensive search and critical review of the literature from 2001 to 2012 to update the 2002 best-evidence synthesis conducted by the World Health Organization Collaborating Centre for Neurotrauma, Prevention, Management and Rehabilitation Task Force on the prognosis of MTBI. Of 299 relevant studies, 101 were accepted as scientifically admissible. The methodological quality of the research literature on MTBI prognosis has not improved since the 2002 Task Force report. There are still many methodological concerns and knowledge gaps in the literature. Here we report and make recommendations on how to avoid methodological flaws found in prognostic studies of MTBI. Additionally, we discuss issues of MTBI definition and identify topic areas in need of further research to advance the understanding of prognosis after MTBI. Priority research areas include but are not limited to the use of confirmatory designs, studies of measurement validity, focus on the elderly, attention to litigation/compensation issues, the development of validated clinical prediction rules, the use of MTBI populations other than hospital admissions, continued research on the effects of repeated concussions, longer follow-up times with more measurement periods in longitudinal studies, an assessment of the differences between adults and children, and an account for reverse causality and differential recall bias. Well-conducted studies in these areas will aid our understanding of MTBI prognosis and assist clinicians in educating and treating their patients with MTBI. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Tsimicalis, Argerie; Le May, Sylvie; Stinson, Jennifer; Rennick, Janet; Vachon, Marie-France; Louli, Julie; Bérubé, Sarah; Treherne, Stephanie; Yoon, Sunmoo; Nordby Bøe, Trude; Ruland, Cornelia
Sisom is an interactive tool designed to help children communicate their cancer symptoms. Important design issues relevant to other cancer populations remain unexplored. This single-site, descriptive, qualitative study was conducted to linguistically validate Sisom with a group of French-speaking children with cancer, their parents, and health care professionals. The linguistic validation process included 6 steps: (1) forward translation, (2) backward translation, (3) patient testing, (4) production of a Sisom French version, (5) patient testing this version, and (6) production of the final Sisom French prototype. Five health care professionals and 10 children and their parents participated in the study. Health care professionals oversaw the translation process providing clinically meaningful suggestions. Two rounds of patient testing, which included parental participation, resulted in the following themes: (1) comprehension, (2) suggestions for improving the translations, (3) usability, (4) parental engagement, and (5) overall impression. Overall, Sisom was well received by participants who were forthcoming with input and suggestions for improving the French translations. Our proposed methodology may be replicated for the linguistic validation of other e-health tools.
Trujillo-Orrego, N; Pineda, D A; Uribe, L H
2012-03-01
The diagnostic criteria for attention deficit hyperactivity disorder (ADHD) were defined by the American Psychiatric Association in the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), and by the World Health Organization in the ICD-10. The American Psychiatric Association used an internal validity analysis to select specific behavioral symptoms associated with the disorder and to build five cross-cultural criteria for use in categorical diagnosis. The DSM has been used by clinicians and researchers as a valid and stable approach since 1968. We conducted a systematic review of the scientific literature in Spanish and English aimed at identifying the historical origins that support ADHD as a psychiatric construct. This comprehensive review explored the concepts of minimal brain dysfunction, hyperactivity, inattention, and impulsivity from 1932 to 2011. The paper summarizes all the DSM versions that include a definition of ADHD or its equivalent, and points out the statistical and methodological approaches implemented to define ADHD as a valid epidemiological and psychometric construct. Finally, the paper discusses some considerations and suggestions for new versions of the manual.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level in the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
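In the spirit of the quantitative, statistics-based validation proposed above, the sketch below bootstraps a confidence interval for the mean absolute error between a simulated load-model response and field measurements; the metric, variable names, and synthetic disturbance data are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

def model_accuracy_ci(simulated, measured, n_boot=2000, level=0.95, seed=0):
    """Bootstrap confidence interval for the mean absolute error between
    simulated load-model response and field measurements."""
    rng = np.random.default_rng(seed)
    err = np.abs(np.asarray(simulated) - np.asarray(measured))
    boots = [rng.choice(err, size=err.size, replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return err.mean(), (lo, hi)

# Illustrative: compare simulated vs measured active power (MW) after a disturbance.
rng = np.random.default_rng(2)
measured = 100 + 5 * np.sin(np.linspace(0, 10, 500))
simulated = measured + rng.normal(0, 1.5, size=measured.size)
mae, (lo, hi) = model_accuracy_ci(simulated, measured)
print(f"MAE = {mae:.2f} MW, 95% CI = [{lo:.2f}, {hi:.2f}] MW")
```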
A psychometric evaluation of the Rorschach comprehensive system's perceptual thinking index.
Dao, Tam K; Prevatt, Frances
2006-04-01
In this study, we investigated evidence for reliability and validity of the Perceptual Thinking Index (PTI; Exner, 2000a, 2000b) among an adult inpatient population. We conducted reliability and validity analyses on 107 patients who met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; American Psychiatric Association, 2000) criteria for a schizophrenia-spectrum disorder (SSD) or mood disorder with no psychotic features (MD). Results provided support for interrater reliability as well as internal consistency of the PTI. Furthermore, the PTI was an effective index in differentiating SSD patients from patients diagnosed with an MD. Finally, the PTI demonstrated adequate diagnostic statistics that can be useful in the classification of patients diagnosed with SSD and MD. We discuss methodological issues, implications for assessment practice, and directions for future research.
McHugh, R Kathryn; Behar, Evelyn
2012-12-01
In his commentary on our previously published article "Readability of Self-Report Measures of Depression and Anxiety," J. Schinka (2012) argued for the importance of considering readability of patient materials and highlighted limitations of existing methodologies for this assessment. Schinka's commentary articulately described the weaknesses of readability assessment and emphasized the importance of the development of improved strategies for assessing readability to maximize the validity of self-report measures in applied settings. In our reply, we support and extend Schinka's argument, highlighting the importance of consideration of the range of factors (e.g., use of reverse-scored items) that may increase respondent difficulty with comprehension. Consideration of the readability of self-report symptom measures is critical to the validity of these measures in both clinical practice and research settings.
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
[Some critical remarks on standardised assessment instruments in nursing].
Bartholomeyczik, Sabine
2007-08-01
The use of standardised instruments in nursing has grown rapidly and can be seen as a symptom of the need for comprehensive nursing diagnostics. However, these instruments carry a risk of misuse if they are not critically evaluated. Published statements about tests of reliability and validity of an instrument are insufficient. First, the critical evaluation has to ask about the instrument's theoretical and content base: Is the instrument relevant for nursing, suitable for practice, and does it lead to nursing actions? Two examples of well-known instruments and the different ways they are used in nursing are discussed. Next, the instruments have to be questioned as "bodies with numbers". Studies on reliability and validity have to be evaluated as carefully as other empirical research. The sample, the suitability of agreement indicators (inter-rater reliability), and the type of and rationale for the tests have to be questioned. The same has to be done with tests of validity, which pose an even greater challenge. Methodological studies about these questions are missing; guidelines for test user qualifications need to be developed.
López, Diego M; Blobel, Bernd; Gonzalez, Carolina
2010-01-01
Requirements analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture was measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increase in completeness of 14.38% and an increase in validity of 16.63% when using HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development promises increased quality of the final HIS, which in turn suggests an indirect impact on patient care.
Lessons Learned From Methodological Validation Research in E-Epidemiology
Assmann, Karen; Andreeva, Valentina; Castetbon, Katia; Méjean, Caroline; Touvier, Mathilde; Salanave, Benoît; Deschamps, Valérie; Péneau, Sandrine; Fezeu, Léopold; Julia, Chantal; Allès, Benjamin; Galan, Pilar; Hercberg, Serge
2016-01-01
Background: Traditional epidemiological research methods exhibit limitations leading to a high logistical, human, and financial burden. The continued development of innovative digital tools has the potential to overcome many of the existing methodological issues. Nonetheless, Web-based studies remain relatively uncommon, partly due to persistent concerns about validity and generalizability. Objective: The objective of this viewpoint is to summarize findings from methodological studies carried out in the NutriNet-Santé study, a French Web-based cohort study. Methods: On the basis of previous findings from the NutriNet-Santé e-cohort (>150,000 participants are currently included), we synthesized e-epidemiological knowledge on sample representativeness, advantageous recruitment strategies, and data quality. Results: Overall, the reported findings support the usefulness of Web-based studies in overcoming common methodological deficiencies in epidemiological research, in particular with regard to data quality (eg, the concordance for body mass index [BMI] classification was 93%), reduced social desirability bias, and access to a wide range of participant profiles, including hard-to-reach subgroups such as young (12.30% [15,118/122,912], <25 years) and old (6.60% [8112/122,912], ≥65 years) people, the unemployed or homemakers (12.60% [15,487/122,912]), and people with low education (38.50% [47,312/122,912]). However, some selection bias remained (78.00% [95,871/122,912] of the participants were women, and 61.50% [75,590/122,912] had postsecondary education), which is an inherent aspect of cohort study inclusion; other specific types of bias may also have occurred. Conclusions: Given the rapidly growing access to the Internet across social strata, the recruitment of participants with diverse socioeconomic profiles and health risk exposures was highly feasible. Continued efforts concerning the identification of specific biases in e-cohorts and the collection of comprehensive and valid data are still needed. This summary of methodological findings from the NutriNet-Santé cohort may help researchers in the development of the next generation of high-quality Web-based epidemiological studies. PMID:27756715
Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles
Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe
2016-01-01
Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499
Study on a new chaotic bitwise dynamical system and its FPGA implementation
NASA Astrophysics Data System (ADS)
Wang, Qian-Xue; Yu, Si-Min; Guyeux, C.; Bahi, J.; Fang, Xiao-Le
2015-06-01
In this paper, the structure of a new chaotic bitwise dynamical system (CBDS) is described. Compared to our previous research work, it uses various random bitwise operations instead of only one. The chaotic behavior of CBDS is mathematically proven according to Devaney's definition, and its statistical properties are verified both for uniformity and by a comprehensive, reputed and stringent battery of tests called TestU01. Furthermore, a systematic methodology for developing parallel computations is proposed for the FPGA-based realization of this CBDS. Experiments finally validate the proposed systematic methodology. Project supported by the China Postdoctoral Science Foundation (Grant No. 2014M552175), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, Chinese Education Ministry, the National Natural Science Foundation of China (Grant No. 61172023), and the Specialized Research Foundation of Doctoral Subjects of Chinese Education Ministry (Grant No. 20114420110003).
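To make the idea of chaining varied bitwise operations concrete, the toy sketch below iterates a 32-bit state through several xorshift-style mixing steps chosen by a selector sequence; this is not the authors' CBDS, only a hedged illustration of the general construction.

```python
# A toy 32-bit bitwise iteration in the same spirit (xorshift-style mixing);
# NOT the paper's CBDS, only an illustration of chaining varied bitwise ops.
MASK = 0xFFFFFFFF

def bitwise_step(x, op_selector):
    """Apply one of several bitwise mixing operations chosen by op_selector."""
    if op_selector % 3 == 0:
        x ^= (x << 13) & MASK
    elif op_selector % 3 == 1:
        x ^= (x >> 17)
    else:
        x ^= (x << 5) & MASK
    return x & MASK

def orbit(seed, selectors, n):
    """Iterate the state n times, cycling through the selector sequence."""
    x = seed & MASK
    out = []
    for i in range(n):
        x = bitwise_step(x, selectors[i % len(selectors)])
        out.append(x)
    return out

print(orbit(0x2545F491, selectors=[0, 1, 2, 1], n=5))
```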
Estimating the Prevalence of Childhood Obesity in Alaska Using Partial, Nonrandom Measurement Data
Boles, Myde; Fink, Karol; Topol, Rebecca; Fenaughty, Andrea
2016-01-01
Although monitoring childhood obesity prevalence is critical for state public health programs to assess trends and the effectiveness of interventions, few states have comprehensive body mass index measurement systems in place. In some states, however, assorted school districts collect measurements on student height and weight as part of annual health screenings. To estimate childhood obesity prevalence in Alaska, we created a logistic regression model using such annual measurements along with public data on demographics and socioeconomic status. Our mixed-effects model-generated prevalence estimates validated well against weighted estimates, with 95% confidence intervals overlapping between methodologies among 7 of 8 participating school districts. Our methodology accounts for variation in school-level and student-level demographic factors across the state, and the approach we describe can be applied by other states that have existing nonrandom student measurement data to estimate childhood obesity prevalence. PMID:27010843
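A simplified, fixed-effects analogue of the approach described above can be sketched as follows: fit a logistic model on districts that collect height/weight screenings, then average predicted probabilities over a statewide roster of student demographics. All variable names and data here are illustrative assumptions, not the study's mixed-effects model.

```python
# Simplified fixed-effects analogue of the article's approach; variable names
# and coefficients are illustrative, not the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000  # students in districts with BMI measurements
age = rng.integers(5, 19, size=n)
low_income = rng.integers(0, 2, size=n)
X = np.column_stack([age, low_income])
p_true = 1 / (1 + np.exp(-(-2.2 + 0.05 * age + 0.6 * low_income)))
obese = rng.binomial(1, p_true)

model = LogisticRegression().fit(X, obese)

# Statewide roster (demographics only, no measurements):
roster = np.column_stack([rng.integers(5, 19, size=20000),
                          rng.integers(0, 2, size=20000)])
prevalence = model.predict_proba(roster)[:, 1].mean()
print(f"Estimated statewide childhood obesity prevalence: {prevalence:.1%}")
```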
Sydora, Beate C; Fast, Hilary; Campbell, Sandy; Yuksel, Nese; Lewis, Jacqueline E; Ross, Sue
2016-09-01
The Menopause-Specific Quality of Life (MENQOL) questionnaire was developed as a validated research tool to measure condition-specific QOL in early postmenopausal women. We conducted a comprehensive scoping review to explore the extent of MENQOL's use in research and clinical practice to assess its value in providing effective, adequate, and comparable participant assessment information. Thirteen biomedical and clinical databases were systematically searched with "menqol" as a search term to find articles using MENQOL or its validated derivative MENQOL-Intervention as investigative or clinical tools from 1996 to November 2014 inclusive. Review articles, conference abstracts, proceedings, dissertations, and incomplete trials were excluded. Additional articles were collected from references within key articles. Three independent reviewers extracted data reflecting study design, intervention, sample characteristics, MENQOL questionnaire version, modifications and language, recall period, and analysis detail. Data analyses included categorization and descriptive statistics. The review included 220 eligible papers of various study designs, covering 39 countries worldwide and using MENQOL translated into more than 25 languages. A variety of modifications to the original questionnaire were identified, including omission or addition of items and alterations to the validated methodological analysis. No papers were found that described MENQOL's use in clinical practice. Our study found an extensive and steadily increasing use of MENQOL in clinical and epidemiological research over 18 years postpublication. Our results stress the importance of proper reporting and validation of translations and variations to ensure outcome comparison and transparency of MENQOL's use. The value of MENQOL in clinical practice remains unknown.
Validation and Comprehension of Text Information: Two Sides of the Same Coin
ERIC Educational Resources Information Center
Richter, Tobias
2015-01-01
In psychological research, the comprehension of linguistic information and the knowledge-based assessment of its validity are often regarded as two separate stages of information processing. Recent findings in psycholinguistics and text comprehension research call this two-stage model into question. In particular, validation can affect…
Penrose's law: Methodological challenges and call for data.
Kalapos, Miklós Péter
The investigation of the relationship between the sizes of the mental health population and the prison population, outlined in Penrose's Law, has received renewed interest in recent decades. The problems that arise in the course of deinstitutionalization have repeatedly drawn attention to this issue. This article presents methodological challenges to the examination of Penrose's Law and retrospectively reviews historical data from empirical studies. A critical element of surveys is the sampling method; longitudinal studies seem appropriate here. The relationship between the number of psychiatric beds and the size of the prison population is inverse in most cases. However, a serious limitation is that almost all of the data were collected in countries historically belonging to a Christian or Jewish cultural community. Only very limited conclusions can be drawn from these sparse and non-comprehensive data: a reduction in the number of psychiatric beds seems to be accompanied by increases in the numbers of involuntary admissions and forensic treatments and an accumulation of mentally ill persons in prisons. A kind of transinstitutionalization is currently ongoing. A pragmatic balance between academic epidemiological numbers and cultural narratives should be found in order to confirm or refute the validity of Penrose's Law. Unless comprehensive research is undertaken, it is impossible to draw any real conclusion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Methods to Develop the Eye-tem Bank to Measure Ophthalmic Quality of Life.
Khadka, Jyoti; Fenwick, Eva; Lamoureux, Ecosse; Pesudovs, Konrad
2016-12-01
There is an increasing demand for high-standard, comprehensive, and reliable patient-reported outcome (PRO) instruments in all disciplines of health care, including ophthalmology and optometry. Over the past two decades, a plethora of PRO instruments have been developed to assess the impact of eye diseases and their treatments. Despite this large number of instruments, significant shortcomings exist in the measurement of ophthalmic quality of life (QoL). Most PRO instruments are short-form instruments designed for clinical use, which limits their content coverage, and they often poorly target any study population other than the one for which they were developed. Also, existing instruments are static, paper-and-pencil based, and not easily updated, leading to outdated and irrelevant item content. Scores obtained from different PRO instruments may not be directly comparable. These shortcomings can be addressed using item banking implemented with computer-adaptive testing (CAT). Therefore, we designed a multicenter project (the Eye-tem Bank project) to develop and validate such PROs to enable comprehensive measurement of ophthalmic QoL in eye diseases. Development of the Eye-tem Bank follows four phases: Phase I, Content Development; Phase II, Pilot Testing and Item Calibration; Phase III, Validation; and Phase IV, Evaluation. This project will deliver technologically advanced, comprehensive QoL PROs in the form of item banks implemented via a CAT system in eye diseases. Here, we present a detailed methodological framework of this project.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best-model criteria, as they all affect the accuracy and efficiency of the produced predictive models and can therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
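RRegrs itself is an R package built on caret, but its core loop — several regression methods compared under repeated 10-fold cross-validation with a common metric — can be sketched in Python as follows; the model set and data here are illustrative, not the package's defaults.

```python
# Analogous Python sketch of RRegrs' core idea: compare several regression
# methods under repeated 10-fold cross-validation on a common metric.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)

models = {
    "MLR": LinearRegression(),
    "Lasso": Lasso(alpha=0.1),
    "PLS": PLSRegression(n_components=5),
    "SVM": SVR(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name:6s} R2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```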
Eye-Tracking as a Tool in Process-Oriented Reading Test Validation
ERIC Educational Resources Information Center
Solheim, Oddny Judith; Uppstad, Per Henning
2011-01-01
The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…
DOT National Transportation Integrated Search
2010-12-01
This report documents the Safety Measurement System (SMS) methodology developed to support the Comprehensive Safety Analysis 2010 (CSA 2010) Initiative for the Federal Motor Carrier Safety Administration (FMCSA). The SMS is one of the major tools for...
Toward a Formal Model of the Design and Evolution of Software
1988-12-20
It should have the flexibility to support a variety of design methodologies, be comprehensive enough to encompass the gamut of software lifecycle activities, and be precise enough to provide the...
Sjögren, P; Ordell, S; Halling, A
2003-12-01
The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence was evaluated. Reporting of predetermined items relating to the methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located. 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.
Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonti, Roberta, E-mail: roberta.fonti@tum.de; Barthel, Rainer, E-mail: r.barthel@lrz.tu-muenchen.de; Formisano, Antonio, E-mail: antoform@unina.it
2015-12-31
Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly of rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to reproduce different "modes of damage" of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for the out-of-plane response of rubblework with respect to the experimental evidence. The case study of the L'Aquila district is discussed.
Psychometric properties of the Parents' Perception of Uncertainty in Illness Scale, Spanish version.
Suarez-Acuña, C E; Carvajal-Carrascal, G; Serrano-Gómez, M E
2018-03-27
To analyze the psychometric properties of the Parents' Perception of Uncertainty in Illness Scale, parents/children, adapted to Spanish. A descriptive methodological study involving the translation into Spanish of the Parents' Perception of Uncertainty in Illness Scale, parents/children, and analysis of its face validity, content validity, construct validity and internal consistency. The original English version of the scale was translated into Spanish and approved by its author. In the face validity assessment, six items with comprehension difficulties were identified; these were reviewed and adapted while keeping the scale's structure. The global content validity index from expert appraisal was 0.94. In the exploratory factor analysis, 3 dimensions were identified: ambiguity and lack of information, unpredictability, and lack of clarity, with a KMO=0.846, accounting for 91.5% of the explained variance. The internal consistency of the scale yielded a Cronbach alpha of 0.86, demonstrating a good level of correlation between items. The Spanish version of the "Parents' Perception of Uncertainty in Illness Scale" is a valid and reliable tool that can be used to determine the level of uncertainty of parents facing the illness of their children. Copyright © 2018 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
NASA Technical Reports Server (NTRS)
Foster, John V.; Ross, Holly M.; Ashley, Patrick A.
1993-01-01
Designers of the next-generation fighter and attack airplanes are faced with the requirements of good high-angle-of-attack maneuverability as well as efficient high speed cruise capability with low radar cross section (RCS) characteristics. As a result, they are challenged with the task of making critical design trades to achieve the desired levels of maneuverability and performance. This task has highlighted the need for comprehensive, flight-validated lateral-directional control power design guidelines for high angles of attack. A joint NASA/U.S. Navy study has been initiated to address this need and to investigate the complex flight dynamics characteristics and controls requirements for high-angle-of-attack lateral-directional maneuvering. A multi-year research program is underway which includes ground-based piloted simulation and flight validation. This paper will give a status update of this program that will include a program overview, description of test methodology and preliminary results.
XV-15 Low-Noise Terminal Area Operations Testing
NASA Technical Reports Server (NTRS)
Edwards, B. D.
1998-01-01
Test procedures related to XV-15 noise tests conducted by NASA-Langley and Bell Helicopter Textron, Inc. are discussed. The tests, which took place during October and November 1995 near Waxahachie, Texas, documented the noise signature of the XV-15 tilt-rotor aircraft at a wide variety of flight conditions. The stated objectives were to: (1) provide a comprehensive acoustic database for NASA and U.S. industry; (2) validate noise prediction methodologies; and (3) develop and demonstrate low-noise flight profiles. The test consisted of two distinct phases. Phase 1 provided an acoustic database for validating analytical noise prediction techniques; Phase 2 directly measured noise contour information at a broad range of operating profiles, with emphasis on minimizing 'approach' noise. This report is limited to a documentation of the test procedures, flight conditions, microphone locations, meteorological conditions, and test personnel used in the test. The acoustic results are not included.
The EuroPrevall outpatient clinic study on food allergy: background and methodology.
Fernández-Rivas, M; Barreales, L; Mackie, A R; Fritsche, P; Vázquez-Cortés, S; Jedrzejczak-Czechowicz, M; Kowalski, M L; Clausen, M; Gislason, D; Sinaniotis, A; Kompoti, E; Le, T-M; Knulst, A C; Purohit, A; de Blay, F; Kralimarkova, T; Popov, T; Asero, R; Belohlavkova, S; Seneviratne, S L; Dubakiene, R; Lidholm, J; Hoffmann-Sommergruber, K; Burney, P; Crevel, R; Brill, M; Fernández-Pérez, C; Vieths, S; Clare Mills, E N; van Ree, R; Ballmer-Weber, B K
2015-05-01
The EuroPrevall project aimed to develop effective management strategies in food allergy through a suite of interconnected studies and a multidisciplinary integrated approach. To address some of the gaps in food allergy diagnosis, allergen risk management and socio-economic impact and to complement the EuroPrevall population-based surveys, a cross-sectional study in 12 outpatient clinics across Europe was conducted. We describe the study protocol. Patients referred for immediate food adverse reactions underwent a consistent and standardized allergy work-up that comprised collection of medical history; assessment of sensitization to 24 foods, 14 inhalant allergens and 55 allergenic molecules; and confirmation of clinical reactivity and food thresholds by standardized double-blind placebo-controlled food challenges (DBPCFCs) to milk, egg, fish, shrimp, peanut, hazelnut, celeriac, apple and peach. A standardized methodology for a comprehensive evaluation of food allergy was developed and implemented in 12 outpatient clinics across Europe. A total of 2121 patients (22.6% <14 years) reporting 8257 reactions to foods were studied, and 516 DBPCFCs were performed. This is the largest multicentre European case series in food allergy, in which subjects underwent a comprehensive, uniform and standardized evaluation including DBPCFC, by a methodology which is made available for further studies in food allergy. The analysis of this population will provide information on the different phenotypes of food allergy across Europe, will allow to validate novel in vitro diagnostic tests, to establish threshold values for major allergenic foods and to analyse the socio-economic impact of food allergy. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Lewis, Cara C; Stanick, Cameo F; Martinez, Ruben G; Weiner, Bryan J; Kim, Mimi; Barwick, Melanie; Comtois, Katherine A
2015-01-08
Identification of psychometrically strong instruments for the field of implementation science is a high priority, underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project addresses these limitations by identifying and applying a unique methodology to conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field's most widely used frameworks, adopting a systematic search process (using standard search strings), and engaging an international team of experts to assess the full range of psychometric criteria (reliability, construct and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized online open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes. The seven stages of the review, synthesis, and evaluation methodology include (1) setting the scope for the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) literature review of specific instruments, (5) development of evidence-based assessment rating criteria, (6) data extraction and rating of instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) creation of a website repository. To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (48 in total including subconstructs) that are relevant to implementation science. Despite numerous constructs having more than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold-standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.
Prisciandaro, James J.; Roberts, John E.
2011-01-01
Background: Although psychiatric diagnostic systems have conceptualized mania as a discrete phenomenon, appropriate latent structure investigations testing this conceptualization are lacking. In contrast to these diagnostic systems, several influential theories of mania have suggested a continuous conceptualization. The present study examined whether mania has a continuous or discrete latent structure using a comprehensive approach including taxometric, information-theoretic latent distribution modeling (ITLDM), and predictive validity methodologies in the Epidemiologic Catchment Area (ECA) study. Methods: Eight dichotomous manic symptom items were submitted to a variety of latent structural analyses, including factor analyses, taxometric procedures, and ITLDM, in 10,105 ECA community participants. Additionally, a variety of continuous and discrete models of mania were compared in terms of their relative abilities to predict outcomes (i.e., health service utilization, internalizing and externalizing disorders, and suicidal behavior). Results: Taxometric and ITLDM analyses consistently supported a continuous conceptualization of mania. In ITLDM analyses, a continuous model of mania demonstrated 6.52:1 odds over the best-fitting latent class model of mania. Factor analyses suggested that the continuous structure of mania was best represented by a single latent factor. Predictive validity analyses demonstrated a consistently superior ability of continuous models of mania relative to discrete models. Conclusions: The present study provided three independent lines of support for a continuous conceptualization of mania. The implications of a continuous model of mania are discussed. PMID:20507671
Validity of Highlighting on Text Comprehension
NASA Astrophysics Data System (ADS)
So, Joey C. Y.; Chan, Alan H. S.
2009-10-01
In this study, 38 university students were tested with a Chinese reading task under different task conditions to determine the effects of highlighting and its validity on comprehension performance for Chinese reading on a light-emitting diode (LED) display. Four levels of validity (0%, 33%, 67% and 100%) and a control condition with no highlighting were tested. Each subject performed all five experimental conditions, in which different passages were read and comprehended. The results showed that the condition with 100% highlighting validity produced better comprehension performance than the other validity levels and the condition with no highlighting. The comprehension score of the no-highlighting condition was also somewhat lower than those of the highlighting conditions with distractors, though the difference was not significant.
Development of an interprofessional lean facilitator assessment scale.
Bravo-Sanchez, Cindy; Dorazio, Vincent; Denmark, Robert; Heuer, Albert J; Parrott, J Scott
2018-05-01
High reliability is important for optimising quality and safety in healthcare organisations. Reliability efforts include interprofessional collaborative practice (IPCP) and Lean quality/process improvement strategies, which require skilful facilitation. Currently, no validated Lean facilitator assessment tool for interprofessional collaboration exists. This article describes the development and pilot evaluation of such a tool: the Interprofessional Lean Facilitator Assessment Scale (ILFAS), which measures both technical and 'soft' skills not measured by other instruments. The ILFAS was developed using methodologies and principles from Lean/Shingo, IPCP, metacognition research and Bloom's Taxonomy of Learning Domains. A panel of experts confirmed the initial face validity of the instrument. Researchers independently assessed five facilitators during six Lean sessions. Analysis included quantitative evaluation of rater agreement. Overall inter-rater agreement on the assessment of facilitator performance was high (92%), and discrepancies in the agreement statistics were analysed. Face and content validity were further established, and usability was evaluated, through primary stakeholder post-pilot feedback, which uncovered minor concerns and led to revision of the tool. The ILFAS appears comprehensive in the assessment of facilitator knowledge, skills, and abilities, and may be useful in discriminating between facilitators of different skill levels. Further study is needed to explore instrument performance and validity.
The Research and Evaluation of Serious Games: Toward a Comprehensive Methodology
ERIC Educational Resources Information Center
Mayer, Igor; Bekebrede, Geertje; Harteveld, Casper; Warmelink, Harald; Zhou, Qiqi; van Ruijven, Theo; Lo, Julia; Kortmann, Rens; Wenzler, Ivo
2014-01-01
The authors present the methodological background to and underlying research design of an ongoing research project on the scientific evaluation of serious games and/or computer-based simulation games (SGs) for advanced learning. The main research questions are: (1) what are the requirements and design principles for a comprehensive social…
Validation and Comprehension: An Integrated Overview
ERIC Educational Resources Information Center
Kendeou, Panayiota
2014-01-01
In this article, I review and discuss the work presented in this special issue while focusing on a number of issues that warrant further investigation in validation research. These issues pertain to the nature of the validation processes, the processes and mechanisms that support validation during comprehension, the factors that influence…
Lobo, Daniel; Morokuma, Junji; Levin, Michael
2016-09-01
Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration as hnf4. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Diago, Maria P.; Fernández-Novales, Juan; Gutiérrez, Salvador; Marañón, Miguel; Tardaguila, Javier
2018-01-01
Assessing water status and optimizing irrigation is of utmost importance in most winegrowing countries, as grapevine vegetative growth, yield, and grape quality can be impaired under certain water stress situations. Conventional plant-based methods for water status monitoring are either destructive or time- and labor-demanding, and therefore unsuited to detecting the spatial variation of moisture content within a vineyard plot. In this context, this work aims at the development and comprehensive validation of a novel, non-destructive methodology to assess the vineyard water status distribution using on-the-go, contactless, near infrared (NIR) spectroscopy. Plant water status prediction models were built and extensively validated using stem water potential (ψs) as the gold standard. Predictive models were developed from a vast number of measurements, acquired on 15 dates under diverse environmental conditions, at two different spatial scales, on both sides of vertical shoot-positioned canopies, over two consecutive seasons. Different cross-validation strategies were also tested and compared. Predictive models built from east-acquired spectra yielded the best performance indicators in both seasons, with determination coefficients of prediction (RP2) ranging from 0.68 to 0.85, and sensitivity (expressed as prediction root mean square error) between 0.131 and 0.190 MPa, regardless of the spatial scale. These predictive models were implemented to map the spatial variability of the vineyard water status at two different dates, and provided useful, practical information to help delineate specific irrigation schedules. The performance of this on-the-go spectral solution, and the large amount of data it provides, facilitate the exploitation of this non-destructive technology to monitor and map the vineyard water status variability with high spatial and temporal resolution, in the context of precision and sustainable viticulture. PMID:29441086
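As a rough sketch of how such calibration models are commonly built for NIR spectra, partial least squares (PLS) regression with a held-out prediction set is a standard choice; the abstract does not specify the study's exact pipeline, and the spectra and ψs values below are synthetic placeholders:

```python
# Minimal PLS calibration sketch for stem water potential from NIR spectra,
# reporting RP2 and RMSEP as in the abstract. Data are synthetic; the real
# study used on-the-go field spectra acquired on both canopy sides.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 150))            # 200 spectra x 150 wavelengths
y = X[:, :5].sum(axis=1) * -0.05 - 0.9     # placeholder psi_s values (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

print("RP2   =", r2_score(y_te, y_hat))
print("RMSEP =", np.sqrt(mean_squared_error(y_te, y_hat)), "MPa")
```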
Skopp, Nancy A; Smolenski, Derek J; Schwesinger, Daniel A; Johnson, Christopher J; Metzger-Abamukong, Melinda J; Reger, Mark A
2017-06-01
Accurate knowledge of the vital status of individuals is critical to the validity of mortality research. National Death Index (NDI) and NDI-Plus are comprehensive epidemiological resources for mortality ascertainment and cause of death data that require additional user validation. Currently, there is a gap in methods to guide validation of NDI search results rendered for active duty service members. The purpose of this research was to adapt and evaluate the CDC National Program of Cancer Registries (NPCR) algorithm for mortality ascertainment in a large military cohort. We adapted and applied the NPCR algorithm to a cohort of 7088 service members on active duty at the time of death at some point between 2001 and 2009. We evaluated NDI validity and NDI-Plus diagnostic agreement against the Department of Defense's Armed Forces Medical Examiner System (AFMES). The overall sensitivity of the NDI to AFMES records after the application of the NPCR algorithm was 97.1%. Diagnostic estimates of measurement agreement between the NDI-Plus and the AFMES cause of death groups were high. The NDI and NDI-Plus can be successfully used with the NPCR algorithm to identify mortality and cause of death among active duty military cohort members who die in the United States. Published by Elsevier Inc.
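For illustration, the sensitivity figure quoted above is simply the fraction of gold-standard (AFMES-confirmed) deaths that the NDI search, after applying the NPCR algorithm, correctly identified; a toy sketch with invented match flags:

```python
# Hypothetical sketch: sensitivity of NDI matches against a gold standard.
# The flags are invented; 1 = NDI identified the AFMES-confirmed death.
ndi_match = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]     # one flag per decedent

sensitivity = sum(ndi_match) / len(ndi_match)  # TP / (TP + FN)
print(f"Sensitivity: {sensitivity:.1%}")
```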
Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems
2015-12-01
Dissertation by Sang M. Sok, December 2015; distribution is unlimited. The work stresses the importance of the conceptual model (CoM); the improved conceptual model methodology (ICoMM) is developed in support of improving the structure of the CoM for both face and…
Bio-markers: traceability in food safety issues.
Raspor, Peter
2005-01-01
Research and practice are focusing on the development, validation and harmonization of technologies and methodologies to ensure a complete traceability process throughout the food chain. The main goals are: scale-up, implementation and validation of methods in whole food chains, assurance of authenticity, validity of labelling, and application of HACCP (hazard analysis and critical control point) to the entire food chain. The current review summarizes the scientific and technological basis for ensuring complete traceability. Tracing and tracking (traceability) of foods are complex processes because of the variety of (bio)markers, technical solutions and circumstances in the different technologies that produce various foods (processed, semi-processed or raw). Since food is produced for human or animal consumption, we need suitable markers that are stable and traceable all along the production chain. Specific biomarkers can have a function both in technology and in nutrition. Such an approach would make this development faster and more comprehensive, and would make it possible to monitor food effects with the same set of biomarkers in the consumer. This would help to develop and implement food safety standards based on the real physiological function of particular food components.
Van Royen, Paul; Beyer, Martin; Chevallier, Patrick; Eilat-Tsanani, Sophia; Lionis, Christos; Peremans, Lieve; Petek, Davorina; Rurik, Imre; Soler, Jean Karl; Stoffers, Henri E J H; Topsever, Pinar; Ungan, Mehmet; Hummers-Pradier, Eva
2010-06-01
The recently published 'Research Agenda for General Practice/Family Medicine and Primary Health Care in Europe' summarizes the evidence relating to the core competencies and characteristics of the Wonca Europe definition of GP/FM, and its implications for general practitioners/family doctors, researchers and policy makers. The European Journal of General Practice publishes a series of articles based on this document. In a first article, the background, objectives, and methodology were discussed. In a second article, the results for the two core competencies 'primary care management' and 'community orientation' were presented. This article reflects on the three core competencies that deal with person-related aspects of GP/FM, i.e. 'person centred care', 'comprehensive approach' and 'holistic approach'. Though there is an important body of opinion papers and (non-systematic) reviews, all person-related aspects remain poorly defined and researched. Validated instruments to measure these competencies are lacking. Concerning patient-centredness, most research has examined patient and doctor preferences and experiences. Studies on comprehensiveness mostly focus on prevention or care of specific diseases. For all domains, limited research has been conducted on their implications or outcomes.
Comprehensive analysis of line-edge and line-width roughness for EUV lithography
NASA Astrophysics Data System (ADS)
Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai
2017-03-01
Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in the semiconductor industry are primarily driven by optical imaging on photosensitive polymeric materials (resists). The quality of pattern transfer can be assessed by quantifying multiple parameters, such as feature size uniformity (CD), placement, roughness, and sidewall angles. Roughness in features primarily corresponds to variation of line edge or line width and has gained considerable significance, particularly because shrinking feature sizes are now of the same order as the variations themselves. This has caused downstream processes (reactive ion etch (RIE), chemical mechanical polish (CMP), etc.) to reconsider their respective tolerance levels. A very important aspect of this work is the relevance of roughness metrology from pattern formation at resist through subsequent processes, particularly electrical validity. A major drawback of the current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes, which affects material selection at various unit processes. In this work we present a comprehensive assessment of line-edge and line-width roughness at multiple lithographic transfer processes. To simulate the effect of roughness, a pattern was designed with periodic jogs on the edges of lines, with varying amplitudes and frequencies. Numerous methodologies have been proposed to analyze roughness, and in this work we apply them to the programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a methodology that quantifies roughness with relevance across downstream processes.
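The conventional sigma metric the authors criticize is, in its usual form, three times the standard deviation of edge-position (LER) or line-width (LWR) residuals sampled along the line; a minimal sketch, with synthetic edges standing in for the programmed-roughness structures:

```python
# Minimal sketch of conventional 3-sigma LER/LWR metrics on synthetic
# edge profiles with programmed periodic jogs, as in the test pattern.
import numpy as np

y = np.arange(0.0, 500.0, 2.0)                        # sample positions along line (nm)
jogs = 1.5 * np.sign(np.sin(2 * np.pi * y / 100.0))   # programmed jog profile
left = 0.0 + jogs                                     # left edge positions (nm)
right = 20.0 + np.random.default_rng(1).normal(0.0, 0.8, y.size)

ler_left = 3.0 * np.std(left)                         # line-edge roughness, 3-sigma
width = right - left
lwr = 3.0 * np.std(width)                             # line-width roughness, 3-sigma
print(f"LER(left) = {ler_left:.2f} nm, LWR = {lwr:.2f} nm")
```

A single sigma discards frequency content, which is one reason it transfers poorly across downstream processes; power-spectral-density analyses of the same residuals are a common refinement.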
Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan
2015-01-01
Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack of micro-CT image metrics suitable for the comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for the profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast-agent-free 3D high-resolution reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system was defined on the image space after curved multiplanar reformation of the coronary vessel for comprehensive visualization of the main vessel features, such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms was implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, was validated by histology. Morphometric indices were derived from histological data corresponding to several micro-CT slices (double-observer evaluation at different coronary ATS stages) and highly significant correlations (R2 > 0.90) were found. Semi-automatic morphometry was validated by double-observer manual morphometry of micro-CT slices, and highly significant correlations were again found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free, full-length coronary wall profiling, able to assist atherosclerotic vessel morphometry in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.
Child maltreatment prevention: a systematic review of reviews.
Mikton, Christopher; Butchart, Alexander
2009-05-01
To synthesize recent evidence from systematic and comprehensive reviews on the effectiveness of universal and selective child maltreatment prevention interventions, evaluate the methodological quality of the reviews and outcome evaluation studies they are based on, and map the geographical distribution of the evidence. A systematic review of reviews was conducted. The quality of the systematic reviews was evaluated with a tool for the assessment of multiple systematic reviews (AMSTAR), and the quality of the outcome evaluations was assessed using indicators of internal validity and of the construct validity of outcome measures. The review focused on seven main types of interventions: home visiting, parent education, child sex abuse prevention, abusive head trauma prevention, multi-component interventions, media-based interventions, and support and mutual aid groups. Four of the seven - home-visiting, parent education, abusive head trauma prevention and multi-component interventions - show promise in preventing actual child maltreatment. Three of them - home visiting, parent education and child sexual abuse prevention - appear effective in reducing risk factors for child maltreatment, although these conclusions are tentative due to the methodological shortcomings of the reviews and outcome evaluation studies they draw on. An analysis of the geographical distribution of the evidence shows that outcome evaluations of child maltreatment prevention interventions are exceedingly rare in low- and middle-income countries and make up only 0.6% of the total evidence base. Evidence for the effectiveness of four of the seven main types of interventions for preventing child maltreatment is promising, although it is weakened by methodological problems and paucity of outcome evaluations from low- and middle-income countries.
Methods of assessment of the post-exercise cardiac autonomic recovery: A methodological review.
Peçanha, Tiago; Bartels, Rhenan; Brito, Leandro C; Paula-Ribeiro, Marcelle; Oliveira, Ricardo S; Goldberger, Jeffrey J
2017-01-15
The analysis of post-exercise cardiac autonomic recovery is a practical clinical tool for the assessment of cardiovascular health. A reduced heart rate recovery - an indicator of autonomic dysfunction - has been found in a broad range of cardiovascular diseases and has been associated with increased risks of both cardiac and all-cause mortality. For this reason, over the last several years, non-invasive methods for the assessment of cardiac autonomic recovery after exercise - based either on heart rate recovery or on heart rate variability indices - have been proposed. However, for the proper implementation of such methods in daily clinical practice, their clinical validity, physiologic meaning, mathematical formulation and reproducibility need to be better addressed. Therefore, the aim of this methodological review is to present some of the most widely employed methods of post-exercise cardiac autonomic recovery assessment in the literature and to comprehensively discuss their strengths and weaknesses. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
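Two of the most widely used indices in this literature are heart rate recovery at 60 s (HRR60: peak exercise heart rate minus the heart rate one minute into recovery) and short-term post-exercise heart rate variability such as RMSSD; a sketch with invented data:

```python
# Sketch of two standard post-exercise autonomic recovery indices:
# HRR60 (bpm) and RMSSD (ms) over a window of post-exercise R-R intervals.
# All values are invented for illustration.
import numpy as np

hr_peak, hr_60s = 172.0, 141.0       # heart rates in beats per minute
hrr60 = hr_peak - hr_60s             # heart rate recovery at 60 s

rr = np.array([420.0, 435, 450, 455, 470, 480, 492, 500])  # R-R intervals (ms)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # root mean square of successive differences
print(f"HRR60 = {hrr60:.0f} bpm, RMSSD = {rmssd:.1f} ms")
```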
Maturity Models of Healthcare Information Systems and Technologies: a Literature Review.
Carvalho, João Vidal; Rocha, Álvaro; Abreu, António
2016-06-01
Maturity models are instruments that facilitate organizational management, including management of the information systems function, and they are also used in hospitals. The objective of this article is to identify and compare the maturity models for the management of information systems and technologies (IST) in healthcare. For each maturity model, the development and validation methodology is identified, as well as the scope, stages and their characteristics by dimensions or influence factors. This study revealed the need to develop a maturity model based on a holistic approach, one that includes a comprehensive set of influencing factors so as to reach all areas and subsystems of healthcare organizations.
Speciation of adsorbates on surface of solids by infrared spectroscopy and chemometrics.
Vilmin, Franck; Bazin, Philippe; Thibault-Starzyk, Frédéric; Travert, Arnaud
2015-09-03
Speciation, i.e. the identification and quantification of surface species on heterogeneous surfaces, by infrared spectroscopy is important in many fields but remains a challenging task when facing the strongly overlapped spectra of multiple adspecies. Here, we propose a new methodology combining state-of-the-art instrumental developments for quantitative infrared spectroscopy of adspecies with chemometrics tools, mainly a novel data processing algorithm called SORB-MCR (SOft modeling by Recursive Based-Multivariate Curve Resolution) and multivariate calibration. After formal transposition of the general linear mixture model to adsorption spectral data, the main issues, i.e. the validity of the Beer-Lambert law and rank deficiency problems, are theoretically discussed. The methodology is then demonstrated through application to two case studies, each characterized by a specific type of rank deficiency: (i) speciation of physisorbed water species over a hydrated silica surface, and (ii) speciation (chemisorption and physisorption) of a silane probe molecule over a dehydrated silica surface. In both cases, we demonstrate the relevance of this approach, which leads to a thorough surface speciation based on comprehensive and fully interpretable multivariate quantitative models. Limitations and drawbacks of the methodology are also underlined. Copyright © 2015 Elsevier B.V. All rights reserved.
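The general linear mixture model referred to above is the bilinear decomposition D ≈ C Sᵀ, with rows of D the measured spectra, C the contributions (e.g., coverages) and S the pure-component spectra. SORB-MCR itself is not reproduced here; the sketch below shows a generic alternating-least-squares MCR step under non-negativity, a standard relative of the paper's algorithm, on synthetic data:

```python
# Generic MCR-ALS sketch (not the paper's SORB-MCR algorithm): alternately
# solve D ~= C @ S.T for contributions C and pure spectra S under
# non-negativity constraints. All data here are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
S_true = np.abs(rng.normal(size=(80, 2)))   # 2 pure spectra over 80 channels
C_true = np.abs(rng.normal(size=(30, 2)))   # 2 components in 30 mixtures
D = C_true @ S_true.T                       # mixture spectra (Beer-Lambert)

C = np.abs(rng.normal(size=(30, 2)))        # random initialization
for _ in range(100):
    S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])  # spectra step
    C = np.array([nnls(S, D[i, :])[0] for i in range(D.shape[0])])  # contribution step

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative residual after ALS: {residual:.2e}")
```

Rank deficiency of the kind the paper discusses shows up here as a contribution matrix with linearly dependent columns, which plain ALS cannot resolve without additional constraints.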
Burris, Silas E.; Brown, Danielle D.
2014-01-01
Narratives, also called stories, can be found in conversations, children's play interactions, reading material, and television programs. From infancy to adulthood, narrative comprehension processes interpret events and inform our understanding of physical and social environments. These processes have been extensively studied to ascertain the multifaceted nature of narrative comprehension. From this research we know that the three overlapping processes proposed by the constructionist paradigm (i.e., knowledge integration, goal structure understanding, and causal inference generation) are necessary for narrative comprehension, that narrative comprehension has a predictive relationship with children's later reading performance, and that comprehension processes are generalizable to other contexts. Much of the previous research has emphasized internal and predictive validity, thus limiting the generalizability of previous findings. We are concerned that these limitations may exclude underrepresented populations from the benefits and implications identified by early research on comprehension processes. This review identifies gaps in the extant literature regarding external validity and argues for increased emphasis on externally valid research. We highlight the limited research on narrative comprehension processes in children from low-income and minority populations, and argue for changes in comprehension assessments. Specifically, we argue that both on- and off-line assessments should be used across various narrative types (e.g., picture books, televised narratives) with traditionally underserved and underrepresented populations. We propose that increasing the generalizability of research on narrative comprehension processes can inform persistent reading achievement gaps and have practical implications for how children learn from narratives. PMID:24659973
A Low Vision Reading Comprehension Test.
ERIC Educational Resources Information Center
Watson, G. R.; And Others
1996-01-01
Fifty adults (ages 28-86) with macular degeneration were given the Low Vision Reading Comprehension Assessment (LVRCA) to test its reliability and validity in evaluating the reading comprehension of those with vision impairments. The LVRCA was found to take only nine minutes to administer and was a valid and reliable tool. (CR)
Monitoring Local Comprehension Monitoring in Sentence Reading
ERIC Educational Resources Information Center
Vorstius, Christian; Radach, Ralph; Mayer, Michael B.; Lonigan, Christopher J.
2013-01-01
on ways to improve children's reading comprehension. However, processes and mechanisms underlying this skill are currently not well understood. This article describes one of the first attempts to study comprehension monitoring using eye-tracking methodology. Students in fifth…
Lee, Lay Wah
2008-06-01
Malay is an alphabetic language with transparent orthography. A Malay reading-related assessment battery which was conceptualised based on the International Dyslexia Association definition of dyslexia was developed and validated for the purpose of dyslexia assessment. The battery consisted of ten tests: Letter Naming, Word Reading, Non-word Reading, Spelling, Passage Reading, Reading Comprehension, Listening Comprehension, Elision, Rapid Letter Naming and Digit Span. Content validity was established by expert judgment. Concurrent validity was obtained using the schools' language tests as criterion. Evidence of predictive and construct validity was obtained through regression analyses and factor analyses. Phonological awareness was the most significant predictor of word-level literacy skills in Malay, with rapid naming making independent secondary contributions. Decoding and listening comprehension made separate contributions to reading comprehension, with decoding as the more prominent predictor. Factor analysis revealed four factors: phonological decoding, phonological naming, comprehension and verbal short-term memory. In conclusion, despite differences in orthography, there are striking similarities in the theoretical constructs of reading-related tasks in Malay and in English.
Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors
NASA Technical Reports Server (NTRS)
Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele
2010-01-01
This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.
Physician supply forecast: better than peering in a crystal ball?
Roberfroid, Dominique; Leonard, Christian; Stordeur, Sabine
2009-01-01
Background: Anticipating physician supply to tackle future health challenges is a crucial but complex task for policy planners. A number of forecasting tools are available, but the methods, advantages and shortcomings of such tools are not straightforward and not always well appraised. This paper therefore has two objectives: to present a typology of existing forecasting approaches and to analyse the methodology-related issues. Methods: A literature review was carried out in the electronic databases Medline-Ovid, Embase and ERIC. Concrete examples of planning experiences in various countries were analysed. Results: Four main forecasting approaches were identified. The supply projection approach defines the necessary inflow to maintain or to reach in the future an arbitrary predefined level of service offer. The demand-based approach estimates the quantity of health care services used by the population in the future to project physician requirements. The needs-based approach involves defining and predicting health care deficits so that they can be addressed by an adequate workforce. Benchmarking health systems with similar populations and health profiles is the last approach. These different methods can be combined to perform a gap analysis. The methodological challenges of such projections are numerous: most often static models are used and their uncertainty is not assessed; valid and comprehensive data to feed into the models are often lacking; and a rapidly evolving environment affects the likelihood of projection scenarios. As a result, the internal and external validity of the projections included in our review appeared limited. Conclusion: There is no single accepted approach to forecasting physician requirements. The value of projections lies in their utility in identifying the current and emerging trends to which policy-makers need to respond. A genuine gap analysis, an effective monitoring of key parameters and comprehensive workforce planning are key elements to improving the usefulness of physician supply projections. PMID:19216772
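The supply projection approach lends itself to a simple stock-and-flow illustration: next year's physician stock equals this year's stock plus inflows (graduates, immigration) minus outflows (retirement, attrition), iterated to the target year and compared against the predefined service-offer level. A minimal sketch with invented parameters, not drawn from any of the reviewed models:

```python
# Minimal stock-and-flow sketch of the supply projection approach.
# All counts and rates are invented for illustration.
physicians = 10_000.0
inflow_per_year = 350.0      # new graduates plus net immigration
attrition_rate = 0.025       # retirement, emigration, deaths

for year in range(2025, 2031):
    physicians += inflow_per_year - attrition_rate * physicians
    print(year, round(physicians))
```

Static iterations like this one illustrate exactly what the review flags: they carry no uncertainty bands, which is why the authors call for monitoring key parameters rather than trusting point projections.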
ERIC Educational Resources Information Center
Nunnery, John A.; Ross, Steven M.; Bol, Linda
2008-01-01
This study reports the results of a validation study of the Comprehensive School Restructuring Teacher Questionnaire (CSRTQ) and the School Observation Measure (SOM), which are intended for use in evaluating comprehensive school reform efforts. The CSRTQ, which putatively measures five factors related to school restructuring (internal focus,…
Santos, Sandra; Cadime, Irene; Viana, Fernanda L; Chaves-Sousa, Séli; Gayo, Elena; Maia, José; Ribeiro, Iolanda
2017-02-01
Reading comprehension assessment should rely on valid instruments that enable adequate conclusions to be taken regarding students' reading comprehension performance. In this article, two studies were conducted to collect validity evidence for the vertically scaled forms of two Tests of Reading Comprehension for Portuguese elementary school students in the second to fourth grades, one with narrative texts (TRC-n) and another with expository ones (TRC-e). Two samples of 950 and 990 students participated in Study 1, the study of the dimensionality of the TRC-n and TRC-e forms, respectively. Confirmatory factor analyses provided evidence of an acceptable fit for the one-factor solution for all test forms. Study 2 included 218 students to collect criterion-related validity. The scores obtained in each of the test forms were significantly correlated with the ones obtained in other reading comprehension measures and with the results obtained in oral reading fluency, vocabulary and working memory tests. Evidence suggests that the test forms are valid measures of reading comprehension. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Kon, Alexander A.; Klug, Michael
2010-01-01
Ethicists recommend that investigators assess subjects' comprehension prior to accepting their consent as valid. Because children represent an at-risk population, ensuring adequate comprehension in pediatric research is vital. We surveyed all corresponding authors of research articles published over a six-month period in five leading adult and pediatric journals. Our goal was to assess how often subjects' comprehension or decisional capacity was assessed in the consent process, whether there was any difference between adult and pediatric research projects, and the rate at which investigators use formal or validated tools to assess capacity. Responses from 102 authors were analyzed (response rate 56%). Approximately two-thirds of respondents stated that they assessed comprehension or decisional capacity prior to accepting consent, and we found no difference between adult and pediatric researchers. Nine investigators used a formal questionnaire, and three used a validated tool. These findings suggest that fewer investigators than expected assess comprehension and decisional capacity, and that the use of standardized and validated tools is the exception rather than the rule. PMID:19385838
Knowledge Activation, Integration, and Validation during Narrative Text Comprehension
ERIC Educational Resources Information Center
Cook, Anne E.; O'Brien, Edward J.
2014-01-01
Previous text comprehension studies using the contradiction paradigm primarily tested assumptions of the activation mechanism involved in reading. However, the nature of the contradiction in such studies relied on validation of information in readers' general world knowledge. We directly tested this validation process by varying the strength of…
Evaluative methodology for comprehensive water quality management planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, H. L.
Computer-based evaluative methodologies have been developed to provide for the analysis of coupled phenomena associated with natural resource comprehensive planning requirements. Provisions for planner/computer interaction have been included. Each of the simulation models developed is described in terms of its coded procedures. An application of the models for water quality management planning is presented; and the data requirements for each of the models are noted.
Vasanth, Muthuraman; Muralidhar, Moturi; Saraswathy, Ramamoorthy; Nagavel, Arunachalam; Dayal, Jagabattula Syama; Jayanthi, Marappan; Lalitha, Natarajan; Kumararaja, Periyamuthu; Vijayan, Koyadan Kizhakkedath
2016-12-01
Global warming/climate change is the greatest environmental threat of our time. The rapidly developing aquaculture sector is an anthropogenic activity whose contribution to global warming is little understood, and estimation of greenhouse gas (GHG) emissions from aquaculture ponds is a key practice in predicting the impact of aquaculture on global warming. A comprehensive methodology was developed for the sampling and simultaneous analysis of the GHGs carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) from aquaculture ponds. The GHG fluxes were collected using a cylindrical acrylic chamber, an air pump, and Tedlar bags. A cylindrical acrylic floating chamber was fabricated to collect the GHGs emanating from the surface of aquaculture ponds. The sampling methodology was standardized and in-house method validation was established by demonstrating linearity, accuracy, precision, and specificity. The GHG flux samples were found to be stable for 3 days when stored at 10 ± 2 °C. The developed methodology was used to quantify GHGs in Pacific white shrimp (Penaeus vannamei) and black tiger shrimp (Penaeus monodon) culture ponds over a period of 4 months. The rate of emission of carbon dioxide was found to be much greater than that of the other two GHGs. Average GHG emissions (g ha⁻¹ day⁻¹) during the culture were comparatively high in P. vannamei culture ponds.
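With a static floating chamber of this kind, the flux is typically derived from the fitted rate of concentration rise inside the chamber, scaled by the chamber volume-to-area ratio and converted from a mixing ratio to a molar basis with the ideal gas law. A hedged sketch follows; the chamber dimensions, temperature and slope are illustrative, not the study's values:

```python
# Hedged sketch of a static-chamber GHG flux calculation:
# flux = dC/dt * (V/A) * P/(R*T) * M. All numbers are illustrative.
R = 8.314             # J mol-1 K-1
P = 101_325.0         # Pa
T = 303.15            # K, assumed air temperature at the pond surface
V, A = 0.030, 0.071   # assumed chamber volume (m3) and footprint area (m2)
M_co2 = 44.01         # g mol-1

slope_ppm_per_h = 12.0                    # fitted CO2 rise inside the chamber
mol_air_per_m3 = P / (R * T)              # ideal gas law
flux_umol = slope_ppm_per_h * (V / A) * mol_air_per_m3  # umol CO2 m-2 h-1
flux_mg = flux_umol * M_co2 / 1000.0      # mg CO2 m-2 h-1
print(f"CO2 flux ~ {flux_mg:.1f} mg m-2 h-1")
```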
Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Safari, Mehdi
Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid-scale (SGS) entropy flux and the entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, entitled entropy FDF (En-FDF), is developed and utilized in the form of the joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all the unclosed filtered moments. This methodology is applied to LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, sources of irreversibility are predicted and analyzed.
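For orientation, the four irreversible mechanisms listed above enter the local entropy generation rate, which in a common textbook form for an ideal-gas mixture with Fourier conduction and Fickian diffusion (not necessarily this work's exact notation) reads:

```latex
% Local entropy generation per unit volume; one term per mechanism named
% in the abstract. Standard ideal-gas, Fickian-diffusion form (assumed).
\Pi \;=\; \frac{k\,\lvert\nabla T\rvert^{2}}{T^{2}}
 \;+\; \frac{\mu\,\Phi}{T}
 \;+\; \sum_{\alpha}\frac{\rho\,D_{\alpha}\,R_{u}}{W_{\alpha}}\,
       \frac{\lvert\nabla Y_{\alpha}\rvert^{2}}{Y_{\alpha}}
 \;-\; \frac{1}{T}\sum_{\alpha}\mu_{\alpha}\,\dot{\omega}_{\alpha}
 \;\ge\; 0
```

Here Φ is the viscous dissipation function and Y_α, W_α, D_α, μ_α and ω̇_α are the mass fraction, molecular weight, diffusivity, chemical potential and reaction rate of species α; filtering this quantity in LES is what produces the unclosed terms the En-FDF is built to close.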
Surgical interventions for gastric cancer: a review of systematic reviews.
He, Weiling; Tu, Jian; Huo, Zijun; Li, Yuhuang; Peng, Jintao; Qiu, Zhenwen; Luo, Dandong; Ke, Zunfu; Chen, Xinlin
2015-01-01
To evaluate the methodological quality of, and the extent of concordance among, meta-analyses and/or systematic reviews on surgical interventions for gastric cancer (GC), a comprehensive search of PubMed, Medline, EMBASE, the Cochrane Library and the DARE database was conducted to identify reviews comparing different surgical interventions for GC prior to April 2014. After applying the inclusion criteria, available data were summarized and appraised with the Oxman and Guyatt scale. Fifty-six reviews were included. Forty-five reviews (80.4%) were well conducted, with scores on the adapted Oxman and Guyatt scale ≥ 14. The reviews differed in their criteria for avoiding bias and assessing the validity of the primary studies. Many primary studies displayed major methodological flaws relating to randomization, allocation concealment, and dropouts and withdrawals. According to the concordance assessment, laparoscopy-assisted gastrectomy (LAG) was superior to open gastrectomy, and laparoscopy-assisted distal gastrectomy was superior to open distal gastrectomy, in short-term outcomes. However, concordance regarding other surgical interventions, such as D1 vs. D2 lymphadenectomy and robotic gastrectomy vs. LAG, was absent. Systematic reviews on surgical interventions for GC displayed relatively high methodological quality, but improvement in the methodological quality and reporting of primary studies is necessary. The superiority of laparoscopic over open surgery was demonstrated, but concordance on other surgical interventions was rare, and more well-designed RCTs and systematic reviews are needed.
Calibration of a stochastic health evolution model using NHIS data
NASA Astrophysics Data System (ADS)
Gupta, Aparna; Li, Zhisheng
2011-10-01
This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.
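Of the two calibration routes mentioned, the nonlinear least squares one can be sketched generically: pick the model parameters minimizing the squared gap between model-implied and observed outcome frequencies. The model function and data below are placeholders, not the paper's HSI/RFS formulation (which is mixed-integer and risk-group specific):

```python
# Generic nonlinear least-squares calibration sketch (placeholder model).
import numpy as np
from scipy.optimize import least_squares

observed = np.array([0.62, 0.23, 0.10, 0.05])   # outcome frequencies (invented)

def model_probs(theta):
    # toy 4-outcome model: softmax of a 2-parameter severity score
    scores = np.array([0.0, -theta[0], -2.0 * theta[0], -theta[1]])
    p = np.exp(scores)
    return p / p.sum()

fit = least_squares(lambda th: model_probs(th) - observed, x0=[1.0, 2.0])
print("calibrated parameters:", fit.x)
```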
Treves-Kagan, Sarah; Naidoo, Evasen; Gilvydis, Jennifer M; Raphela, Elsie; Barnhart, Scott; Lippman, Sheri A
2017-09-01
Successful HIV prevention programming requires engaging communities in the planning process and responding to the social environmental factors that shape health and behaviour in a specific local context. We conducted two community-based situational analyses to inform a large, comprehensive HIV prevention programme in two rural districts of North West Province South Africa in 2012. The methodology includes: initial partnership building, goal setting and background research; 1 week of field work; in-field and subsequent data analysis; and community dissemination and programmatic incorporation of results. We describe the methodology and a case study of the approach in rural South Africa; assess if the methodology generated data with sufficient saturation, breadth and utility for programming purposes; and evaluate if this process successfully engaged the community. Between the two sites, 87 men and 105 women consented to in-depth interviews; 17 focus groups were conducted; and 13 health facilities and 7 NGOs were assessed. The methodology succeeded in quickly collecting high-quality data relevant to tailoring a comprehensive HIV programme and created a strong foundation for community engagement and integration with local health services. This methodology can be an accessible tool in guiding community engagement and tailoring future combination HIV prevention and care programmes.
Daniero, James J.; Hovis, Kristen L.; Sathe, Nila; Jacobson, Barbara; Penson, David F.; Feurer, Irene D.; McPheeters, Melissa L.
2017-01-01
Purpose: The purpose of this study was to perform a comprehensive systematic review of the literature on voice-related patient-reported outcome (PRO) measures in adults and to evaluate each instrument for the presence of important measurement properties. Method: MEDLINE, the Cumulative Index of Nursing and Allied Health Literature, and the Health and Psychosocial Instruments databases were searched using relevant vocabulary terms and key terms related to PRO measures and voice. Inclusion and exclusion criteria were developed in consultation with an expert panel. Three independent investigators assessed study methodology using criteria developed a priori. Measurement properties were examined and entered into evidence tables. Results: A total of 3,744 studies assessing voice-related constructs were identified. This list was narrowed to 32 PRO measures on the basis of predetermined inclusion and exclusion criteria. Questionnaire measurement properties varied widely. Important thematic deficiencies were apparent: (a) lack of patient involvement in the item development process, (b) lack of robust construct validity, and (c) lack of clear interpretability and scaling. Conclusions: PRO measures are a principal means of evaluating treatment effectiveness in voice-related conditions. Despite their prominence, available PRO measures have disparate methodological rigor. Care must be taken to understand the psychometric and measurement properties and the applicability of PRO measures before advocating for their use in clinical or research applications. PMID:28030869
NASA Astrophysics Data System (ADS)
Brasseur, Pierre
2015-04-01
The MyOcean projects, supported by the European Commission, were developed during the 2008-2015 period to build an operational service providing ocean physical state and ecosystem information to intermediate and downstream users in the areas of marine safety, marine resources, marine and coastal environment, and weather, climate and seasonal forecasting. The "core" information provided to users is obtained through the combination of satellite and in situ observations, eddy-resolving modelling of the global ocean and regional European seas, biochemistry, ecosystem and sea-ice modelling, and data assimilation for global- to basin-scale circulation. A comprehensive R&D plan was established in 2010 to ensure the collection and provision of information of the best possible quality for daily estimates of the ocean state (real time), its short-term evolution, and its history over the past (reanalyses). A service validation methodology was further developed to ensure proper scientific evaluation and routine monitoring of the accuracy of MyOcean products. In this presentation, we will give an overview of the main scientific advances achieved in MyOcean using the NEMO modelling platform, ensemble-based assimilation schemes, coupled circulation-ecosystem and sea-ice assimilative models, and probabilistic methodologies for ensemble validation. We will further highlight the key areas that will require additional innovation effort to support the evolution of the Marine Copernicus service.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, A.; Canepa, S.; Zerkak, O.
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and the associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Efforts have therefore been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications, involving either system T-H evaluations alone or interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into reference schemes to be applied for downstream predictive simulations. To illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
Learning Methodology in the Classroom to Encourage Participation
ERIC Educational Resources Information Center
Luna, Esther; Folgueiras, Pilar
2014-01-01
Service learning is a methodology that promotes the participation of citizens in their community. This article presents a brief conceptualization of citizen participation, characteristics of service learning methodology, and validation of a programme that promotes service-learning projects. This validation highlights the suitability of this…
GRRATS: A New Approach to Inland Altimetry Processing for Major World Rivers
NASA Astrophysics Data System (ADS)
Coss, S. P.
2016-12-01
Here we present work-in-progress results aimed at generating a new radar altimetry dataset, GRRATS (Global River Radar Altimetry Time Series), extracted over ocean-draining rivers wider than 900 m worldwide. GRRATS was developed as a component of the NASA MEaSUREs project (PI: Dennis Lettenmaier, UCLA) to generate pre-SWOT data products for decadal or longer global river elevation changes from multi-mission satellite radar altimetry data. The dataset at present includes 909 time series from 39 rivers. A new method of filtering VS (virtual station) height time series is presented, in which DEM-based heights are used to establish limits for the ice1-retracked Jason-2 and Envisat heights. While GRRATS follows in the footsteps of several predecessors, it contributes to one of the critical climate data records by generating validated and comprehensive hydrologic observations of river height. The current data product includes VSs in North and South America, Africa and Eurasia, with the most comprehensive set of Jason-2 and Envisat RA time series available for North America and Eurasia. We present a semi-automated procedure to process returns from river locations, identified with Landsat images and an updated water mask extent. Consistent methodologies for flagging ice cover are presented. DEM heights used in height filtering were retained and can be used as river height profiles. All non-validated VSs have been assigned a letter grade A-D to aid end users in data selection. Validated VSs are accompanied by a suite of fit statistics. Due to the inclusiveness of the dataset, not all VSs could undergo validation (415 of 909), but those that did demonstrate that confidence in the data product is warranted. Validation was accomplished using records from 45 in situ gauges on 12 rivers. Meta-analysis was performed to compare each gauge with each VS by relative height. Preliminary validation results are as follows: 89.3% of the data have positive Nash-Sutcliffe efficiency (NSE) values, and the median NSE value is 0.73. The median standard deviation of error (STDE) is 0.92 m. GRRATS will soon be publicly available in NetCDF format with CF-compliant metadata.
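The headline statistics above follow standard definitions: the Nash-Sutcliffe efficiency compares the squared altimetry-versus-gauge errors against the variance of the gauge record, and STDE is the standard deviation of those errors. A sketch with invented relative heights:

```python
# Sketch of the GRRATS validation metrics: Nash-Sutcliffe efficiency (NSE)
# and standard deviation of error (STDE). Heights below are invented.
import numpy as np

gauge = np.array([1.2, 2.8, 4.1, 3.0, 1.5, 0.9, 2.2])  # in situ heights (m)
vs = np.array([1.0, 3.1, 3.8, 3.3, 1.1, 1.2, 2.0])     # altimetry VS heights (m)

err = vs - gauge
nse = 1.0 - np.sum(err ** 2) / np.sum((gauge - gauge.mean()) ** 2)
stde = np.std(err)
print(f"NSE = {nse:.2f}, STDE = {stde:.2f} m")
```

NSE > 0 means the altimetry series beats the gauge mean as a predictor, which is the sense in which 89.3% of the validated stations carry useful signal.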
Barathi, M; Kumar, A Santhana Krishna; Rajesh, N
2014-05-01
In the present work, we propose for the first time a novel ultrasound-assisted methodology involving the impregnation of zirconium in a cellulose matrix. Fluoride from aqueous solution interacts with the cellulose hydroxyl groups and the cationic zirconium hydroxide. Ultrasonication provides a green and quick alternative to the conventional, time-intensive method of preparation. The effectiveness of this process was confirmed by comprehensive characterization of the zirconium-impregnated cellulose (ZrIC) adsorbent using Fourier transform infrared spectroscopy (FT-IR), energy dispersive X-ray spectrometry (EDX) and X-ray diffraction (XRD) studies. The study of various adsorption isotherm models, and of the kinetics and thermodynamics of the interaction, validated the method. Copyright © 2013 Elsevier B.V. All rights reserved.
Estimating stream discharge from a Himalayan Glacier using coupled satellite sensor data
NASA Astrophysics Data System (ADS)
Child, S. F.; Stearns, L. A.; van der Veen, C. J.; Haritashya, U. K.; Tarpanelli, A.
2015-12-01
The 4th IPCC report highlighted our limited understanding of Himalayan glacier behavior and contribution to the region's hydrology. Seasonal snow and glacier melt in the Himalayas are important sources of water, but estimates of the actual contribution of melted glacier ice to stream discharge differ greatly. A more comprehensive understanding of the contribution of glaciers to stream discharge is needed because glacier-fed streams affect the livelihoods of a large part of the world's population. Most of the streams in the Himalayas are unmonitored because in situ measurements are logistically difficult and costly. This necessitates the use of remote sensing platforms to obtain estimates of river discharge for validating hydrological models. In this study, we estimate stream discharge using cost-effective methods via repeat satellite imagery from the Landsat-8 and Sentinel-1A sensors. The methodology is based on previous studies, which show that ratio values from optical satellite bands correlate well with measured stream discharge. While similar, our methodology relies on significantly higher-resolution imagery (30 m) and utilizes bands in the blue and near-infrared spectrum, as opposed to previous studies using 250 m resolution imagery and spectral bands only in the near-infrared. Higher-resolution imagery is necessary for streams whose source is a glacier terminus, because the stream width is often only tens of meters. We validate our methodology using two rivers in the state of Kansas, where stream gauges are plentiful. We then apply our method to the Bhagirathi River in the north-central Himalayas, which is fed by the Gangotri Glacier and has a well-monitored stream gauge. The analysis will later be used to couple river discharge with glacier flow and mass balance through an integrated hydrologic model of the Bhagirathi Basin.
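The band-ratio idea reduces to a simple regression: fit gauge discharge against a reflectance ratio (in earlier MODIS work, typically a dry calibration pixel over an in-stream pixel), then invert the fit for ungauged scenes. The ratios and discharges below are invented:

```python
# Sketch of the band-ratio discharge approach: linear fit of gauge
# discharge to a reflectance ratio, then prediction for a new scene.
# All values are invented for illustration.
import numpy as np

ratio = np.array([1.10, 1.35, 1.62, 1.48, 1.21, 1.75])  # reflectance ratios
q_obs = np.array([210., 340., 505., 430., 265., 580.])  # gauge discharge (m3/s)

slope, intercept = np.polyfit(ratio, q_obs, 1)          # simple linear model
q_new = slope * 1.55 + intercept                        # ratio from a new scene
print(f"estimated discharge: {q_new:.0f} m3/s")
```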
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Charles R; Gobbato, Maurizio; Conte, Joel
2009-01-01
The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity of their critical structural components (e.g., wings and tail stabilizers) to both fatigue- and impact-induced damage during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and partial validation of the proposed methodology are then briefly discussed by analyzing debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with a solid rectangular cross-section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation of the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.
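The prediction step of such a framework can be caricatured as a Monte Carlo loop: sample the current debond size from the Bayesian-updated posterior, sample future loading, grow the damage with a fatigue law, and report the fraction of samples exceeding a critical size. Everything below (the Paris-type law, constants and distributions) is invented for illustration, not the paper's damage model:

```python
# Illustrative Monte Carlo sketch of probabilistic remaining-life
# prediction. Growth law, constants and distributions are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
a = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n)  # posterior debond size (mm)
C, m = 5e-5, 2.5                                        # toy growth-law constants
a_crit = 25.0                                           # critical debond size (mm)

for _ in range(2_000):                                  # future load cycles
    dk = np.clip(rng.normal(8.0, 1.0, size=n), 0.0, None)  # toy driving force
    a += C * dk ** m                                    # Paris-type increment

print(f"P(debond >= critical size) = {np.mean(a >= a_crit):.3f}")
```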
Methodology for Estimating Ton-Miles of Goods Movements for U.S. Freight Multimodal Network System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
2013-01-01
Ton-miles is a commonly used measure of freight transportation output. Estimation of ton-miles in the U.S. transportation system requires freight flow data at a disaggregated level (link flows, path flows, or origin-destination flows between small geographic areas). However, the sheer magnitude of the freight data system, together with industrial confidentiality concerns in Census surveys, limits the freight data made available to the public. Through the years, the Center for Transportation Analysis (CTA) of the Oak Ridge National Laboratory (ORNL) has been working on the development of comprehensive national and regional freight databases and network flow models. One of the main products of this effort is the Freight Analysis Framework (FAF), a public database released by ORNL. FAF provides the general public with a multidimensional matrix of freight flows (weight and dollar value) on the U.S. transportation system between states, major metropolitan areas, and remainder-of-state areas. Recently, the CTA research team developed a methodology to estimate ton-miles by mode of transportation between the 2007 FAF regions. This paper describes the data disaggregation methodology. The method relies on the estimation of disaggregation factors related to measures of production, attractiveness, and average shipment distances by mode of service. Production and attractiveness of counties are captured by total employment payroll. Likely mileages for shipments between counties are calculated using a geographic database, i.e., the CTA multimodal network system. Results of validation experiments demonstrate the validity of the method. Moreover, the 2007 FAF ton-miles estimates are consistent with the major freight data programs for rail and water movements.
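As a rough illustration of the disaggregation idea described in this abstract, the sketch below allocates one region-to-region flow to county pairs by payroll shares and multiplies by county-to-county mileages. All numbers are invented; in the actual method the factors come from payroll data and the mileages from the CTA multimodal network.

```python
import numpy as np

# Illustrative disaggregation of one FAF region-to-region flow (tons) to
# county pairs, weighting by employment payroll shares, then ton-miles.
flow_tons = 120_000.0                      # FAF flow between two regions
payroll_orig = np.array([0.5, 0.3, 0.2])   # origin-county payroll shares
payroll_dest = np.array([0.6, 0.4])        # destination-county payroll shares

# County-pair shares: outer product of origin and destination shares
shares = np.outer(payroll_orig, payroll_dest)

# Network mileage between each county pair (would come from the CTA network)
miles = np.array([[310.0, 420.0],
                  [280.0, 365.0],
                  [505.0, 590.0]])

ton_miles = (flow_tons * shares * miles).sum()
print(f"Estimated ton-miles: {ton_miles:,.0f}")
```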
Weigl, Martin; Wild, Heike
2017-09-15
To validate the International Classification of Functioning, Disability and Health (ICF) Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 (SF-36) questionnaire. Content validity was evaluated by calculating the prevalences of impairments in body functions and structures, limitations in activities and participation, and environmental factors that were either barriers or facilitators. Convergent construct validity was evaluated by correlating the ICF categories with the SF-36 Physical Component Score and Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure, and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the ICF Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation: Comprehensive ICF Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive ICF Core Set for Osteoarthritis in this study supports its application in European patients with osteoarthritis. The differences in results between this European validation study and a previous Singaporean validation study underscore the need to validate the ICF Core Sets in different regions of the world.
Friend, Margaret; Schmitt, Sara A.; Simpson, Adrianne M.
2017-01-01
Until recently, the challenges inherent in measuring comprehension have impeded our ability to predict the course of language acquisition. The present research reports on a longitudinal assessment of the convergent and predictive validity of the CDI: Words and Gestures (CDI: WG) and the Computerized Comprehension Task (CCT). The CDI: WG and the CCT evinced good convergent validity; however, the CCT better predicted subsequent parent reports of language production. Language sample data in the third year confirm this finding: the CCT accounted for 24% of the variance in unique word use. These studies provide evidence for the utility of a behavior-based approach to predicting the course of language acquisition into production. PMID:21928878
Picking ChIP-seq peak detectors for analyzing chromatin modification experiments
Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D.; Kluger, Yuval
2012-01-01
Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads, to identify enriched regions of any length. To objectively assess its performance relative to 14 other ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that optimal parameters in one dataset typically do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development. PMID:22307239
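To make the core idea concrete, here is a toy enriched-region caller in Python. It is not Qeseq, only a minimal sketch of thresholding read density against a Poisson background and joining neighboring enriched bins, on invented data.

```python
import numpy as np
from scipy.stats import poisson

# Toy enriched-region caller (not Qeseq): flag bins whose read counts are
# improbable under a Poisson background, then join neighboring flagged bins.
rng = np.random.default_rng(1)
counts = rng.poisson(5, size=1000)        # background read counts per bin
counts[400:430] += rng.poisson(15, 30)    # an "enriched" region

lam = np.median(counts)                   # crude background rate estimate
pvals = poisson.sf(counts, lam)
enriched = pvals < 1e-4

# Neighbor joining: merge flagged bins separated by small gaps (<= 2 bins)
regions, start, gap = [], None, 0
for i, flag in enumerate(enriched):
    if flag:
        start = i if start is None else start
        gap = 0
    elif start is not None:
        gap += 1
        if gap > 2:
            regions.append((start, i - gap))   # close at last flagged bin
            start, gap = None, 0
if start is not None:
    regions.append((start, len(enriched) - 1 - gap))
print("Called regions (bin ranges):", regions)
```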
Wu, Xin Yin; Lam, Victor C K; Yu, Yue Feng; Ho, Robin S T; Feng, Ye; Wong, Charlene H L; Yip, Benjamin H K; Tsoi, Kelvin K F; Wong, Samuel Y S; Chung, Vincent C H
2016-11-01
Well-conducted meta-analyses (MAs) are considered one of the best sources of clinical evidence for treatment decisions. MAs with methodological flaws may introduce bias and mislead evidence users. The aim of this study is to investigate the characteristics and methodological quality of MAs on diabetes mellitus (DM) treatments. Systematic review. The Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects were searched for relevant MAs. The Assessing Methodological Quality of Systematic Reviews (AMSTAR) tool was used to evaluate the methodological quality of included MAs. Logistic regression analysis was used to identify associations between MA characteristics and AMSTAR results. A total of 252 MAs including 4999 primary studies and 13,577,025 patients were included. Over half of the MAs (65.1%) only included type 2 DM patients, and 160 MAs (63.5%) focused on pharmacological treatments. About 89.7% of MAs performed a comprehensive literature search and 89.3% provided the characteristics of included studies. Included MAs generally performed poorly on the remaining AMSTAR items, especially in assessing publication bias (39.3%), providing lists of studies (19.0%), and declaring sources of support comprehensively (7.5%). Only 62.7% of MAs mentioned harms of interventions. MAs with corresponding authors from Asia performed less well in providing an MA protocol than those from Europe. The methodological quality of MAs on DM treatments was unsatisfactory. There is considerable room for improvement, especially in assessing publication bias, providing lists of studies, and declaring sources of support comprehensively. Also, there is an urgent need for MA authors to report treatment harms comprehensively. © 2016 European Society of Endocrinology.
Harris, Joshua D; Erickson, Brandon J; Cvetanovich, Gregory L; Abrams, Geoffrey D; McCormick, Frank M; Gupta, Anil K; Verma, Nikhil N; Bach, Bernard R; Cole, Brian J
2014-02-01
Condition-specific questionnaires are important components in the evaluation of outcomes of surgical interventions. No condition-specific study methodological quality questionnaire exists for evaluating outcomes of articular cartilage surgery in the knee. To develop a reliable and valid knee articular cartilage-specific study methodological quality questionnaire. Cross-sectional study. A stepwise, a priori-designed framework was created for the development of a novel questionnaire. Items relevant to the topic were identified and extracted from a recent systematic review of 194 investigations of knee articular cartilage surgery. In addition, relevant items from existing generic study methodological quality questionnaires were identified. Items for a preliminary questionnaire were generated. Redundant and irrelevant items were eliminated, and acceptable items were modified. The instrument was pretested and items were weighted. The instrument, the MARK score (Methodological quality of ARticular cartilage studies of the Knee), was tested for validity (criterion validity) and reliability (inter- and intraobserver). A 19-item, 3-domain MARK score was developed. The 100-point scale score demonstrated face validity (focus group of 8 orthopaedic surgeons) and criterion validity (strong correlation to the Cochrane Quality Assessment score and the Modified Coleman Methodology Score). Interobserver reliability for the overall score was good (intraclass correlation coefficient [ICC], 0.842), and for all individual items of the MARK score, acceptable to perfect (ICC, 0.70-1.000). Intraobserver reliability assessed over a 3-week interval was strong for 2 reviewers (ICC ≥0.90). The MARK score is a valid and reliable knee articular cartilage condition-specific study methodological quality instrument. This condition-specific questionnaire may be used to evaluate the quality of studies reporting outcomes of articular cartilage surgery in the knee.
ERIC Educational Resources Information Center
Cannon, Joanna E.; Hubley, Anita M.
2014-01-01
Content validation is a crucial, but often neglected, component of good test development. In the present study, content validity evidence was collected to determine the degree to which elements (e.g., grammatical structures, items, picture responses, administration, and scoring instructions) of the Comprehension of Written Grammar (CWG) test are…
Dhakar, Rajkumar; Sarath Chandran, M A; Nagar, Shivani; Visha Kumari, V
2017-11-23
A new methodology for crop-growth stage-specific assessment of agricultural drought risk under a variable sowing window is proposed for the soybean crop. It encompasses three drought indices: the Crop-Specific Drought Index (CSDI), the Vegetation Condition Index (VCI), and the Standardized Precipitation Evapotranspiration Index (SPEI). Its crop-growth stage-specific nature and spatial, multi-scalar coverage provide a comprehensive assessment of agricultural drought risk. This study was conducted in 10 major soybean-growing districts of the Madhya Pradesh state of India. These areas contribute about 60% of the country's total soybean production. The phenophase most vulnerable to agricultural drought was identified (germination and flowering in our case) for each district across four sowing windows. The agricultural drought risk was quantified at various severity levels (moderate, severe, and very severe) for each growth stage and sowing window. Validation of the proposed methodology yielded a high correlation coefficient between the percent probability of agricultural drought risk and yield risk (r = 0.92). Assessment by proximity matrix yielded a similar statistic. The proposed methodology is expected to support better mitigation-oriented management and improved crop contingency plans for planners and decision makers.
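Of the three indices, the VCI has the simplest closed form: VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min), with the min/max taken over the multi-year record for the same compositing period. The sketch below uses that standard definition; the NDVI values and the drought cut-off are illustrative assumptions, not data from this study.

```python
import numpy as np

# Vegetation Condition Index for one district, per compositing period.
ndvi_current = np.array([0.32, 0.41, 0.38, 0.29])   # this season (invented)
ndvi_min = np.array([0.20, 0.25, 0.24, 0.22])       # historical minima
ndvi_max = np.array([0.60, 0.68, 0.66, 0.58])       # historical maxima

vci = 100.0 * (ndvi_current - ndvi_min) / (ndvi_max - ndvi_min)
print("VCI per period:", vci.round(1))
print("Drought flag (VCI < 35):", vci < 35)   # a commonly used cut-off
```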
Methods for heat transfer and temperature field analysis of the insulated diesel, phase 3
NASA Technical Reports Server (NTRS)
Morel, Thomas; Wahiduzzaman, Syed; Fort, Edward F.; Keribar, Rifat; Blumberg, Paul N.
1988-01-01
Work during Phase 3 of a program aimed at developing a comprehensive heat transfer and thermal analysis methodology for the design analysis of insulated diesel engines is described. The overall program addresses all the key heat transfer issues: (1) spatially and time-resolved convective and radiative in-cylinder heat transfer, (2) steady-state conduction in the overall structure, and (3) cyclical and load/speed temperature transients in the engine structure. These are all accounted for in a coupled way together with cycle thermodynamics. This methodology was developed during Phases 1 and 2. During Phase 3, an experimental program was carried out to obtain data on heat transfer under cooled and insulated engine conditions and also to generate a database to validate the developed methodology. A single-cylinder Cummins diesel engine was instrumented for instantaneous total heat flux and heat radiation measurements. Data were acquired over a wide range of operating conditions in two engine configurations. One was a cooled baseline. The other included ceramic-coated components (piston, head, and valves) with 0.050 inches of plasma-sprayed zirconia. The experiments showed that the insulated engine has a smaller heat flux than the cooled one. The model predictions were found to be in very good agreement with the data.
Measures of outdoor play and independent mobility in children and youth: A methodological review.
Bates, Bree; Stone, Michelle R
2015-09-01
Declines in children's outdoor play have been documented globally, which are partly due to heightened restrictions around children's independent mobility. Literature on outdoor play and children's independent mobility is increasing, yet no paper has summarized the various methodological approaches used. A methodological review could highlight most commonly used measures and comprehensive research designs that could result in more standardized methodological approaches. Methodological review. A standardized protocol guided a methodological review of published research on measures of outdoor play and children's independent mobility in children and youth (0-18 years). Online searches of 8 electronic databases were conducted and studies included if they contained a subjective/objective measure of outdoor play or children's independent mobility. References of included articles were scanned to identify additional articles. Twenty-four studies were included on outdoor play, and twenty-three on children's independent mobility. Study designs were diverse. Common objective measures included accelerometry, global positioning systems and direct observation; questionnaires, surveys and interviews were common subjective measures. Focus groups, activity logs, monitoring sheets, travel/activity diaries, behavioral maps and guided tours were also utilized. Questionnaires were used most frequently, yet few studies used the same questionnaire. Five studies employed comprehensive, mixed-methods designs. Outdoor play and children's independent mobility have been measured using a wide variety of techniques, with only a few studies using similar methodologies. A standardized methodological approach does not exist. Future researchers should consider including both objective measures (accelerometry and global positioning systems) and subjective measures (questionnaires, activity logs, interviews), as more comprehensive designs will enhance understanding of each multidimensional construct. Creating a standardized methodological approach would improve study comparisons. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela
2014-11-01
The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework for structuring the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the ICF and for guiding the treatment and rehabilitation process accordingly. It is now being validated from the perspectives of the users for whom it was developed in the first place. To validate the content of the Comprehensive ICF Core Set for MS from the perspective of speech and language therapists (SLTs) involved in the treatment of persons with MS (PwMS). In a three-round e-mail-based Delphi study, 34 SLTs were asked about PwMS' problems, resources, and aspects of the environment treated by SLTs. Responses were linked to ICF categories. Identified ICF categories were compared with those included in the Comprehensive ICF Core Set for MS to examine its content validity. The 34 SLTs named 524 problems, resources, and aspects of the environment. Statements were linked to 129 ICF categories (60 Body-functions categories, two Body-structures categories, 42 Activities-&-participation categories, and 25 Environmental-factors categories). SLTs confirmed 46 categories in the Comprehensive ICF Core Set. Twenty-one ICF categories were identified as not-yet-included categories. This study contributes to the content validity of the Comprehensive ICF Core Set for MS from the perspective of SLTs. Study participants agreed on a few not-yet-included categories that should be further discussed for inclusion in a revised version of the Comprehensive ICF Core Set to strengthen SLTs' perspective in PwMS' neurorehabilitation. © 2014 Royal College of Speech and Language Therapists.
A comprehensive methodology for the multidimensional and synchronic data collecting in soundscape.
Kogan, Pablo; Turra, Bruno; Arenas, Jorge P; Hinalaf, María
2017-02-15
The soundscape paradigm comprises complex living systems in which individuals interact moment by moment with one another and with the physical environment. Real environments provide promising conditions for revealing deep soundscape behavior, including the multiple components involved and their interrelations as a whole. However, measuring and analyzing the numerous simultaneous variables of a soundscape remains a challenge that is not completely understood. This work proposes and applies a comprehensive methodology for multidimensional and synchronic data collection in soundscape research. The soundscape variables were organized into three main entities: the experienced environment, the acoustic environment, and the extra-acoustic environment, containing, in turn, subgroups of variables called components. The variables contained in these components were acquired through synchronic field techniques that include surveys, acoustic measurements, audio recordings, photography, and video. The proposed methodology was tested, optimized, and applied in diverse open environments, including squares, parks, fountains, university campuses, streets, and pedestrian areas. The systematization of this comprehensive methodology provides a framework for soundscape research, support for urban and environmental management, and a preliminary procedure for standardization in soundscape data collection. Copyright © 2016 Elsevier B.V. All rights reserved.
Diagnosing malignant melanoma in ambulatory care: a systematic review of clinical prediction rules
Harrington, Emma; Clyne, Barbara; Wesseling, Nieneke; Sandhu, Harkiran; Armstrong, Laura; Bennett, Holly; Fahey, Tom
2017-01-01
Objectives Malignant melanoma has high morbidity and mortality rates. Early diagnosis improves prognosis. Clinical prediction rules (CPRs) can be used to stratify patients with symptoms of suspected malignant melanoma to improve early diagnosis. We conducted a systematic review of CPRs for melanoma diagnosis in ambulatory care. Design Systematic review. Data sources A comprehensive search of PubMed, EMBASE, PROSPERO, CINAHL, the Cochrane Library and SCOPUS was conducted in May 2015, using combinations of keywords and medical subject headings (MeSH) terms. Study selection and data extraction Studies deriving and validating, validating or assessing the impact of a CPR for predicting melanoma diagnosis in ambulatory care were included. Data extraction and methodological quality assessment were guided by the CHARMS checklist. Results From 16 334 studies reviewed, 51 were included, validating the performance of 24 unique CPRs. Three impact analysis studies were identified. Five studies were set in primary care. The most commonly evaluated CPRs were the ABCD (Asymmetry, Border irregularity, more than one or uneven distribution of Colour, or a large (greater than 6 mm) Diameter) dermoscopy rule (at a cut-point of >4.75; 8 studies; pooled sensitivity 0.85, 95% CI 0.73 to 0.93, specificity 0.72, 95% CI 0.65 to 0.78) and the 7-point dermoscopy checklist (at a cut-point of ≥1 recommending ruling in melanoma; 11 studies; pooled sensitivity 0.77, 95% CI 0.61 to 0.88, specificity 0.80, 95% CI 0.59 to 0.92). The methodological quality of studies varied. Conclusions At their recommended cut-points, the ABCD dermoscopy rule is more useful for ruling out melanoma than the 7-point dermoscopy checklist. A focus on impact analysis will help translate melanoma risk prediction rules into useful tools for clinical practice. PMID:28264830
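For readers unfamiliar with the ABCD dermoscopy rule, the commonly published Stolz scoring scheme looks like the sketch below. The weights and cut-points are as widely reported in the dermoscopy literature (the Stolz version scores D as dermoscopic structures, while some descriptions, including the one above, use Diameter); the example values are invented, and this is not code from the review.

```python
def total_dermoscopy_score(asymmetry, border, colors, structures):
    """Stolz ABCD rule of dermatoscopy, with commonly published weights:
    asymmetry 0-2, border 0-8, number of colours 1-6,
    dermoscopic structures 1-5 (some rule variants use Diameter for D)."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

tds = total_dermoscopy_score(asymmetry=2, border=4, colors=3, structures=4)
# Cut-points used in the literature: <= 4.75 benign, 4.76-5.45 suspicious,
# > 5.45 highly suspicious for melanoma
print(tds, "-> rule in melanoma" if tds > 4.75 else "-> below cut-point")
```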
Guidelines for the Design and Conduct of Clinical Studies in Knee Articular Cartilage Repair
Mithoefer, Kai; Saris, Daniel B.F.; Farr, Jack; Kon, Elizaveta; Zaslav, Kenneth; Cole, Brian J.; Ranstam, Jonas; Yao, Jian; Shive, Matthew; Levine, David; Dalemans, Wilfried; Brittberg, Mats
2011-01-01
Objective: To summarize current clinical research practice and develop methodological standards for objective scientific evaluation of knee cartilage repair procedures and products. Design: A comprehensive literature review was performed of high-level original studies providing information relevant for the design of clinical studies on articular cartilage repair in the knee. Analysis of cartilage repair publications and synopses of ongoing trials were used to identify important criteria for the design, reporting, and interpretation of studies in this field. Results: Current literature reflects the methodological limitations of the scientific evidence available for articular cartilage repair. However, clinical trial databases of ongoing trials document a trend suggesting improved study designs and clinical evaluation methodology. Based on the current scientific information and standards of clinical care, detailed methodological recommendations were developed for the statistical study design, patient recruitment, control group considerations, study endpoint definition, documentation of results, use of validated patient-reported outcome instruments, and inclusion and exclusion criteria for the design and conduct of scientifically sound cartilage repair study protocols. A consensus statement among the International Cartilage Repair Society (ICRS) and contributing authors experienced in clinical trial design and implementation was achieved. Conclusions: High-quality clinical research methodology is critical for the optimal evaluation of current and new cartilage repair technologies. In addition to generally applicable principles for orthopedic study design, specific criteria and considerations apply to cartilage repair studies. Systematic application of these criteria and considerations can facilitate study designs that are scientifically rigorous, ethical, practical, and appropriate for the question(s) being addressed in any given cartilage repair research project. PMID:26069574
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
School-Based Methylphenidate Placebo Protocols: Methodological and Practical Issues.
ERIC Educational Resources Information Center
Hyman, Irwin A.; Wojtowicz, Alexandra; Lee, Kee Duk; Haffner, Mary Elizabeth; Fiorello, Catherine A.; And Others
1998-01-01
Focuses on methodological issues involved in choosing instruments to monitor behavior, once a comprehensive evaluation has suggested trials on Ritalin. Case examples illustrate problems of teacher compliance in filling out measures, supplying adequate placebos, and obtaining physical cooperation. Emerging school-based methodologies are discussed…
Grounded Theory Methodology: Positivism, Hermeneutics, and Pragmatism
ERIC Educational Resources Information Center
Age, Lars-Johan
2011-01-01
Glaserian grounded theory methodology, which has been widely adopted as a scientific methodology in recent decades, has been variously characterised as "hermeneutic" and "positivist." This commentary therefore takes a different approach to characterising grounded theory by undertaking a comprehensive analysis of: (a) the philosophical paradigms of…
A methodology for collecting valid software engineering data
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Weiss, David M.
1983-01-01
An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure the accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with the people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.
The need for a comprehensive expert system development methodology
NASA Technical Reports Server (NTRS)
Baumert, John; Critchfield, Anna; Leavitt, Karen
1988-01-01
In a traditional software development environment, the introduction of standardized approaches has led to higher-quality, maintainable products on the technical side and greater visibility into the status of the effort on the management side. This study examined expert system development to determine whether it differs enough from traditional systems to warrant a reevaluation of current software development methodologies. Its purpose was to identify areas of similarity with traditional software development and areas requiring tailoring to the unique needs of expert systems. A second purpose was to determine whether existing expert system development methodologies meet the needs of expert system development, management, and maintenance personnel. The study consisted of a literature search and personal interviews. It was determined that existing methodologies and approaches to developing expert systems are neither comprehensive nor easily applied, especially to cradle-to-grave system development. As a result, requirements were derived for an expert system development methodology, and an initial annotated outline was produced for such a methodology.
Methodological guidelines to investigate altered states of consciousness and anomalous experiences.
Moreira-Almeida, Alexander; Lotufo-Neto, Francisco
2017-06-01
Anomalous experiences (AE) (uncommon experiences, or ones believed to deviate from the usually accepted explanations of reality: hallucinations, synesthesia, experiences interpreted as telepathic…) and altered states of consciousness (ASC) have been described in all societies of all ages. Even so, scientists have long neglected studies on this theme. To study AE and ASC, it is not necessary to share the beliefs being explored; they can be investigated as subjective experiences and correlated with other data, like any other human experience. This article presents some methodological guidelines for investigating these experiences, among them: avoid dogmatic prejudice and 'pathologizing' the unusual; recognize the value of a theory and a comprehensive literature review; use a variety of criteria for pathology and normality; investigate clinical and non-clinical populations; develop new appropriate research instruments; choose carefully the wording used to describe AE; distinguish the lived experience from its interpretations; take into account the role of culture; evaluate the validity and reliability of reports; and, last but not least, exercise creativity and diversity in choosing methods.
From "Sooo excited!!!" to "So proud": using language to study development.
Kern, Margaret L; Eichstaedt, Johannes C; Schwartz, H Andrew; Park, Gregory; Ungar, Lyle H; Stillwell, David J; Kosinski, Michal; Dziurzynski, Lukasz; Seligman, Martin E P
2014-01-01
We introduce a new method, differential language analysis (DLA), for studying human development, in which computational linguistics is used to analyze the big data available through online social media in light of psychological theory. Our open-vocabulary DLA approach finds words, phrases, and topics that distinguish groups of people based on 1 or more characteristics. Using a data set of over 70,000 Facebook users, we identify how word and topic use vary as a function of age and compile cohort-specific words and phrases into visual summaries that are face valid and intuitively meaningful. We demonstrate how this methodology can be used to test developmental hypotheses, using the aging positivity effect (Carstensen & Mikels, 2005) as an example. While in this study we focused primarily on common trends across age-related cohorts, the same methodology can be used to explore heterogeneity within developmental stages or to explore other characteristics that differentiate groups of people. Our comprehensive list of words and topics is available on our web site for deeper exploration by the research community. PsycINFO Database Record (c) 2014 APA, all rights reserved.
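A minimal sketch of the open-vocabulary idea on synthetic data: correlate each word's relative frequency with age across users. The real DLA pipeline adds tokenization, topic modeling, and multiple-comparison control on the actual Facebook corpus; everything below is invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic users: an age and a relative-frequency vector over a tiny vocabulary
rng = np.random.default_rng(2)
ages = rng.integers(15, 60, size=500)
vocab = ["excited", "proud", "homework", "mortgage"]

# Fabricate usage rates that drift with age, purely for illustration
base = rng.random((500, len(vocab))) * 0.01
base[:, 0] -= 0.0001 * ages       # "excited" declines with age
base[:, 3] += 0.0002 * ages       # "mortgage" rises with age
freqs = np.clip(base, 0, None)

# Core DLA-style step: per-word correlation with the target characteristic
for j, word in enumerate(vocab):
    r, p = pearsonr(freqs[:, j], ages)
    print(f"{word:>10}: r = {r:+.2f}, p = {p:.1e}")
```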
Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne
2018-01-01
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved. PMID:29565679
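A minimal parametric-bootstrap sketch in the spirit of this approach, using an assumed squared-mean-difference statistic rather than the authors' actual test statistic. Each group is resampled from a normal with its own estimated covariance, so no equal-covariance assumption is made; all data are synthetic.

```python
import numpy as np

# Two groups with different covariance structures (synthetic)
rng = np.random.default_rng(3)
x = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=40)
y = rng.multivariate_normal([0.5, 0], [[2, -0.2], [-0.2, 0.5]], size=25)

def stat(a, b):
    d = a.mean(0) - b.mean(0)
    return float(d @ d)            # squared Euclidean mean difference

obs = stat(x, y)
pooled_mean = np.vstack([x, y]).mean(0)     # null hypothesis: equal means
boot = []
for _ in range(2000):
    # Resample each group under the null, keeping its own covariance
    xb = rng.multivariate_normal(pooled_mean, np.cov(x.T), size=len(x))
    yb = rng.multivariate_normal(pooled_mean, np.cov(y.T), size=len(y))
    boot.append(stat(xb, yb))
print("bootstrap p-value:", np.mean(np.array(boot) >= obs))
```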
ERIC Educational Resources Information Center
Mihura, Joni L.; Meyer, Gregory J.; Dumitrascu, Nicolae; Bombel, George
2013-01-01
We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = 0.27 (k = 770) as compared to r = 0.08 (k = 386)…
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. However, existing CFD validation approaches do not quantify the error in simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
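A standard way to separate model error from numerical and experimental error is the ASME V&V 20-style comparison-error decomposition sketched below. This is a common approach under stated assumptions, not necessarily the exact procedure of this paper, and all values are invented.

```python
import numpy as np

# Comparison error E = S - D; validation uncertainty combines numerical,
# input, and experimental uncertainties (ASME V&V 20-style decomposition).
S = np.array([0.82, 0.75, 0.64])   # CFD velocities along validation line [m/s]
D = np.array([0.78, 0.73, 0.60])   # PIV velocities at the same points [m/s]

u_num = 0.01     # numerical (discretization/iterative) uncertainty [m/s]
u_input = 0.02   # input/parameter uncertainty [m/s]
u_D = 0.015      # experimental (PIV) uncertainty [m/s]

E = S - D
u_val = np.sqrt(u_num**2 + u_input**2 + u_D**2)
print("comparison error E:", E)
print("model error lies within E +/- u_val, u_val =", round(float(u_val), 4))
print("mean |E| as % of data:", round(float((100 * np.abs(E) / D).mean()), 2), "%")
```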
ERIC Educational Resources Information Center
Ness, Molly K.
2009-01-01
The purpose of this mixed methodology study was to identify the frequency of reading comprehension instruction in middle and high school social studies and science classrooms. An additional purpose was to explore teachers' perceptions of and beliefs about the need for reading comprehension instruction. In 2,400 minutes of direct classroom…
Wu, Xin Yin; Du, Xin Jian; Ho, Robin S T; Lee, Clarence C Y; Yip, Benjamin H K; Wong, Martin C S; Wong, Samuel Y S; Chung, Vincent C H
2017-02-01
Methodological quality of meta-analyses on hypertension treatments can affect treatment decision-making. The authors conducted a cross-sectional study to investigate the methodological quality of meta-analyses on hypertension treatments. One hundred and fifty-eight meta-analyses were identified. Overall, methodological quality was unsatisfactory in the following aspects: comprehensive reporting of financial support (1.9%), provision of included and excluded lists of studies (22.8%), inclusion of grey literature (27.2%), and inclusion of protocols (32.9%). The 126 non-Cochrane meta-analyses had poor performance on almost all the methodological items. Non-Cochrane meta-analyses focused on nonpharmacologic treatments were more likely to consider scientific quality of included studies when making conclusions. The 32 Cochrane meta-analyses generally had good methodological quality except for comprehensive reporting of the sources of support. These results highlight the need for cautious interpretation of these meta-analyses, especially among physicians and policy makers when guidelines are formulated. Future meta-analyses should pay attention to improving these methodological aspects. ©2016 Wiley Periodicals, Inc.
Feeling Abstinent? Feeling Comprehensive? Touching the Affects of Sexuality Curricula
ERIC Educational Resources Information Center
Lesko, Nancy
2010-01-01
This interpretive study draws on interdisciplinary scholarship on affect and knowledge to ask: toward what feelings do abstinence-only and comprehensive sexuality education curricula direct us? A methodology that is attuned to double exposures is discussed, and one abstinence-only sexuality education curriculum and one comprehensive sexuality…
Training Comprehensiveness: Construct Development and Relation with Role Behaviour
ERIC Educational Resources Information Center
Srivastava, Anugamini Priya; Dhar, Rajib Lochan
2015-01-01
Purpose: This study aims to develop a scale for the perception of training comprehensiveness and attempts to examine the influence of perception of training comprehensiveness on role behaviour, with teachers' efficacy as a mediator and job autonomy as a moderator. Design/methodology/approach: Through the steps of generation, refinement, purification…
Teaching Listening Comprehension: Bottom-Up Approach
ERIC Educational Resources Information Center
Khuziakhmetov, Anvar N.; Porchesku, Galina V.
2016-01-01
Improving listening comprehension skills is one of the urgent contemporary educational problems in the field of second language acquisition. Understanding how L2 listening comprehension works can have a serious influence on language pedagogy. The aim of the paper is to discuss the practical and methodological value of the notion of the perception…
Systematic Review Methodology in Higher Education
ERIC Educational Resources Information Center
Bearman, Margaret; Smith, Calvin D.; Carbone, Angela; Slade, Susan; Baik, Chi; Hughes-Warrington, Marnie; Neumann, David L.
2012-01-01
Systematic review methodology can be distinguished from narrative reviews of the literature through its emphasis on transparent, structured and comprehensive approaches to searching the literature and its requirement for formal synthesis of research findings. There appears to be relatively little use of the systematic review methodology within the…
Examining the validity of self-reports on scales measuring students' strategic processing.
Samuelstuen, Marit S; Bråten, Ivar
2007-06-01
Self-report inventories designed to measure strategic processing at a global level have been widely used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.
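A sketch of the two validity checks described above, on synthetic scores. The effect sizes are invented and chosen only to mimic the reported pattern (weak global-to-task-specific overlap, stronger task-specific-to-comprehension links).

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic standardized scores for 300 students
rng = np.random.default_rng(4)
task_specific = rng.normal(0, 1, 300)
global_score = 0.25 * task_specific + rng.normal(0, 1, 300)   # weak overlap
comprehension = 0.5 * task_specific + rng.normal(0, 1, 300)

# Criterion-related validity: global score vs task-specific criterion
r_crit, _ = pearsonr(global_score, task_specific)
# Construct validity: each strategy score vs text comprehension
r_glob, _ = pearsonr(global_score, comprehension)
r_task, _ = pearsonr(task_specific, comprehension)

print(f"criterion-related (global vs task-specific): r = {r_crit:.2f}")
print(f"construct (global vs comprehension):         r = {r_glob:.2f}")
print(f"construct (task-specific vs comprehension):  r = {r_task:.2f}")
```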
Measuring medicine prices in Peru: validation of key aspects of WHO/HAI survey methodology.
Madden, Jeanne M; Meza, Edson; Ewen, Margaret; Laing, Richard O; Stephens, Peter; Ross-Degnan, Dennis
2010-04-01
To assess the possibility of bias due to the limited target list and geographic sampling of the World Health Organization (WHO)/Health Action International (HAI) Medicine Prices and Availability survey used in more than 70 rapid sample surveys since 2001. A survey was conducted in Peru in 2005 using an expanded sample of medicine outlets, including remote areas. Comprehensive data were gathered on medicines in three therapeutic classes to assess the adequacy of WHO/HAI's target medicines list and the focus on only two product versions. WHO/HAI median retail prices were compared with average wholesale prices from global pharmaceutical sales data supplier IMS Health. No significant differences were found in overall availability or prices of target list medicines by retail location. The comprehensive survey of angiotensin-converting enzyme inhibitor, anti-diabetic, and anti-ulcer products revealed that some treatments not on the target list were costlier for patients and more likely to be unavailable, particularly in remote areas. WHO/HAI retail prices and IMS wholesale prices were strongly correlated for higher priced products, and weakly correlated for lower priced products (which had higher estimated retailer markups). The WHO/HAI survey approach strikes an appropriate balance between modest research costs and optimal information for policy. Focusing on commonly used medicines yields sufficient and valid results. Surveyors elsewhere should consider the limits of the survey data as well as any local circumstances, such as scarcity, that may call for extra field efforts.
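The WHO/HAI methodology summarizes prices as median price ratios (MPR): the median patient price per unit divided by an international reference unit price. The sketch below follows that published definition; the prices, exchange rate, and the flagging threshold mentioned in the comment are illustrative.

```python
import statistics

# WHO/HAI-style median price ratio (MPR) for one medicine (invented numbers)
unit_prices = [0.45, 0.52, 0.40, 0.61, 0.48]   # price per tablet, local currency
exchange_rate = 3.3                             # local currency units per USD
reference_usd = 0.11                            # international reference price/tablet

mpr = (statistics.median(unit_prices) / exchange_rate) / reference_usd
print(f"MPR = {mpr:.2f}  (ratios above roughly 2.5 are often flagged as high)")
```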
ERIC Educational Resources Information Center
Afzal, Waseem
2017-01-01
Introduction: The purpose of this paper is to propose a methodology to conceptualize, operationalize, and empirically validate the concept of information need. Method: The proposed methodology makes use of both qualitative and quantitative perspectives, and includes a broad array of approaches such as literature reviews, expert opinions, focus…
ERIC Educational Resources Information Center
Cannon, Joanna E.; Hubley, Anita M.; Millhoff, Courtney; Mazlouman, Shahla
2016-01-01
The aim of the current study was to gather validation evidence for the "Comprehension of Written Grammar" (CWG; Easterbrooks, 2010) receptive test of 26 grammatical structures of English print for use with children who are deaf and hard of hearing (DHH). Reliability and validity data were collected for 98 participants (49 DHH and 49…
External Validity in the Study of Human Development: Theoretical and Methodological Issues
ERIC Educational Resources Information Center
Hultsch, David F.; Hickey, Tom
1978-01-01
An examination of the concept of external validity from two theoretical perspectives: a traditional mechanistic approach and a dialectical organismic approach. Examines the theoretical and methodological implications of these perspectives. (BD)
NASA Astrophysics Data System (ADS)
Ribeiro, Eduardo Afonso; Lopes, Eduardo Márcio de Oliveira; Bavastri, Carlos Alberto
2017-12-01
Viscoelastic materials have played an important role in passive vibration control. Nevertheless, the use of such materials in the supports of rotating machines, with the aim of controlling vibration, is more recent, mainly when these supports present additional complexities such as multiple degrees of freedom and require accurate models to predict the dynamic behavior of viscoelastic materials working over broad bands of frequency and temperature. In previous work, the authors proposed a methodology for the optimal design of viscoelastic supports (VES) for vibration suppression in rotordynamics, which improves dynamic prediction accuracy and calculation speed and allows modeling VES as complex structures. However, a comprehensive numerical study of the dynamics of rotor-VES systems, regarding the types and combinations of translational and rotational degrees of freedom (DOFs), accompanied by the corresponding experimental validation, is still lacking. This paper presents such a study, considering different types and combinations of DOFs, the number of additional masses/inertias, and the kind and association of the applied viscoelastic materials (VEMs). The results, covering unbalance frequency response, transmissibility, and displacement under static loads, support (1) treating VES as complex structures, which improves the efficacy of passive vibration control, and (2) identifying the best configuration of DOFs and of VEM choice and association for practical applications involving passive vibration control and load resistance. The specific outcomes of the experimental validation attest to the accuracy of the proposed methodology.
Biomarker-Guided Adaptive Trial Designs in Phase II and Phase III: A Methodological Review
Antoniou, Miranta; Jorgensen, Andrea L; Kolamunnage-Dona, Ruwanthi
2016-01-01
Background Personalized medicine is a growing area of research which aims to tailor the treatment given to a patient according to one or more personal characteristics. These characteristics can be demographic such as age or gender, or biological such as a genetic or other biomarker. Prior to utilizing a patient’s biomarker information in clinical practice, robust testing in terms of analytical validity, clinical validity and clinical utility is necessary. A number of clinical trial designs have been proposed for testing a biomarker’s clinical utility, including Phase II and Phase III clinical trials which aim to test the effectiveness of a biomarker-guided approach to treatment; these designs can be broadly classified into adaptive and non-adaptive. While adaptive designs allow planned modifications based on accumulating information during a trial, non-adaptive designs are typically simpler but less flexible. Methods and Findings We have undertaken a comprehensive review of biomarker-guided adaptive trial designs proposed in the past decade. We have identified eight distinct biomarker-guided adaptive designs and nine variations from 107 studies. Substantial variability has been observed in terms of how trial designs are described and particularly in the terminology used by different authors. We have graphically displayed the current biomarker-guided adaptive trial designs and summarised the characteristics of each design. Conclusions Our in-depth overview provides future researchers with clarity in definition, methodology and terminology for biomarker-guided adaptive trial designs. PMID:26910238
Simões, Joana; Amado, Francisco M; Vitorino, Rui; Helguero, Luisa A
2015-01-01
The nature of the protein complexes that regulate ERα subcellular localization and activity is still an open question in breast cancer biology. Identification of such complexes will help in understanding the development of endocrine resistance in ER+ breast cancer. Mass spectrometry (MS) has allowed comprehensive analysis of the ERα interactome. We have compared six published works analyzing the ERα interactome of MCF-7 and HeLa cells in order to identify a shared or different pathway-related fingerprint. Overall, 806 ERα-interacting proteins were identified. The cellular processes were differentially represented according to the ERα purification methodology, indicating that the methodologies used are complementary. While in MCF-7 cells the interactome of endogenous and over-expressed ERα essentially represents the same biological processes and cellular components, the proteins identified did not overlap, suggesting that the biological response may differ because the regulatory/participating proteins in these complexes are different. Interestingly, biological processes uniquely associated with ERα over-expressed in the HeLa cell line included the L-serine biosynthetic process, the cellular amino acid biosynthetic process, and cell redox homeostasis. In summary, all the approaches analyzed in this meta-analysis are valid and complementary, particularly for processes that occur at low frequency at normal ERα levels and can be identified only when the receptor is over-expressed. However, special effort should be put into validating these findings in cells expressing physiological ERα levels.
Measuring Values in Environmental Research: A Test of an Environmental Portrait Value Questionnaire
Bouman, Thijs; Steg, Linda; Kiers, Henk A. L.
2018-01-01
Four human values are considered to underlie individuals' environmental beliefs and behaviors: biospheric (i.e., concern for the environment), altruistic (i.e., concern for others), egoistic (i.e., concern for personal resources), and hedonic values (i.e., concern for pleasure and comfort). These values are typically measured with an adapted and shortened version of the Schwartz Value Survey (SVS), to which we refer as the Environmental-SVS (E-SVS). Despite being well validated, recent research has indicated some concerns about the SVS methodology (e.g., comprehensibility, self-presentation biases) and suggested an alternative method of measuring human values: the Portrait Value Questionnaire (PVQ). However, the PVQ has not yet been adapted and applied to measure the values most relevant to understanding environmental beliefs and behaviors. Therefore, we tested the Environmental-PVQ (E-PVQ), a PVQ variant of the E-SVS, and compared it with the E-SVS in two studies. Our findings provide strong support for the validity and reliability of both the E-SVS and E-PVQ. In addition, we find that respondents slightly preferred the E-PVQ over the E-SVS (Study 1). In general, both scales correlate similarly with environmental self-identity (Study 1), energy behaviors (Studies 1 and 2), pro-environmental personal norms, climate change beliefs, and policy support (Study 2). Accordingly, both methodologies show highly similar results and seem well suited for measuring the human values underlying environmental behaviors and beliefs. PMID:29743874
Rost, Felicitas; Luyten, Patrick; Fonagy, Peter
2018-03-01
The two-configurations model developed by Blatt and colleagues offers a comprehensive conceptual and empirical framework for understanding depression. This model suggests that depressed patients struggle, at different developmental levels, with issues related to dependency (anaclitic issues) or self-definition (introjective issues), or a combination of both. This paper reports three studies on the development and preliminary validation of the Anaclitic-Introjective Depression Assessment, an observer-rated assessment tool of impairments in relatedness and self-definition in clinical depression based on the item pool of the Shedler-Westen Assessment Procedure. Study 1 describes the development of the measure using expert consensus rating and Q-methodology. Studies 2 and 3 report the assessment of its psychometric properties, preliminary reliability, and validity in a sample of 128 patients diagnosed with treatment-resistant depression. Four naturally occurring clusters of depressed patients were identified using Q-factor analysis, which, overall, showed meaningful and theoretically expected relationships with anaclitic/introjective prototypes as formulated by experts, as well as with clinical, social, occupational, global, and relational functioning. Taken together, findings reported in this paper provide preliminary evidence for the reliability and validity of the Anaclitic-Introjective Depression Assessment, an observer-rated measure that allows the detection of important nuanced differentiations between and within anaclitic and introjective depression. Copyright © 2017 John Wiley & Sons, Ltd.
Kivekäs, Eija; Kinnunen, Ulla-Mari; Paananen, Pekka; Kälviäinen, Reetta; Haatainen, Kaisa; Saranto, Kaija
2016-01-01
A trigger is a powerful tool for identifying adverse events and measuring the level of harm caused in patient care. Studies with epilepsy patients have illustrated that using triggers as a methodology together with data mining may increase patient well-being. The purpose of this study is to test the functionality and validity of previously defined triggers in describing the status of epilepsy patients' well-being. In both medical and nursing data, the triggers described patients' well-being comprehensively. The narratives showed that the triggers overlapped. The preliminary results encourage the development of reminders for the documentation of epilepsy patients' well-being. These would provide healthcare professionals with further and more detailed information when necessary.
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The errors associated with the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of the cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces interobserver error. The present study shows that it is important to validate all methodologies in order to know their associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record, and forensic sciences. © 2017 Wiley Periodicals, Inc.
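A toy 2-D version of the idea, assuming a synthetic cusp profile: fit a polynomial to the preserved portion of the outline and extrapolate across the worn region. The published method instead derives its regression equations from unworn reference molars, so this is only a sketch.

```python
import numpy as np

# Idealized 2-D cusp outline; the tip is "worn away" and must be reconstructed
x_full = np.linspace(0, 10, 60)
true_profile = -0.08 * (x_full - 5) ** 2 + 4.0    # synthetic cusp outline
worn = (x_full > 3.5) & (x_full < 6.5)            # worn-away cuspal region

# Fit a polynomial to the preserved (unworn) portion only, then extrapolate
coeffs = np.polyfit(x_full[~worn], true_profile[~worn], deg=2)
reconstructed = np.polyval(coeffs, x_full)

err = np.abs(reconstructed[worn] - true_profile[worn]).max()
print(f"max reconstruction error over worn region: {err:.4f} (same units as y)")
```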
ERIC Educational Resources Information Center
Wu, Amery D.; Stone, Jake E.; Liu, Yan
2016-01-01
This article proposes and demonstrates a methodology for test score validation through abductive reasoning. It describes how abductive reasoning can be utilized in support of the claims made about test score validity. This methodology is demonstrated with a real data example of the Canadian English Language Proficiency Index Program…
Results of Fall 2001 Pilot: Methodology for Validation of Course Prerequisites.
ERIC Educational Resources Information Center
Serban, Andreea M.; Fleming, Steve
The purpose of this study was to test a methodology that will help Santa Barbara City College (SBCC), California, to validate the course prerequisites that fall under the category of highest level of scrutiny--data collection and analysis--as defined by the Chancellor's Office. This study gathered data for the validation of prerequisites for three…
ERIC Educational Resources Information Center
Rubilar, Álvaro Sebastián Bustos; Badillo, Gonzalo Zubieta
2017-01-01
In this article, we report how a geometric task based on the ACODESA methodology (collaborative learning, scientific debate and self-reflection) promotes the reformulation of the students' validations and allows revealing the students' aims in each of the stages of the methodology. To do so, we present the case of a team and, particularly, one of…
ERIC Educational Resources Information Center
Accardo, Amy L.; Finnegan, Elizabeth G.; Gulkus, Steven P.; Papay, Clare K.
2017-01-01
Learners with autism spectrum disorder (ASD) often exhibit difficulty in the area of reading comprehension. Research connecting the learning needs of individuals with ASD, existing effective practices, teacher training, and teacher perceptions of their own ability to teach reading comprehension is scarce. Quantitative survey methodology and…
Simulation validation and management
NASA Astrophysics Data System (ADS)
Illgen, John D.
1995-06-01
Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, they have evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique application of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper will describe the ISTI-developed methodology and how CASE tools are used in its support. Case studies will be discussed.
Pope, D; Katreniak, Z; Guha, J; Puzzolo, E; Higgerson, J; Steels, S; Woode-Owusu, M; Bruce, N; Birt, Christopher A; Ameijden, E van; Verma, A
2017-05-01
Measuring health and its determinants in urban populations is essential to effectively develop public health policies maximizing health gain within this context. Adolescents are important in this regard given that the origins of leading causes of morbidity and mortality develop pre-adulthood. Comprehensive, accurate and comparable information on adolescent urban health indicators from heterogeneous urban contexts is an important challenge. EURO-URHIS 2 aimed to develop standardized tools and methodologies for collecting data from adolescents across heterogeneous European urban contexts. Questionnaires were developed including (i) comprehensive assessment of urban health indicators from 7 pre-defined domains, (ii) use of previously validated questions from a literature review and other European surveys, (iii) translation/back-translation into European languages and (iv) piloting. Urban area-specific data collection methodologies were established through literature review, consultation and piloting. School-based surveys of 14-16-year olds (400-800 per urban area) were conducted in 13 European countries (33 urban areas). Participation rates were high (80-100%) for students from schools taking part in the surveys from all urban areas, and data quality was generally good (low rates of missing/spoiled data). Overall, 13,850 questionnaires were collected, coded and entered for EURO-URHIS 2. Dissemination included production of urban area health profiles (allowing benchmarking for a number of important public health indicators in young people) and use of visualization tools as part of the EURO-URHIS 2 project. EURO-URHIS 2 has developed standardized survey tools and methodologies for assessing key measures of health and its determinants in adolescents from heterogeneous urban contexts and demonstrated the utility of these data to public health practitioners and policy makers. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors
Epiney, A.; Canepa, S.; Zerkak, O.; ...
2016-11-02
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
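One way to picture the bookkeeping this strategy requires is a set of records keyed by the validation dimensions, so each QoI can be compared across code versions, nodalisations, and methodologies against the same measurement. The sketch below is illustrative only; all names and values are hypothetical, not STARS project data.

    # Each record tracks one run of the same transient: the validation
    # "dimensions" (code version, nodalisation, methodology) plus the
    # predicted quantities of interest (QoIs). Values are hypothetical.
    runs = [
        {"code": "TRACE v5.0p3", "nodalisation": "base",    "methodology": "imposed power",
         "qoi": {"peak_pressure_MPa": 7.21, "carryover_kg": 410.0}},
        {"code": "TRACE v5.0p5", "nodalisation": "base",    "methodology": "imposed power",
         "qoi": {"peak_pressure_MPa": 7.18, "carryover_kg": 402.0}},
        {"code": "TRACE v5.0p5", "nodalisation": "refined", "methodology": "coupled core",
         "qoi": {"peak_pressure_MPa": 7.05, "carryover_kg": 395.0}},
    ]

    measured = {"peak_pressure_MPa": 7.10, "carryover_kg": 388.0}

    # Report how each QoI drifts along the validation dimensions relative
    # to the plant measurement.
    for run in runs:
        deltas = {k: round(run["qoi"][k] - measured[k], 2) for k in measured}
        print(run["code"], run["nodalisation"], run["methodology"], deltas)

Keeping such records persistent across code releases is what allows the "critical paths" of the validation process to be identified rather than re-derived by each user.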
Testing Reading Comprehension of Theoretical Discourse with Cloze.
ERIC Educational Resources Information Center
Greene, Benjamin B., Jr.
2001-01-01
Presents evidence from a large sample of reading test scores for the validity of cloze-based assessments of reading comprehension for the discourse typically encountered in introductory college economics textbooks. Notes that results provide strong evidence that appropriately designed cloze tests permit valid assessments of reading comprehension…
Weems, Carl F.; Scott, Brandon G.; Nitiéma, Pascal; Noffsinger, Mary A.; Pfefferbaum, Rose L.; Varma, Vandana; Chakraburtty, Amarsha
2013-01-01
Background: A comprehensive review of the design principles and methodological approaches that have been used to make inferences from the research on disasters in children is needed. Objective: To identify the methodological approaches used to study children's reactions to three recent major disasters—the September 11, 2001, attacks; the 2004 Indian Ocean Tsunami; and Hurricane Katrina. Methods: This review was guided by a systematic literature search. Results: A total of 165 unduplicated empirical reports were generated by the search and examined for this review. This included 83 references on September 11, 29 on the 2004 Tsunami, and 53 on Hurricane Katrina. Conclusions: A diversity of methods has been brought to bear in understanding children's reactions to disasters. While cross-sectional studies predominate, pre-event data for some investigations emerged from archival data and data from studies examining non-disaster topics. The nature and extent of the influence of risk and protective variables beyond disaster exposure are not fully understood due, in part, to limitations in the study designs used in the extant research. Advancing an understanding of the roles of exposure and various individual, family, and social factors depends upon the extent to which measures and assessment techniques are valid and reliable, as well as on data sources and data collection designs. Comprehensive assessments that extend beyond questionnaires and checklists to include interviews and cognitive and biological measures to elucidate the negative and positive effects of disasters on children also may improve the knowledge base. PMID:24443635
Pfefferbaum, Betty; Weems, Carl F; Scott, Brandon G; Nitiéma, Pascal; Noffsinger, Mary A; Pfefferbaum, Rose L; Varma, Vandana; Chakraburtty, Amarsha
2013-08-01
A comprehensive review of the design principles and methodological approaches that have been used to make inferences from the research on disasters in children is needed. The objective was to identify the methodological approaches used to study children's reactions to three recent major disasters: the September 11, 2001, attacks; the 2004 Indian Ocean Tsunami; and Hurricane Katrina. This review was guided by a systematic literature search. A total of 165 unduplicated empirical reports were generated by the search and examined for this review. This included 83 references on September 11, 29 on the 2004 Tsunami, and 53 on Hurricane Katrina. A diversity of methods has been brought to bear in understanding children's reactions to disasters. While cross-sectional studies predominate, pre-event data for some investigations emerged from archival data and data from studies examining non-disaster topics. The nature and extent of the influence of risk and protective variables beyond disaster exposure are not fully understood due, in part, to limitations in the study designs used in the extant research. Advancing an understanding of the roles of exposure and various individual, family, and social factors depends upon the extent to which measures and assessment techniques are valid and reliable, as well as on data sources and data collection designs. Comprehensive assessments that extend beyond questionnaires and checklists to include interviews and cognitive and biological measures to elucidate the negative and positive effects of disasters on children also may improve the knowledge base.
Towards Test Driven Development for Computational Science with pFUnit
NASA Technical Reports Server (NTRS)
Rilee, Michael L.; Clune, Thomas L.
2014-01-01
Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation with finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
Krüger, H P
1989-02-01
The term "speech chronemics" is introduced to characterize a research strategy which extracts from the physical qualities of the speech signal only the pattern of ons ("speaking") and offs ("pausing"). The research in this field can be structured into the methodological dimension "unit of time", "number of speakers", and "quality of the prosodic measures". It is shown that a researcher's actual decision for one method largely determines the outcome of his study. Then, with the Logoport a new portable measurement device is presented. It enables the researcher to study speaking behavior over long periods of time (up to 24 hours) in the normal environment of his subjects. Two experiments are reported. The first shows the validity of articulation pauses for variations in the physiological state of the organism. The second study proves a new betablocking agent to have sociotropic effects: in a long-term trial socially high-strung subjects showed an improved interaction behavior (compared to placebo and socially easy-going persons) in their everyday life. Finally, the need for a comprehensive theoretical foundation and for standardization of measurement situations and methods is emphasized.
Pelletier, K R
1997-12-01
This paper is a critical review of the clinical and cost outcome evaluation studies of multifactorial, comprehensive, cardiovascular risk management programs in worksites. A comprehensive international literature search conducted under the auspices of the National Heart, Lung and Blood Institute identified 17 articles based on 12 studies that examined the clinical outcomes of multifactorial, comprehensive programs. These articles were identified through MEDLINE, manual searches of recent journals, and through direct inquiries to worksite health promotion researchers. All studies were conducted between 1978 and 1995, with 1978 being the date of the first citation of a methodologically rigorous evaluation. Of the 12 research studies, only 8 utilized the worksite as both the unit of assignment and as the unit of analysis. None of the studies analyzed adequately for cost effectiveness. Given this limitation, this review briefly considers the relevant worksite research that has demonstrated cost outcomes. Worksite-based, multifactorial cardiovascular intervention programs reviewed for this article varied widely in the comprehensiveness, intensity, and duration of both the interventions and evaluations. Results from randomized trials suggest that providing opportunities for individualized, cardiovascular risk reduction counseling for high-risk employees within the context of comprehensive programming may be the critical component of an effective worksite intervention. Despite the many limitations of the current methodologies of the 12 studies, the majority of the research to date indicates the following: (1) favorable clinical and cost outcomes; (2) that more recent and more rigorously designed research tends to support rather than refute earlier and less rigorously designed studies; and (3) that rather than interpreting the methodological flaws and diversity as inherently negative, one may consider them indicative of a robust phenomenon evident in many types of worksites, with diverse employees, differing interventions, and varying degrees of methodological sophistication. Results of these studies reviewed provide both cautious optimism about the effectiveness of these worksite programs and insights regarding the essential components and characteristics of successful programs.
NASA Technical Reports Server (NTRS)
Gault, J. W. (Editor); Trivedi, K. S. (Editor); Clary, J. B. (Editor)
1980-01-01
The validation process comprises the activities required to ensure the agreement of system realization with system specification. A preliminary validation methodology for fault-tolerant systems is documented. A general framework for a validation methodology is presented along with a set of specific tasks intended for the validation of two specimen systems, SIFT and FTMP. Two major areas of research are identified: first, those activities required to support the ongoing development of the validation process itself, and second, those activities required to support the design, development, and understanding of fault-tolerant systems.
Collins, N J; Prinsen, C A C; Christensen, R; Bartels, E M; Terwee, C B; Roos, E M
2016-08-01
To conduct a systematic review and meta-analysis to synthesize evidence regarding measurement properties of the Knee injury and Osteoarthritis Outcome Score (KOOS). A comprehensive literature search identified 37 eligible papers evaluating KOOS measurement properties in participants with knee injuries and/or osteoarthritis (OA). Methodological quality was evaluated using the COSMIN checklist. Where possible, meta-analysis of extracted data was conducted for all studies and stratified by age and knee condition; otherwise narrative synthesis was performed. KOOS has adequate internal consistency, test-retest reliability and construct validity in young and old adults with knee injuries and/or OA. The ADL subscale has better content validity for older patients and Sport/Rec for younger patients with knee injuries, while the Pain subscale is more relevant for painful knee conditions. The five-factor structure of the original KOOS is unclear. There is some evidence that the KOOS subscales demonstrate sufficient unidimensionality, but this requires confirmation. Although measurement error requires further evaluation, the minimal detectable change for KOOS subscales ranges from 14.3 to 19.6 for younger individuals, and ≥20 for older individuals. Evidence of responsiveness comes from larger effect sizes following surgical (especially total knee replacement) than non-surgical interventions. KOOS demonstrates adequate content validity, internal consistency, test-retest reliability, construct validity and responsiveness for age- and condition-relevant subscales. Structural validity, cross-cultural validity and measurement error require further evaluation, as well as construct validity of KOOS Physical function Short form. Suggested order of subscales for different knee conditions can be applied in hierarchical testing of endpoints in clinical trials. PROSPERO (CRD42011001603). Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
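The minimal detectable change (MDC) figures cited above follow from a standard psychometric relation: MDC95 = 1.96 × √2 × SEM, where SEM = SD × √(1 − r) and r is the test-retest reliability coefficient. A short sketch with hypothetical inputs (not KOOS data):

    import math

    def mdc95(sd, test_retest_r):
        """Minimal detectable change at 95% confidence from test-retest reliability."""
        sem = sd * math.sqrt(1.0 - test_retest_r)   # standard error of measurement
        return 1.96 * math.sqrt(2.0) * sem

    # Hypothetical subscale: SD = 20 points, ICC = 0.85
    print(round(mdc95(20.0, 0.85), 1))   # about 21.5 points

The √2 factor arises because a change score is the difference of two measurements, each carrying its own measurement error.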
Does Validation during Language Comprehension Depend on an Evaluative Mindset?
ERIC Educational Resources Information Center
Isberner, Maj-Britt; Richter, Tobias
2014-01-01
Whether information is routinely and nonstrategically evaluated for truth during comprehension is still a point of contention. Previous studies supporting the assumption of nonstrategic validation have used a Stroop-like paradigm in which participants provided yes/no judgments in tasks unrelated to the truth or plausibility of the experimental…
Measuring Speech Comprehensibility in Students with Down Syndrome
Woynaroski, Tiffany; Camarata, Stephen
2016-01-01
Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method: Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results: Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion: Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989
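A sketch of how the two measures relate, with all data simulated (the G coefficient itself comes from a generalizability study, which is not reproduced here): average Likert ratings across raters and segments to form the ratings-based score, then correlate it with the orthography-based proportion of utterances glossed.

    import numpy as np

    # Hypothetical Likert ratings: 10 students x 4 raters x 4 segments.
    rng = np.random.default_rng(1)
    ability = rng.uniform(1, 7, size=10)   # latent comprehensibility per student
    ratings = ability[:, None, None] + rng.normal(0, 0.8, size=(10, 4, 4))

    # Ratings-based measure: mean across the 4 raters and 4 five-minute segments.
    ratings_based = ratings.mean(axis=(1, 2))

    # Orthography-based measure: proportion of utterance attempts fully glossed,
    # simulated as a noisy monotone function of the same latent ability.
    orthography_based = np.clip(ability / 7 + rng.normal(0, 0.05, 10), 0, 1)

    r = np.corrcoef(ratings_based, orthography_based)[0, 1]
    print(f"r = {r:.2f}")   # strong correlations (r > .80) mirror the reported result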
Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
The confronting dichotomous view between experimental/quasi-experimental and non-experimental/ethnographic studies still exists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed based on experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is continually changing and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we carry out a review of the literature in order to analyze the convergence of design elements bearing on methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice across different methodologies, from a practical and complementary methodological viewpoint. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.
An economic analysis methodology for project evaluation and programming.
DOT National Transportation Integrated Search
2013-08-01
Economic analysis is a critical component of a comprehensive project or program evaluation methodology that considers all key quantitative and qualitative impacts of highway investments. It allows highway agencies to identify, quantify, and value t...
Brown, David; Cuccurullo, Sara; Lee, Joseph; Petagna, Ann; Strax, Thomas
2008-08-01
This project sought to create an educational module including evaluation methodology to instruct physical medicine and rehabilitation (PM&R) residents in electrodiagnostic evaluation of patients with neuromuscular problems, and to verify acquired competencies in those electrodiagnostic skills through objective evaluation methodology. Sixteen residents were trained by board-certified neuromuscular and electrodiagnostic medicine physicians through technical training, lectures, and review of self-assessment examination (SAE) concepts from the American Academy of Physical Medicine & Rehabilitation syllabus provided in the Archives of Physical Medicine and Rehabilitation. After delivery of the educational module, knowledge acquisition and skill attainment were measured in (1) clinical skill in diagnostic procedures via a procedure checklist, (2) diagnosis and ability to design a patient-care management plan via chart simulated recall (CSR) exams, (3) physician/patient interaction via patient surveys, (4) physician/staff interaction via 360-degree global ratings, and (5) ability to write a comprehensive patient-care report and to document a patient-care management plan in accordance with Medicare guidelines via written patient reports. Assessment tools developed for this program address the basic competencies outlined by the Accreditation Council for Graduate Medical Education (ACGME). To test the success of the standardized educational module, data were collected on an ongoing basis. Objective measures compared resident SAE scores in electrodiagnostics (EDX) before and after institution of the comprehensive EDX competency module in a PM&R residency program. Fifteen of 16 residents (94%) successfully demonstrated proficiency in every segment of the evaluation element of the educational module by the end of their PGY-4 electrodiagnostic rotation. The resident who did not initially pass underwent remedial coursework and passed on the second attempt. Furthermore, the residents' proficiency as demonstrated by the evaluation after implementation of the standardized educational module positively correlated with an increase in resident SAE scores in EDX compared with resident scores before implementation of the educational module. Resident proficiency in EDX medicine skills and knowledge was objectively verified after completion of the standardized educational module. Validation of the assessment tools is evidenced by collected data correlating with significantly improved SAE scores and American Association of Neuromuscular and Electrodiagnostic Medicine (AANEM) exam scores, as outlined in the results section. In addition, the clinical development tool (procedure checklist) was validated by residents being individually observed performing skills and deemed competent by an AANEM-certified physician. The standardized educational module and evaluation methodology provide a potential framework for the definition of baseline competency in the clinical skill area of EDX.
ERIC Educational Resources Information Center
Osler, James Edward, II
2015-01-01
This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a form of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…
The quest for an accurate accounting of public health expenditures.
Atchison, C; Barry, M A; Kanarek, N; Gebbie, K
2000-09-01
This article describes one effort to develop management tools that will help public health administrators and policy makers implement comprehensive public health strategies. It recounts the ongoing development of a methodology through which the Essential Public Health Services can be related to public health budgets, appropriations, and expenditures. Through three pilot projects involving: (1) nine state health agencies, (2) three local health agencies, and (3) all local jurisdictions and the state health agency in one state, a workable methodology for identifying public expenditures for comprehensive public health programming has been identified.
Thurber, Raymond; Read, Linda Eklof
2008-01-01
This article describes how education specialists from a 359-bed acute care hospital in the Northeast developed and implemented a comprehensive educational plan to prepare all staff members on the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) tracer methodology and upcoming triennial survey. This methodology can be utilized by staff development educators in any setting to not only prepare staff members for a successful JCAHO survey but also to meet or exceed JCAHO standards in one's everyday job.
A systematic review of the quality of homeopathic pathogenetic trials published from 1945 to 1995.
Dantas, F; Fisher, P; Walach, H; Wieland, F; Rastogi, D P; Teixeira, H; Koster, D; Jansen, J P; Eizayaga, J; Alvarez, M E P; Marim, M; Belon, P; Weckx, L L M
2007-01-01
The quality of information gathered from homeopathic pathogenetic trials (HPTs), also known as 'provings', is fundamental to homeopathy. We systematically reviewed HPTs published in six languages (English, German, Spanish, French, Portuguese and Dutch) from 1945 to 1995, to assess their quality in terms of the validity of the information they provide. The literature was comprehensively searched; only published reports of HPTs were included. Information was extracted by two reviewers per trial using a form with 87 items. Information on medicines, volunteers, ethical aspects, blinding, randomization, use of placebo, adverse effects, assessments, presentation of data and number of claimed findings was recorded. Methodological quality was assessed by an index including indicators of internal and external validity, personal judgement and comments of reviewers for each study. 156 HPTs on 143 medicines, involving 2815 volunteers, produced 20,538 pathogenetic effects (median 6.5 per volunteer). There was wide variation in methods and results. Sample size (median 15, range 1-103) and trial duration (mean 34 days) were very variable. Most studies had design flaws, particularly absence of proper randomization, blinding, placebo control and criteria for analysis of outcomes. Mean methodological score was 5.6 (range 4-16). More symptoms were reported from HPTs of poor quality than from better ones. In 56% of trials volunteers took placebo. Pathogenetic effects were claimed in 98% of publications. On average about 84% of volunteers receiving active treatment developed symptoms. The quality of reports was in general poor, and much important information was not available. The HPTs were generally of low methodological quality. There is a high incidence of pathogenetic effects in publications and volunteers but this could be attributable to design flaws. Homeopathic medicines, tested in HPTs, appear safe. The central question of whether homeopathic medicines in high dilutions can provoke effects in healthy volunteers has not yet been definitively answered, because of methodological weaknesses of the reports. Improvement of the method and reporting of results of HPTs are required. References to all included RCTs are available on-line at.
The CPT Reading Comprehension Test: A Validity Study.
ERIC Educational Resources Information Center
Napoli, Anthony R.; Raymond, Lanette A.; Coffey, Cheryl A.; Bosco, Diane M.
1998-01-01
Describes a study done at Suffolk County Community College (New York) that assessed the validity of the College Board's Computerized Placement Test in Reading Comprehension (CPT-R) by comparing test results of 1,154 freshmen with the results of the Degree of Power Reading Test. Results confirmed the CPT-R's reliability in identifying basic…
Developing and Validating Proof Comprehension Tests in Undergraduate Mathematics
ERIC Educational Resources Information Center
Mejía-Ramos, Juan Pablo; Lew, Kristen; de la Torre, Jimmy; Weber, Keith
2017-01-01
In this article, we describe and illustrate the process by which we developed and validated short, multiple-choice, reliable tests to assess undergraduate students' comprehension of three mathematical proofs. We discuss the purpose for each stage and how it benefited the design of our instruments. We also suggest ways in which this process could…
Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S
2016-09-30
A comprehensive strategy involving the use of a mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted of microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MCs and PVs contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.
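The response-surface step can be illustrated with a toy quadratic model. The factors, design points, and response values below are hypothetical; the actual study models microemulsion mixture components and process variables jointly, which requires a dedicated MPV design rather than the simple two-factor layout shown here.

    import numpy as np

    # Hypothetical design: two coded factors (e.g., a mixture proportion and a
    # voltage), response = critical resolution between diclofenac and its analogue.
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
    y = np.array([1.2, 1.8, 1.5, 2.6, 2.1, 2.0, 1.6, 2.4, 1.7, 2.2])

    # Quadratic response-surface model:
    # b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(x1, x2):
        return beta @ [1.0, x1, x2, x1 * x2, x1**2, x2**2]

    print(round(predict(0.5, 0.8), 2))   # predicted resolution at a candidate setting

Contour plots of such a fitted surface are what allow the method's design space and robust operating region to be mapped.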
Methodological challenges of validating a clinical decision-making tool in the practice environment.
Brennan, Caitlin W; Daly, Barbara J
2015-04-01
Validating a measurement tool intended for use in the practice environment poses challenges that may not be present when validating a tool intended solely for research purposes. The aim of this article is to describe the methodological challenges of validating a clinical decision-making tool, the Oncology Acuity Tool, which nurses use to make nurse assignment and staffing decisions prospectively each shift. Data were derived from a larger validation study, during which several methodological challenges arose. Revisions to the tool, including conducting iterative feedback cycles with end users, were necessary before the validation study was initiated. The "true" value of patient acuity is unknown, and thus, two approaches to inter-rater reliability assessment were used. Discordant perspectives existed between experts and end users. Balancing psychometric rigor with clinical relevance may be achieved through establishing research-practice partnerships, seeking active and continuous feedback with end users, and weighing traditional statistical rules of thumb with practical considerations. © The Author(s) 2014.
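Because the "true" acuity value is unknown, agreement between raters has to stand in for accuracy. A sketch of two standard approaches (exact agreement and the intraclass correlation ICC(2,1); the data are hypothetical, and the article's actual choice of coefficients is not specified here):

    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random effects, absolute agreement, single rater."""
        n, k = scores.shape
        grand = scores.mean()
        ssb = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between patients
        ssc = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
        sst = ((scores - grand) ** 2).sum()
        msr, msc = ssb / (n - 1), ssc / (k - 1)
        mse = (sst - ssb - ssc) / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical acuity scores: 8 patients rated by 2 nurses on a 1-5 scale.
    scores = np.array([[3, 3], [5, 4], [2, 2], [4, 4],
                       [1, 2], [5, 5], [3, 4], [2, 2]], float)
    exact = (scores[:, 0] == scores[:, 1]).mean()
    print(f"exact agreement = {exact:.2f}, ICC(2,1) = {icc_2_1(scores):.2f}")

The two coefficients can disagree: exact agreement is sensitive to the scale's granularity, while the ICC credits raters for ranking patients consistently even when their absolute scores differ slightly.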
Causal Interpretations of Psychological Attributes
ERIC Educational Resources Information Center
Kane, Mike
2017-01-01
In the article "Rethinking Traditional Methods of Survey Validation" Andrew Maul describes a minimalist validation methodology for survey instruments, which he suggests is widely used in some areas of psychology and then critiques this methodology empirically and conceptually. He provides a reduction ad absurdum argument by showing that…
ERIC Educational Resources Information Center
Paribakht, T. Sima; Wesche, Marjorie Bingham
A study investigated the role of comprehension of meaningful language input in young adults' second language learning, focusing on: (1) what kinds of measurement instruments and procedures can be used in tracking student gains in specific aspects of target language proficiency; (2) development of a reliable self-report scale capturing different…
Factors affecting reproducibility between genome-scale siRNA-based screens
Barrows, Nicholas J.; Le Sommer, Caroline; Garcia-Blanco, Mariano A.; Pearson, James L.
2011-01-01
RNA interference-based screening is a powerful new genomic technology which addresses gene function en masse. To evaluate factors influencing hit list composition and reproducibility, we performed two identically designed small interfering RNA (siRNA)-based, whole genome screens for host factors supporting yellow fever virus infection. These screens represent two separate experiments completed five months apart and allow the direct assessment of the reproducibility of a given siRNA technology when performed in the same environment. Candidate hit lists generated by sum rank, median absolute deviation, z-score, and strictly standardized mean difference were compared within and between whole genome screens. Application of these analysis methodologies within a single screening dataset using a fixed threshold equivalent to a p-value ≤ 0.001 resulted in hit lists ranging from 82 to 1,140 members and highlighted the tremendous impact analysis methodology has on hit list composition. Intra- and inter-screen reproducibility was significantly influenced by the analysis methodology and ranged from 32% to 99%. This study also highlighted the power of testing at least two independent siRNAs for each gene product in primary screens. To facilitate validation we conclude by suggesting methods to reduce false discovery at the primary screening stage. In this study we present the first comprehensive comparison of multiple analysis strategies, and demonstrate the impact of the analysis methodology on the composition of the “hit list”. Therefore, we propose that the entire dataset derived from functional genome-scale screens, especially if publicly funded, should be made available as is done with data derived from gene expression and genome-wide association studies. PMID:20625183
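Two of the analysis methodologies compared above can be sketched as follows, using simulated data (SSMD and sum rank need within-screen replicate values and are omitted; the threshold and the Jaccard-style overlap definition are illustrative choices, not the paper's exact settings). The point the sketch makes is the same as the study's: hit-list size and inter-screen overlap shift with the scoring rule.

    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical normalized infection readouts for 1,000 genes, 2 replicate screens.
    screen1 = rng.normal(0, 1, 1000)
    screen2 = 0.6 * screen1 + 0.8 * rng.normal(0, 1, 1000)   # imperfectly reproducible

    def zscore_hits(x, cutoff=2.0):
        z = (x - x.mean()) / x.std()
        return set(np.where(z <= -cutoff)[0])       # strong inhibition of infection

    def mad_hits(x, cutoff=2.0):
        med = np.median(x)
        mad = 1.4826 * np.median(np.abs(x - med))   # robust sigma estimate
        return set(np.where((x - med) / mad <= -cutoff)[0])

    for name, hits in [("z-score", zscore_hits), ("MAD", mad_hits)]:
        h1, h2 = hits(screen1), hits(screen2)
        overlap = len(h1 & h2) / max(len(h1 | h2), 1)
        print(name, len(h1), len(h2), f"overlap = {overlap:.0%}")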
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. The design was a survey of published systematic reviews; inclusion criteria were systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head to head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
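The "classic frequentist" adjusted indirect comparison referred to above is commonly the Bucher method: the relative effect of A versus B is obtained from their separate comparisons against a common comparator C, and the variances add. A minimal sketch with hypothetical effect estimates, valid only when the trial-similarity assumption holds:

    import math

    def adjusted_indirect(d_ac, se_ac, d_bc, se_bc):
        """Bucher adjusted indirect comparison of A vs B via common comparator C."""
        d_ab = d_ac - d_bc                       # e.g., difference of log odds ratios
        se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances of independent estimates add
        ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
        return d_ab, se_ab, ci

    # Hypothetical log odds ratios from two sets of placebo-controlled trials.
    d_ab, se_ab, ci = adjusted_indirect(-0.40, 0.12, -0.15, 0.10)
    print(f"A vs B: {d_ab:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")

The widened confidence interval relative to either direct comparison is the statistical price of the indirect route, which is one reason the reviewed reviews are faulted for not checking the underlying assumptions.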
Manheim, F.T.; Buchholtz ten Brink, Marilyn R.; Mecray, E.L.
1998-01-01
A comprehensive database of sediment chemistry and environmental parameters has been compiled for Boston Harbor and Massachusetts Bay. This work illustrates methodologies for rescuing and validating sediment data from heterogeneous historical sources. It greatly expands spatial and temporal data coverage of estuarine and coastal sediments. The database contains about 3500 samples containing inorganic chemical, organic, texture and other environmental data dating from 1955 to 1994. Cooperation with local and federal agencies as well as universities was essential in locating and screening documents for the database. More than 80% of references utilized came from sources with limited distribution (gray literature). Task sharing was facilitated by a comprehensive and clearly defined data dictionary for sediments. It also served as a data entry template and flat file format for data processing and as a basis for interpretation and graphical illustration. Standard QA/QC protocols are usually inapplicable to historical sediment data. In this work outliers and data quality problems were identified by batch screening techniques that also provide visualizations of data relationships and geochemical affinities. No data were excluded, but qualifying comments warn users of problem data. For Boston Harbor, the proportion of irreparable or seriously questioned data was remarkably small (<5%), although concentration values for metals and organic contaminants spanned 3 orders of magnitude for many elements or compounds. Data from the historical database provide alternatives to dated cores for measuring changes in surficial sediment contamination level with time. The data indicate that spatial inhomogeneity in harbor environments can be large with respect to sediment-hosted contaminants. Boston Inner Harbor surficial sediments showed decreases in concentrations of Cu, Hg, and Zn of 40 to 60% over a 17-year period.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
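A common way to build a POD model from hit/miss detection outcomes is a log-logistic fit; the sketch below uses entirely hypothetical data and plain gradient ascent in place of a statistics package, and is not the paper's specific POD formulation.

    import numpy as np

    # Hypothetical hit/miss data: crack sizes (mm) and whether the PZT-based
    # feature set detected them.
    sizes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5])
    detected = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

    # Log-logistic POD model: POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))),
    # fitted by gradient ascent on the Bernoulli log-likelihood.
    x = np.log(sizes)
    b0, b1 = 0.0, 1.0
    for _ in range(5000):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += 0.01 * (detected - p).sum()          # d(log-lik)/d b0
        b1 += 0.01 * ((detected - p) * x).sum()    # d(log-lik)/d b1

    a90 = np.exp((np.log(9.0) - b0) / b1)          # size detected with POD = 0.9
    print(f"a90 is roughly {a90:.2f} mm")

Quantities such as a90 (and, with confidence bounds, a90/95) are what feed the probabilistic fatigue life prediction the abstract describes.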
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and - finally - how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
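In the spirit of the global metric described (the published formulation differs in detail; the values and weights below are hypothetical), per-observable disagreement can be normalized by the combined uncertainty and averaged with weights reflecting how directly each observable constrains the model:

    # Hypothetical per-observable comparison: experimental value, simulation value,
    # combined uncertainty, and a weight that is lower for heavily processed
    # (less direct) quantities.
    observables = [
        # (experiment, simulation, combined_sigma, weight)
        (1.00, 0.90, 0.08, 1.0),    # e.g., density profile amplitude
        (0.45, 0.60, 0.10, 0.7),    # e.g., fluctuation level
        (2.30, 2.10, 0.30, 0.5),    # e.g., derived flux
    ]

    # Per-observable level of disagreement mapped to [0, 1] via a normalized
    # distance, then combined into a single weighted metric.
    num = den = 0.0
    for exp, sim, sigma, w in observables:
        d = abs(exp - sim) / sigma      # distance in units of combined uncertainty
        level = min(d / 3.0, 1.0)       # saturate beyond 3 sigma
        num += w * level
        den += w
    chi = num / den
    print(f"composite disagreement chi = {chi:.2f}  (0 = perfect, 1 = no agreement)")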
NASA Astrophysics Data System (ADS)
Zhafirah Muhammad, Nurul; Harun, A.; Hambali, N. A. M. A.; Murad, S. A. Z.; Mohyar, S. N.; Isa, M. N.; Jambek, AB
2017-11-01
Increased demand for internet of things (IoT) applications has forced a move towards higher complexity in the integrated circuits supporting SoC designs. This increase in complexity poses correspondingly complicated validation challenges, and it has led researchers to develop a variety of methodologies to address them, notably dynamic verification, formal verification and hybrid techniques. It is important to discover bugs early in the SoC verification process in order to reduce effort and achieve a fast time to market for the system. This paper therefore focuses on verification methodology that can be applied at the register transfer level (RTL) of an SoC based on the AMBA bus design. The Open Verification Methodology (OVM) offers an easier route to RTL validation, not as a replacement for the traditional method but as a means of achieving faster time to market. Thus, OVM is proposed in this paper as the verification method for larger designs, helping to avoid bottlenecks in the validation platform.
Shea, Beverley J; Grimshaw, Jeremy M; Wells, George A; Boers, Maarten; Andersson, Neil; Hamel, Candyce; Porter, Ashley C; Tugwell, Peter; Moher, David; Bouter, Lex M
2007-02-15
Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
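As a rough illustration of the component-extraction step (simulated 0/1 item responses; the real AMSTAR item reduction combined factor analysis with expert judgment in a nominal group, not loadings alone):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical data: 150 systematic reviews scored on 37 quality items.
    rng = np.random.default_rng(3)
    latent = rng.normal(size=(150, 11))                 # 11 underlying components
    loadings = rng.normal(scale=0.6, size=(11, 37))
    items = (latent @ loadings + rng.normal(size=(150, 37)) > 0).astype(float)

    fa = FactorAnalysis(n_components=11, random_state=0).fit(items)

    # One representative item per component: the item loading most strongly on it.
    for comp, row in enumerate(fa.components_):
        print(f"component {comp + 1}: item {int(np.argmax(np.abs(row))) + 1}")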
Bolte, Gabriele; David, Madlen; Dębiak, Małgorzata; Fiedel, Lotta; Hornberg, Claudia; Kolossa-Gehring, Marike; Kraus, Ute; Lätzsch, Rebecca; Paeck, Tatjana; Palm, Kerstin; Schneider, Alexandra
2018-06-01
The comprehensive consideration of sex/gender in health research is essential to increase the relevance and validity of research results. In contrast to other areas of health research, there is no systematic summary of the current state of research on the significance of sex/gender in environmental health. Within the interdisciplinary research network Sex/Gender-Environment-Health (GeUmGe-NET), the current state of integration of sex/gender aspects and gender-theoretical concepts into research was systematically assessed within selected topics of the research areas environmental toxicology, environmental medicine, environmental epidemiology and public health research on environment and health. Knowledge gaps and research needs were identified in all research areas. Furthermore, the potential for methodological advancements through the use of gender-theoretical concepts was depicted. A dialogue between biomedical research, public health research, and gender studies was started with the research network GeUmGe-NET. This dialogue has to be continued, particularly regarding joint testing of methodological innovations in data collection and data analysis. Insights from this interdisciplinary research are relevant for practice areas such as environmental health protection, health promotion, environmental justice, and environmental health monitoring.
Cognition in multiple sclerosis
Benedict, Ralph; Enzinger, Christian; Filippi, Massimo; Geurts, Jeroen J.; Hamalainen, Paivi; Hulst, Hanneke; Inglese, Matilde; Leavitt, Victoria M.; Rocca, Maria A.; Rosti-Otajarvi, Eija M.; Rao, Stephen
2018-01-01
Cognitive decline is recognized as a prevalent and debilitating symptom of multiple sclerosis (MS), especially deficits in episodic memory and processing speed. The field aims to (1) incorporate cognitive assessment into standard clinical care and clinical trials, (2) utilize state-of-the-art neuroimaging to more thoroughly understand neural bases of cognitive deficits, and (3) develop effective, evidence-based, clinically feasible interventions to prevent or treat cognitive dysfunction, which are lacking. There are obstacles to these goals. Our group of MS researchers and clinicians with varied expertise took stock of the current state of the field, and we identify several important practical and theoretical challenges, including key knowledge gaps and methodologic limitations related to (1) understanding and measurement of cognitive deficits, (2) neuroimaging of neural bases and correlates of deficits, and (3) development of effective treatments. This is not a comprehensive review of the extensive literature, but instead a statement of guidelines and priorities for the field. For instance, we provide recommendations for improving the scientific basis and methodologic rigor for cognitive rehabilitation research. Toward this end, we call for multidisciplinary collaborations toward development of biologically based theoretical models of cognition capable of empirical validation and evidence-based refinement, providing the scientific context for effective treatment discovery. PMID:29343470
BTC method for evaluation of remaining strength and service life of bridge cables.
DOT National Transportation Integrated Search
2011-09-01
"This report presents the BTC method; a comprehensive state-of-the-art methodology for evaluation of remaining : strength and residual life of bridge cables. The BTC method is a probability-based, proprietary, patented, and peerreviewed : methodology...
Technology Assessment for Powertrain Components Final Report CRADA No. TC-1124-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokarz, F.; Gough, C.
LLNL utilized its defense technology assessment methodologies in combination with its capabilities in the energy, manufacturing, and transportation technologies to demonstrate a methodology that synthesized available but incomplete information on advanced automotive technologies into a comprehensive framework.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
ERIC Educational Resources Information Center
Spencer, Trina D.; Goldstein, Howard; Kelley, Elizabeth Spencer; Sherman, Amber; McCune, Luke
2017-01-01
Despite research demonstrating the importance of language comprehension to later reading abilities, curriculum-based measures to assess language comprehension abilities in preschoolers remain lacking. The Assessment of Story Comprehension (ASC) features brief, child-relevant stories and a series of literal and inferential questions with a focus on…
Tularosa Basin Play Fairway Analysis: Methodology Flow Charts
Adam Brandt
2015-11-15
These images show the comprehensive methodology used for creation of a Play Fairway Analysis to explore the geothermal resource potential of the Tularosa Basin, New Mexico. The deterministic methodology was originated by the petroleum industry, but was custom-modified to function as a knowledge-based geothermal exploration tool. The stochastic PFA flow chart uses weights of evidence, and is data-driven.
de Klerk, Susan; Buchanan, Helen; Jerosch-Herold, Christina
Systematic review. The Disabilities of the Arm, Shoulder and Hand Questionnaire has multiple language versions from many countries around the world, and there is extensive research evidence of its psychometric properties. The purpose of this study was to systematically review the evidence available on the validity and clinical utility of the Disabilities of the Arm, Shoulder and Hand as a measure of activity and participation in patients with musculoskeletal hand injuries in developing-country contexts. We registered the review with the International Prospective Register of Systematic Reviews prior to conducting a comprehensive literature search and extracting descriptive data. Two reviewers independently assessed methodological quality with the Consensus-Based Standards for the Selection of Health Measurement Instruments critical appraisal tool, the checklist to operationalize measurement characteristics of patient-rated outcome measures, and the multidimensional model of clinical utility. Fourteen studies reporting 12 language versions met the eligibility criteria. Two language versions (Persian and Turkish) had an overall rating of good, and one (Thai) had an overall rating of excellent for cross-cultural validity. The remaining 9 language versions had an overall poor rating for cross-cultural validity. Content and construct validity and clinical utility yielded similar results. Poor quality ratings for validity and clinical utility were due to insufficient documentation of results and inadequate psychometric testing. With the increase in migration and globalization, hand therapists are likely to require a range of culturally adapted and translated versions of the Disabilities of the Arm, Shoulder and Hand. Recommendations include rigorous application and reporting of cross-cultural adaptation, appropriate psychometric testing, and testing of clinical utility in routine clinical practice. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Moru, Eunice Kolitsoe; Nchejane, John; Ramollo, Motlatsi; Rammea, Lisema
2017-01-01
The reported study explored undergraduate science students' validation and comprehension of written proofs, reasons given either to accept or reject mathematical procedures employed in the proofs, and the difficulties students encountered in reading the proofs. The proofs were constructed using both the Comparison and the Integral tests in the…
Validity Evidence for the Test of Silent Reading Efficiency and Comprehension (TOSREC)
ERIC Educational Resources Information Center
Johnson, Evelyn S.; Pool, Juli L.; Carter, Deborah R.
2011-01-01
An essential component of a response to intervention (RTI) framework is a screening process that is both accurate and efficient. The purpose of this study was to analyze the validity evidence for the "Test of Silent Reading Efficiency and Comprehension" (TOSREC) to determine its potential for use within a screening process. Participants included…
Confirmatory Factor Analysis of the TerraNova Comprehensive Tests of Basic Skills/5
ERIC Educational Resources Information Center
Stevens, Joseph J.; Zvoch, Keith
2007-01-01
Confirmatory factor analysis was used to explore the internal validity of scores on the TerraNova Comprehensive Tests of Basic Skills/5 using samples from a southwestern school district and standardization samples reported by the publisher. One of the strengths claimed for battery-type achievement tests is provision of reliable and valid samples…
Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation
ERIC Educational Resources Information Center
Richter, Tobias; Maier, Johanna
2017-01-01
In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length until the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation (R²) value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
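As one concrete reading of the DBH step, a RANSAC circle fit repeatedly samples three stem points from a breast-height slice, fits the circle through them, and keeps the fit with the most inliers. The sketch below is a generic implementation under assumed tolerances, not the authors' code; the point coordinates are synthetic.

```python
import numpy as np

def fit_circle(p1, p2, p3):
    """Circle (center, radius) through three 2D points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(p1))

def ransac_dbh(xy, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
    """RANSAC circle fit on a breast-height point slice; diameter in metres."""
    best_inliers, best = 0, None
    for _ in range(n_iter):
        sample = xy[rng.choice(len(xy), 3, replace=False)]
        fit = fit_circle(*sample)
        if fit is None:
            continue
        center, r = fit
        inliers = np.sum(np.abs(np.linalg.norm(xy - center, axis=1) - r) < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, 2.0 * r
    return best

# Usage on a synthetic 20 cm height bin: noisy points on a 0.25 m radius stem.
theta = np.random.default_rng(1).uniform(0, 2 * np.pi, 200)
xy = np.c_[0.25 * np.cos(theta), 0.25 * np.sin(theta)] + \
     np.random.default_rng(2).normal(0, 0.005, (200, 2))
print(f"estimated DBH ~ {ransac_dbh(xy):.3f} m")  # expect ~0.50 m
```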
Manterola, Carlos; Torres, Rodrigo; Burgos, Luis; Vial, Manuel; Pineda, Viviana
2006-07-01
Surgery is a curative treatment for gastric cancer (GC). As relapse is frequent, adjuvant therapies such as postoperative chemoradiotherapy have been tried. In Chile, some hospitals adopted Macdonald's study as a protocol for the treatment of GC. To determine the methodological quality and the internal and external validity of the Macdonald study, three instruments that assess methodological quality were applied. A critical appraisal was done, and internal and external validity were analyzed with two scales: MINCIR (Methodology and Research in Surgery), valid for therapy studies, and CONSORT (Consolidated Standards of Reporting Trials), valid for randomized controlled trials (RCT). Guides and scales were applied by 5 researchers with training in clinical epidemiology. The reader's guide verified that the Macdonald study was not directed to answer a clearly defined question. There was random assignment, but the method used is not described and the patients were not followed until the end of the study (36% of the group with surgery plus chemoradiotherapy did not complete treatment). The MINCIR scale confirmed a multicentric RCT, not blinded, with an unclear randomization sequence, erroneous sample size estimation, vague objectives and no exclusion criteria. The CONSORT system showed the lack of a working hypothesis and specific objectives, the absence of exclusion criteria and of identification of the primary variable, an imprecise estimation of sample size, ambiguities in the randomization process, no blinding, an absence of statistical adjustment and the omission of a subgroup analysis. The instruments applied demonstrated methodological shortcomings that compromise the internal and external validity of the Macdonald study.
Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.
The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter's cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.
Climatological Processing and Product Development for the TRMM Ground Validation Program
NASA Technical Reports Server (NTRS)
Marks, D. A.; Kulie, M. S.; Robinson, M.; Silberstein, D. S.; Wolff, D. B.; Ferrier, B. S.; Amitai, E.; Fisher, B.; Wang, J.; Augustine, D.;
2000-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized rainfall products using quality-controlled ground-based radar data from the four primary GV sites. This presentation will provide an overview of TRMM GV climatological processing and product generation. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps, will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results involving monthly accumulation, Z-R relationship, and gauge statistics for each primary GV site will also be displayed.
The Comprehension and Validation of Social Information.
ERIC Educational Resources Information Center
Wyer, Robert S., Jr.; Radvansky, Gabriel A.
1999-01-01
Proposes a theory of social cognition to account for the comprehension and verification of social information. The theory views comprehension as a process of constructing situation models of new information on the basis of previously formed models about its referents. The comprehension of both single statements and multiple pieces of information…
Collecting and validating experiential expertise is doable but poses methodological challenges.
Burda, Marika H F; van den Akker, Marjan; van der Horst, Frans; Lemmens, Paul; Knottnerus, J André
2016-04-01
To give an overview of important methodological challenges in collecting, validating, and further processing experiential expertise and how to address these challenges. Based on our own experiences in studying the concept, operationalization, and contents of experiential expertise, we have formulated methodological issues regarding the inventory and application of experiential expertise. The methodological challenges can be categorized in six developmental research stages, comprising the conceptualization of experiential expertise, methods to harvest experiential expertise, the validation of experiential expertise, evaluation of the effectiveness, how to translate experiential expertise into acceptable guidelines, and how to implement these. The description of methodological challenges and ways to handle those are illustrated using diabetes mellitus as an example. Experiential expertise can be defined and operationalized in terms of successful illness-related behaviors and translated into recommendations regarding life domains. Pathways have been identified to bridge the gaps between the world of patients' daily lives and the medical world. Copyright © 2016 Elsevier Inc. All rights reserved.
Taylor, Rachel M; Fern, Lorna A; Solanki, Anita; Hooker, Louise; Carluccio, Anna; Pye, Julia; Jeans, David; Frere-Smith, Tom; Gibson, Faith; Barber, Julie; Raine, Rosalind; Stark, Dan; Feltbower, Richard; Pearce, Susie; Whelan, Jeremy S
2015-07-28
Patient experience is increasingly used as an indicator of high quality care in addition to more traditional clinical end-points. Surveys are generally accepted as appropriate methodology to capture patient experience. No validated patient experience surveys exist specifically for adolescents and young adults (AYA) aged 13-24 years at diagnosis with cancer. This paper describes early work undertaken to develop and validate a descriptive patient experience survey for AYA with cancer that encompasses both their cancer experience and age-related issues. We aimed to develop, with young people, an experience survey meaningful and relevant to AYA to be used in a longitudinal cohort study (BRIGHTLIGHT), ensuring high levels of acceptability to maximise study retention. A three-stage approach was employed: Stage 1 involved developing a conceptual framework, conducting literature/Internet searches and establishing content validity of the survey; Stage 2 confirmed the acceptability of methods of administration and consisted of four focus groups involving 11 young people (14-25 years), three parents and two siblings; and Stage 3 established survey comprehension through telephone-administered cognitive interviews with a convenience sample of 23 young people aged 14-24 years. Stage 1: Two hundred and thirty-eight questions were developed from qualitative reports of young people's cancer and treatment-related experience. Stage 2: The focus groups identified three core themes: (i) issues directly affecting young people, e.g. impact of treatment-related fatigue on ability to complete the survey; (ii) issues relevant to the actual survey, e.g. ability to answer questions anonymously; (iii) administration issues, e.g. confusing format in some supporting documents. Stage 3: Cognitive interviews indicated high levels of comprehension requiring minor survey amendments. Collaborating with young people with cancer has enabled a survey to be developed that is not only meaningful to young people but also examines patient experience and outcomes associated with specialist cancer care. Engagement of young people throughout the survey development has ensured the content appropriately reflects their experience and is easily understood. The BRIGHTLIGHT survey was developed for a specific research project but has the potential to be used as a TYA cancer survey to assess patient experience and the care young people receive.
Apfelbacher, Christian J; Heinl, Daniel; Prinsen, Cecilia A C; Deckert, Stefanie; Chalmers, Joanne; Ofenloch, Robert; Humphreys, Rosemary; Sach, Tracey; Chamlin, Sarah; Schmitt, Jochen
2015-04-16
Eczema is a common chronic or chronically relapsing skin disease that has a substantial impact on quality of life (QoL). By means of a consensus-based process, the Harmonising Outcome Measures in Eczema (HOME) initiative has identified QoL as one of the four core outcome domains to be assessed in all eczema trials (Allergy 67(9):1111-7, 2012). Various measurement instruments exist to measure QoL in adults with eczema, but there is a great variability in both content and quality (for example, reliability and validity) of the instruments used, and it is not always clear if the best instrument is being used. Therefore, the aim of the proposed research is a comprehensive systematic assessment of the measurement properties of the existing measurement instruments that were developed and/or validated for the measurement of patient-reported QoL in adults with eczema. This study is a systematic review of the measurement properties of patient-reported measures of QoL developed and/or validated for adults with eczema. Medline via PubMed and EMBASE will be searched using a selection of relevant search terms. Eligible studies will be primary empirical studies evaluating, describing, or comparing measurement properties of QoL instruments for adult patients with eczema. Eligibility assessment and data abstraction will be performed independently by two reviewers. Evidence tables will be generated for study characteristics, instrument characteristics, measurement properties, and interpretability. The quality of the measurement properties will be assessed using predefined criteria. Methodological quality of studies will be assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. A best evidence synthesis will be undertaken if more than one study has investigated a particular measurement property. The proposed systematic review will produce a comprehensive assessment of measurement properties of existing QoL instruments in adult patients with eczema. We aim to identify one best currently available instrument to measure QoL in eczema patients. PROSPERO CRD42015017138.
Bio-Optical Measurement and Modeling of the California Current and Polar Oceans
NASA Technical Reports Server (NTRS)
Mitchell, B. Greg; Fargion, Giulietta S. (Technical Monitor)
2001-01-01
The principal goals of our research are to validate standard or experimental products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with advanced radiative transfer modeling to contribute to satellite vicarious radiometric calibration and advanced algorithm development. To achieve our goals requires continued efforts to execute complex field programs globally, as well as development of advanced ocean optical measurement protocols. We completed a comprehensive set of ocean optical observations in the California Current, Southern Ocean, and Indian Ocean, requiring a large commitment to instrument calibration, measurement protocols, data processing and data merger. We augmented separately funded projects of our own, as well as others, to acquire the in situ data sets we have collected on various global cruises supported by separate grants or contracts. In collaboration with major oceanographic ship-based observation programs funded by various agencies (CalCOFI, US JGOFS, NOAA AMLR, INDOEX and Japan/East Sea), our SIMBIOS effort has resulted in data from diverse bio-optical provinces. For these global deployments we generate a high-quality, methodologically consistent data set encompassing a wide range of oceanic conditions. Global data collected in recent years have been integrated with our on-going CalCOFI database and have been used to evaluate SeaWiFS algorithms and to carry out validation studies. The combined database we have assembled now comprises more than 700 stations and includes observations for the clearest oligotrophic waters, highly eutrophic blooms, red-tides and coastal case 2 conditions. The data have been used to validate water-leaving radiance estimated with SeaWiFS as well as bio-optical algorithms for chlorophyll pigments. The comprehensive data are utilized for development of experimental algorithms (e.g. high-low latitude pigment transition, phytoplankton absorption, and cDOM). During this period we completed 9 peer-reviewed publications in high-quality journals, and presented aspects of our work at more than 10 scientific conferences.
Dunst, J; Willich, N; Sack, H; Engenhart-Cabillic, R; Budach, V; Popp, W
2014-02-01
The QUIRO study aimed to establish a secure level of quality and innovation in radiation oncology. Over 6 years, 27 specific surveys were conducted at 24 radiooncological departments. In all, 36 renowned experts from the field of radiation oncology (mostly head physicians and full professors) supported the realization of the study. A salient feature of the chosen methodological approach is the "process" as a means of systematizing diversified medical-technical procedures according to standardized criteria. On the one hand, "processes" as a tool of translation are adapted for creating and transforming standards into concrete clinical and medical actions; on the other hand, they provide the basis for standardized instruments and methods to determine the required needs of physicians, staff, and equipment. The collection and measurement of resource requirements focused on the processes of direct service provision, which were subdivided into modules for clarity and comprehensibility. Overhead tasks (e.g., participation in quality management) were excluded from the main study and examined in a separate survey with appropriate methods. After the exploration of guidelines, tumor- or indication-specific examination and treatment processes were developed in expert workshops. Moreover, the specific modules that characterize these entities and indications to a special degree were defined. Afterwards, the time and resources required for these modules were recorded in "reference institutions", i.e., specialized departments recognized as competent (mostly from the university sector), using various suitable survey methods. The significance of the QUIRO study and the validity of its results were optimized in a process of constant improvement and comprehensive checks. As a consequence, the QUIRO study yields representative results concerning the resource requirements for specialized, qualitatively and technologically highly sophisticated radiooncologic treatment in Germany.
Audigé, Laurent; Cornelius, Carl-Peter; Di Ieva, Antonio; Prein, Joachim
2014-12-01
Validated trauma classification systems are the sole means to provide the basis for reliable documentation and evaluation of patient care, which will open the gateway to evidence-based procedures and healthcare in the coming years. With the support of AO Investigation and Documentation, a classification group was established to develop and evaluate a comprehensive classification system for craniomaxillofacial (CMF) fractures. Blueprints for fracture classification in the major constituents of the human skull were drafted and then evaluated by a multispecialty group of experienced CMF surgeons and a radiologist in a structured process during iterative agreement sessions. At each session, surgeons independently classified the radiological imaging of up to 150 consecutive cases with CMF fractures. During subsequent review meetings, all discrepancies in the classification outcome were critically appraised for clarification and improvement until consensus was reached. The resulting CMF classification system is structured in a hierarchical fashion with three levels of increasing complexity. The most elementary level 1 simply distinguishes four fracture locations within the skull: mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). Levels 2 and 3 focus on further defining the fracture locations and for fracture morphology, achieving an almost individual mapping of the fracture pattern. This introductory article describes the rationale for the comprehensive AO CMF classification system, discusses the methodological framework, and provides insight into the experiences and interactions during the evaluation process within the core groups. The details of this system in terms of anatomy and levels are presented in a series of focused tutorials illustrated with case examples in this special issue of the Journal.
Risk-based Methodology for Validation of Pharmaceutical Batch Processes.
Wiles, Frederick
2013-01-01
In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
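As an illustration of the statistics the article leans on, the sketch below computes individuals-chart control limits from the average moving range and a process performance index against specification limits. The data, spec limits, and any acceptance thresholds are hypothetical; the article's PFMECA-driven selection of confidence and coverage levels is not reproduced here.

```python
import numpy as np

def imr_limits(x):
    """Individuals-chart center and control limits from the average moving
    range, using the standard Shewhart constants (d2 = 1.128 for n = 2)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()
    sigma = mr_bar / 1.128          # short-term sigma estimate
    center = x.mean()
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar, sigma

def ppk(x, lsl, usl):
    """Process performance index from the overall standard deviation."""
    x = np.asarray(x, dtype=float)
    s = x.std(ddof=1)
    return min(usl - x.mean(), x.mean() - lsl) / (3.0 * s)

# Hypothetical assay results (% label claim) pooled across PPQ batches:
assay = [99.2, 100.1, 99.7, 100.4, 99.9, 100.2, 99.5, 100.0, 99.8, 100.3]
center, lcl, ucl, sigma = imr_limits(assay)
print(f"I-chart: center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print(f"Ppk vs 95.0-105.0 spec: {ppk(assay, 95.0, 105.0):.2f}")
```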
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reckinger, Scott James; Livescu, Daniel; Vasilyev, Oleg V.
A comprehensive numerical methodology has been developed that handles the challenges introduced by considering the compressible nature of Rayleigh-Taylor instability (RTI) systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The computational framework is used to simulate two-dimensional single-mode RTI to extreme late times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.
Medicaid payment policies for nursing home care: A national survey
Buchanan, Robert J.; Madel, R. Peter; Persons, Dan
1991-01-01
This research gives a comprehensive overview of the nursing home payment methodologies used by each State Medicaid program. To present this comprehensive overview, 1988 data were collected by survey from 49 States and the District of Columbia. The literature was reviewed and integrated into the study to provide a theoretical framework to analyze the collected data. The data are organized and presented as follows: payment levels, payment methods, payment of capital-related costs, and incentives in nursing home payment. We conclude with a discussion of the impact these different methodologies have on program cost containment, quality, and recipient access. PMID:10114935
Montecinos, P; Rodewald, A M
1994-06-01
The aim of this work was to assess and compare the achievements of medical students subjected to problem-based learning methodology. The information and comprehension categories of Bloom were tested in 17 medical students on four different occasions during the physiopathology course, using a multiple choice knowledge test. There was a significant improvement in the number of correct answers towards the end of the course. It is concluded that these medical students obtained adequate learning achievements in the information subcategory of Bloom using problem-based learning methodology during the physiopathology course.
Construct Validity: Advances in Theory and Methodology
Strauss, Milton E.; Smith, Gregory T.
2008-01-01
Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review five recent advances in validation theory and methodology of importance for clinical researchers. These are: the emergence of nonjustificationist philosophy of science; an increasing appreciation for theory and the need for informative tests of construct validity; valid construct representation in experimental psychopathology; the need to avoid representing multidimensional constructs with a single score; and the emergence of effective new statistical tools for the evaluation of convergent and discriminant validity. PMID:19086835
Mindfulness: A systematic review of instruments to measure an emergent patient-reported outcome (PRO)
Park, Taehwan; Reilly-Spong, Maryanne
2013-01-01
Purpose: Mindfulness has emerged as an important health concept based on evidence that mindfulness interventions reduce symptoms and improve health-related quality of life. The objectives of this study were to systematically assess and compare the properties of instruments to measure self-reported mindfulness. Methods: Ovid Medline®, CINAHL®, and PsycINFO® were searched through May 2012, and articles were selected if their primary purpose was development or evaluation of the measurement properties (validity, reliability, responsiveness) of a self-report mindfulness scale. Two reviewers independently evaluated the methodological quality of the selected studies using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist. Discrepancies were discussed with a third reviewer, and scored by consensus. Finally, a level of evidence approach was used to synthesize results and study quality. Results: Our search strategy identified a total of 2,588 articles. Forty-six articles, reporting 79 unique studies, met inclusion criteria. Ten instruments quantifying mindfulness as a unidimensional scale (n=5) or as a set of 2 to 5 subscales (n=5) were reviewed. The Mindful Attention Awareness Scale (MAAS) was evaluated by the most studies (n=27), and had positive overall quality ratings for most of the psychometric properties reviewed. The Five Facet Mindfulness Questionnaire (FFMQ) received the highest possible rating ("consistent findings in multiple studies of good methodological quality") for two properties, internal consistency and construct validation by hypothesis testing. However, none of the instruments had sufficient evidence of content validity. Comprehensiveness of construct coverage had not been assessed; qualitative methods to confirm understanding and relevance were absent. In addition, estimates of test-retest reliability, responsiveness, or measurement error to guide users in protocol development or interpretation of scores were lacking. Conclusions: Current mindfulness scales have important conceptual differences, and none can be strongly recommended based solely on superior psychometric properties. Important limitations in the field are the absence of qualitative evaluations and accepted external referents to support construct validity. Investigators need to proceed cautiously before optimizing any mindfulness intervention based on the existing scales. PMID:23539467
Gebremariam, Mekdes K; Vaqué-Crusellas, Cristina; Andersen, Lene F; Stok, F Marijn; Stelmach-Mardas, Marta; Brug, Johannes; Lien, Nanna
2017-02-14
Comprehensive and psychometrically tested measures of availability and accessibility of food are needed in order to explore availability and accessibility as determinants and predictors of dietary behaviors. The main aim of this systematic review was to update the evidence regarding the psychometric properties of measures of food availability and accessibility among youth. A secondary objective was to assess how availability and accessibility were conceptualized in the included studies. A systematic literature search was conducted using Medline, Embase, PsycINFO and Web of Science. Methodological studies published between January 2010 and March 2016 and reporting on at least one psychometric property of a measure of availability and/or accessibility of food among youth were included. Two reviewers independently extracted data and assessed study quality. Existing criteria were used to interpret reliability and validity parameters. A total of 20 studies were included. While 16 studies included measures of food availability, three included measures of both availability and accessibility; one study included a measure of accessibility only. Different conceptualizations of availability and accessibility were used across the studies. The measures aimed at assessing availability and/or accessibility in the home environment (n = 11), the school (n = 4), stores (n = 3), childcare/early care and education services (n = 2) and restaurants (n = 1). Most studies followed systematic steps in the development of the measures. The most common psychometrics tested for these measures were test-retest reliability and criterion validity. The majority of the measures had satisfactory evidence of reliability and/or validity. None of the included studies assessed the responsiveness of the measures. The review identified several measures of food availability or accessibility among youth with satisfactory evidence of reliability and/or validity. Findings indicate a need for more studies including measures of accessibility and addressing its conceptualization. More testing of some of the identified measures in different population groups is also warranted, as is the development of more measures of food availability and accessibility in the broader environment such as the neighborhood food environment.
Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines.
Ma, Ping; Lien, Fue-Sang; Yee, Eugene
2017-01-01
This paper develops a computational acoustic beamforming (CAB) methodology for identification of sources of small wind turbine noise. This methodology is validated using the case of the NACA 0012 airfoil trailing edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of the measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade-tower interaction and the wind turbine nacelle were the two primary mechanisms for sound generation for this small wind turbine at frequencies between 100 and 630 Hz.
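The abstract does not detail the beamforming formulation; as a generic reference point, a time-domain delay-and-sum beamformer steers the array to each candidate source position by compensating propagation delays and summing channels. The function below is a minimal sketch with assumed inputs (array geometry, scan grid, sampling rate), not the authors' CAB implementation.

```python
import numpy as np

def delay_and_sum_map(signals, mic_xy, grid_xy, fs, c=343.0):
    """Time-domain delay-and-sum power map for a planar microphone array.
    signals: (n_mics, n_samples) channel data; mic_xy, grid_xy: (n, 2) [m];
    fs: sampling rate [Hz]; c: speed of sound [m/s]."""
    n_mics, n_samp = signals.shape
    power = np.zeros(len(grid_xy))
    t = np.arange(n_samp) / fs
    for g, pt in enumerate(grid_xy):
        delays = np.linalg.norm(mic_xy - pt, axis=1) / c  # propagation delays
        summed = np.zeros(n_samp)
        for m in range(n_mics):
            # advance each channel by its delay (linear interpolation in time)
            summed += np.interp(t, t - delays[m], signals[m])
        power[g] = np.mean((summed / n_mics) ** 2)  # steered output power
    return power

# Usage (hypothetical inputs): power = delay_and_sum_map(sig, mics, grid, fs=48000)
```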
ERIC Educational Resources Information Center
Baker, Doris Luft; Biancarosa, Gina; Park, Bitnara Jasmine; Bousselot, Tracy; Smith, Jean-Louise; Baker, Scott K.; Kame'enui, Edward J.; Alonzo, Julie; Tindal, Gerald
2015-01-01
We examined the criterion validity and diagnostic efficiency of oral reading fluency (ORF), word reading accuracy, and reading comprehension (RC) for students in Grades 7 and 8 taking into account form effects of ORF, time of assessment, and individual differences, including student designations of limited English proficiency and special education…
ERIC Educational Resources Information Center
McGuffey, Amy R.
2016-01-01
A healthy school climate is necessary for improvement. The purpose of this study was to evaluate the construct validity and usability of the Comprehensive Assessment of School Environment (CASE) as it was purportedly realigned to the three dimensions of the Breaking Ranks Framework developed by the National Association of Secondary School…
Diagnosing malignant melanoma in ambulatory care: a systematic review of clinical prediction rules.
Harrington, Emma; Clyne, Barbara; Wesseling, Nieneke; Sandhu, Harkiran; Armstrong, Laura; Bennett, Holly; Fahey, Tom
2017-03-06
Malignant melanoma has high morbidity and mortality rates. Early diagnosis improves prognosis. Clinical prediction rules (CPRs) can be used to stratify patients with symptoms of suspected malignant melanoma to improve early diagnosis. We conducted a systematic review of CPRs for melanoma diagnosis in ambulatory care. Systematic review. A comprehensive search of PubMed, EMBASE, PROSPERO, CINAHL, the Cochrane Library and SCOPUS was conducted in May 2015, using combinations of keywords and medical subject headings (MeSH) terms. Studies deriving and validating, validating or assessing the impact of a CPR for predicting melanoma diagnosis in ambulatory care were included. Data extraction and methodological quality assessment were guided by the CHARMS checklist. From 16 334 studies reviewed, 51 were included, validating the performance of 24 unique CPRs. Three impact analysis studies were identified. Five studies were set in primary care. The most commonly evaluated CPRs were the ABCD (Asymmetry, Border irregularity, more than one or uneven distribution of Colour, or a large (greater than 6 mm) Diameter) dermoscopy rule (at a cut-point of >4.75; 8 studies; pooled sensitivity 0.85, 95% CI 0.73 to 0.93, specificity 0.72, 95% CI 0.65 to 0.78) and the 7-point dermoscopy checklist (at a cut-point of ≥1 recommending ruling in melanoma; 11 studies; pooled sensitivity 0.77, 95% CI 0.61 to 0.88, specificity 0.80, 95% CI 0.59 to 0.92). The methodological quality of studies varied. At their recommended cut-points, the ABCD dermoscopy rule is more useful for ruling out melanoma than the 7-point dermoscopy checklist. A focus on impact analysis will help translate melanoma risk prediction rules into useful tools for clinical practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
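For context on how the per-study accuracy figures feeding such a pooled analysis are computed, here is a minimal sketch deriving sensitivity and specificity with Wilson score intervals from a single hypothetical 2x2 table (the counts are invented; the review's meta-analytic pooling across studies is not reproduced).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity with Wilson CIs from one 2x2 table."""
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"sensitivity": (sens, wilson_ci(tp, tp + fn)),
            "specificity": (spec, wilson_ci(tn, tn + fp))}

# Hypothetical single-study 2x2 table for a dermoscopy rule:
print(diagnostic_accuracy(tp=85, fp=28, fn=15, tn=72))
```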
A Discrepancy-Based Methodology for Nuclear Training Program Evaluation.
ERIC Educational Resources Information Center
Cantor, Jeffrey A.
1991-01-01
A three-phase comprehensive process for commercial nuclear power training program evaluation is presented. The discrepancy-based methodology was developed after the Three Mile Island nuclear reactor accident. It facilitates analysis of program components to identify discrepancies among program specifications, actual outcomes, and industry…
Partitioning an object-oriented terminology schema.
Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J
2001-07-01
Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.
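The abstract names three modeling rules but does not state them; only their effect is described (the partitioned schema is a forest of trees). The sketch below shows the generic reduction of a multiple-inheritance hierarchy to a forest by retaining a single parent per class, with the parent choice standing in for the domain expert's judgment; all class names are invented, and this is not the MED methodology itself.

```python
from collections import defaultdict

def partition_to_forest(parents):
    """Turn a subclass DAG into a forest by keeping one parent per class.
    parents: {class: [parent classes]}; here the retained parent is simply
    the first listed, a stand-in for the expert's modeling decision."""
    tree = {}                       # class -> single retained parent (or None)
    children = defaultdict(list)    # retained parent -> child classes
    for cls, ps in parents.items():
        keep = ps[0] if ps else None
        tree[cls] = keep
        if keep is not None:
            children[keep].append(cls)
    return tree, children

# Hypothetical schema fragment with multiple inheritance:
schema = {"Drug": [], "Protein": [], "Hormone": ["Drug", "Protein"]}
tree, children = partition_to_forest(schema)
print(tree)  # {'Drug': None, 'Protein': None, 'Hormone': 'Drug'}
```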
Delamination Assessment Tool for Spacecraft Composite Structures
NASA Astrophysics Data System (ADS)
Portela, Pedro; Preller, Fabian; Wittke, Henrik; Sinnema, Gerben; Camanho, Pedro; Turon, Albert
2012-07-01
Fortunately, only a few cases are known in which failure of spacecraft structures due to undetected damage has resulted in the loss of a spacecraft or launcher mission. However, several problems related to damage tolerance, and in particular delamination of composite materials, have been encountered during structure development of various ESA projects and qualification testing. To avoid such costly failures during development, launch or service of spacecraft, launcher and reusable launch vehicle structures, a comprehensive damage tolerance verification approach is needed. In 2009, the European Space Agency (ESA) initiated an activity called “Delamination Assessment Tool”, which is led by the Portuguese company HPS Lda and includes academic and industrial partners. The goal of this study is the development of a comprehensive damage tolerance verification approach for launcher and reusable launch vehicle (RLV) structures, addressing analytical and numerical methodologies, material-, subcomponent- and component testing, as well as non-destructive inspection. The study includes a comprehensive review of current industrial damage tolerance practice resulting from ECSS and NASA standards, the development of new Best Practice Guidelines for analysis, test and inspection methods, and the validation of these with a real industrial case study. The paper describes the main findings of this activity so far and presents a first iteration of a Damage Tolerance Verification Approach, which includes the introduction of novel analytical and numerical tools at an industrial level. This new approach is being put to the test using real industrial case studies provided by the industrial partners, MT Aerospace, RUAG Space and INVENT GmbH.
Preliminary Validation of Composite Material Constitutive Characterization
John G. Michopoulos; Athanasios lliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson
2012-01-01
This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3, developed at the Naval Research Laboratory. The testing is followed by...
Video in the Evaluation Process.
ERIC Educational Resources Information Center
Pelletier, Raymond J.
The rationale and methodology for using videotape recordings to test foreign language listening comprehension are discussed. First, the advantages of using video in teaching and testing listening comprehension are examined and the specific listening skills to be developed at the beginning level are outlined. Issues in the selection of video…
Readability and comprehension of self-report binge eating measures.
Richards, Lauren K; McHugh, R Kathryn; Pratt, Elizabeth M; Thompson-Brenner, Heather
2013-04-01
The validity of self-report binge eating instruments among individuals with limited literacy is uncertain. This study aims to evaluate reading grade level and multiple domains of comprehension of 13 commonly used self-report assessments of binge eating for use in low-literacy populations. We evaluated self-report binge eating measures with respect to reading grade levels, measure length, formatting and linguistic problems. All measures were written at a reading grade level higher than is recommended for patient materials (above the 5th to 6th grade level), and contained several challenging elements related to comprehension. Correlational analyses suggested that readability and comprehension elements were distinct contributors to measure difficulty. Individuals with binge eating who have low levels of educational attainment or limited literacy are often underrepresented in measure validation studies. Validity of measures and accurate assessment of symptoms depend on an individual's ability to read and comprehend instructions and items, and these may be compromised in populations with lower levels of literacy. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Comprehensive Guide for Performing Sample Preparation and Top-Down Protein Analysis.
Padula, Matthew P; Berry, Iain J; O Rourke, Matthew B; Raymond, Benjamin B A; Santos, Jerran; Djordjevic, Steven P
2017-04-07
Methodologies for the global analysis of proteins in a sample, or proteome analysis, have been available since 1975 when Patrick O'Farrell published the first paper describing two-dimensional gel electrophoresis (2D-PAGE). This technique allowed the resolution of single protein isoforms, or proteoforms, into single 'spots' in a polyacrylamide gel, allowing the quantitation of changes in a proteoform's abundance to ascertain changes in an organism's phenotype when conditions change. In pursuit of the comprehensive profiling of the proteome, significant advances in technology have made the identification and quantitation of intact proteoforms from complex mixtures of proteins more routine, allowing analysis of the proteome from the 'Top-Down'. However, the number of proteoforms detected by Top-Down methodologies such as 2D-PAGE or mass spectrometry has not significantly increased since O'Farrell's paper when compared to Bottom-Up, peptide-centric techniques. This article explores and explains the numerous methodologies and technologies available to analyse the proteome from the Top-Down with a strong emphasis on the necessity to analyse intact proteoforms as a better indicator of changes in biology and phenotype. We arrive at the conclusion that the complete and comprehensive profiling of an organism's proteome is still, at present, beyond our reach but the continuing evolution of protein fractionation techniques and mass spectrometry brings comprehensive Top-Down proteome profiling closer.
[The added value of information summaries supporting clinical decisions at the point-of-care].
Banzi, Rita; González-Lorenzo, Marien; Kwag, Koren Hyogene; Bonovas, Stefanos; Moja, Lorenzo
2016-11-01
Evidence-based healthcare requires the integration of the best research evidence with clinical expertise and patients' values. International publishers are developing evidence-based information services and resources designed to overcome the difficulties in retrieving, assessing and updating medical information, as well as to facilitate rapid access to valid clinical knowledge. Point-of-care information summaries are defined as web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. Their validity must be assessed against marketing claims that they are evidence-based. We periodically evaluate the content development processes of several international point-of-care information summaries. The number of these products has increased along with their quality. The last analysis, done in 2014, identified 26 products and found that three of them (Best Practice, DynaMed and UpToDate) scored the highest across all evaluated dimensions (volume, quality of the editorial process and evidence-based methodology). Point-of-care information summaries, as stand-alone products or integrated with other systems, are gaining ground to support clinical decisions. The choice of one product over another depends both on the properties of the service and the preferences of users. However, even the most innovative information system must rely on transparent and valid contents. Individuals and institutions should regularly assess the value of point-of-care summaries, as their quality changes rapidly over time.
Valley City State College Planning Manual.
ERIC Educational Resources Information Center
Valley City State Coll., ND.
The Valley City State College, North Dakota, planning manual, which was based on the Futures Creating Paradigm methodology, is presented. The paradigm is a methodology for interdisciplinary policy planning and establishment of objectives and goals. The first planning stage involved preparing comprehensive narratives in the following areas likely…
Uher, Jana
2015-12-01
Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".
Longo, Umile Giuseppe; Saris, Daniël; Poolman, Rudolf W; Berton, Alessandra; Denaro, Vincenzo
2012-10-01
The aims of this study were to obtain an overview of the methodological quality of studies on the measurement properties of rotator cuff questionnaires and to describe how well various aspects of the design and statistical analyses of studies on measurement properties are performed. A systematic review of published studies on the measurement properties of rotator cuff questionnaires was performed. Two investigators independently rated the quality of the studies using the Consensus-based Standards for the selection of health Measurement Instruments checklist. This checklist was developed in an international Delphi consensus study. Sixteen studies were included, in which two measurement instruments were evaluated, namely the Western Ontario Rotator Cuff Index and the Rotator Cuff Quality-of-Life Measure. The methodological quality of the included studies was adequate on some properties (construct validity, reliability, responsiveness, internal consistency, and translation) but needs to be improved on other aspects. The most important methodological aspects that need to be developed are as follows: measurement error, content validity, structural validity, cross-cultural validity, criterion validity, and interpretability. Considering the importance of adequate measurement properties, it is concluded that, in the field of rotator cuff pathology, there is room for improvement in the methodological quality of studies on measurement properties. Level of evidence: II.
Developing Comprehension vs. Production of "Because" and "So."
ERIC Educational Resources Information Center
McCabe, Allyssa; Peterson, Carole
A series of studies evaluated methodological issues in the investigation of children's developing comprehension and production of the words "because" and "so." The familiarity of task materials and their relevance to 4-, 6-, and 8-year-old children's experience were the focus of the first study. For the second study, involving…
Energy Service Companies as a Component of a Comprehensive University Sustainability Strategy
ERIC Educational Resources Information Center
Pearce, Joshua M.; Miller, Laura L.
2006-01-01
Purpose: This paper aims to quantify and critically analyze the best practices of a comprehensive environmental stewardship strategy (ESS), which included a guaranteed energy savings program (GESP) that utilized an energy service company (ESCO). Design/methodology/approach: The environmental and economic benefits and limitations of an approach…
ERIC Educational Resources Information Center
van den Bosch, Roxette M.; Espin, Christine A.; Chung, Siuman; Saab, Nadira
2017-01-01
Teachers have difficulty using data from Curriculum-based Measurement (CBM) progress graphs of students with learning difficulties for instructional decision-making. As a first step in unraveling those difficulties, we studied teachers' comprehension of CBM graphs. Using think-aloud methodology, we examined 23 teachers' ability to…
ERIC Educational Resources Information Center
Gough, Timothy Jerome
2017-01-01
The purpose of this study was to determine how teachers in an urban school district implemented Comprehensive Literacy Improvement Program (CLIP) and balanced literacy framework in second through fifth grade classrooms by exploring the evidence of implementation of guided reading strategies. Instructional delivery, training methodology, phonemic…
Russo, Arthur C
2012-12-01
Operation Enduring Freedom and Operation Iraqi Freedom combat veterans given definite diagnoses of mild Traumatic Brain Injury (TBI) during the Veteran Health Administration (VHA) Comprehensive TBI evaluation and reporting no post-deployment head injury were examined to assess (a) consistency of self-reported memory impairment and (b) symptom validity test (SVT) performance via a two-part study. Study 1 found that while 49 of 50 veterans reported moderate to very severe memory impairment during the VHA Comprehensive TBI evaluation, only 7 had reported any memory problem at the time of their Department of Defense (DOD) post-deployment health assessment. Study 2 found that of 38 veterans referred for neuropsychological evaluations following a positive VHA Comprehensive TBI evaluation, 68.4% failed the Word Memory Test, a forced choice memory recognition symptom validity task. Together, these studies raise questions concerning the use of veteran symptom self-report for TBI assessments and argue for the inclusion of SVTs and the expanded use of contemporaneous DOD records to improve the diagnostic accuracy of the VHA Comprehensive TBI evaluation.
ERIC Educational Resources Information Center
Hintze, John M.; Ryan, Amanda L.; Stoner, Gary
2003-01-01
The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students…
NASA Astrophysics Data System (ADS)
Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki
2014-01-01
A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
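As a rough illustration of the inversion-level concept at the heart of the abstract above, the sketch below computes the EKV inversion coefficient and the resulting gm/ID for a fixed drain current at several device sizes. This is not the authors' code; the slope factor, process transconductance, and bias point are assumed values for a generic process.

    # Inversion-level (EKV-style) sizing sketch; constants are illustrative assumptions.
    import math

    UT = 0.0259        # thermal voltage at 300 K [V]
    n = 1.3            # subthreshold slope factor (assumed)
    mu_Cox = 200e-6    # process transconductance parameter mu*Cox [A/V^2] (assumed)

    def inversion_coefficient(ID, W_over_L):
        """IC = ID / I_spec, with I_spec = 2*n*mu*Cox*UT^2*(W/L)."""
        I_spec = 2.0 * n * mu_Cox * UT**2 * W_over_L
        return ID / I_spec

    def gm_over_ID(IC):
        """EKV interpolation: gm/ID = 1 / (n*UT*(0.5 + sqrt(0.25 + IC)))."""
        return 1.0 / (n * UT * (0.5 + math.sqrt(0.25 + IC)))

    ID = 1e-6          # 1 uA drain current (ultralow-power operating point)
    for WL in (1, 10, 100):
        IC = inversion_coefficient(ID, WL)
        region = 'strong' if IC > 10 else 'moderate' if IC > 0.1 else 'weak'
        print(f"W/L={WL:>3}: IC={IC:7.3f} ({region} inversion), gm/ID={gm_over_ID(IC):.1f} 1/V")

Sweeping W/L at constant current moves the device from strong toward weak inversion, which is the design axis this kind of methodology exploits.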
Comprehensive Design Reliability Activities for Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Whitley, M. R.; Knight, K. C.
2000-01-01
This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.
Indicators for the automated analysis of drug prescribing quality.
Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A
1998-01-01
Irrational and inconsistent drug prescription has considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed how to process the information contained in drug orders in order to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators for the quality of drug prescriptions that can be derived from a drug order. These indicators were constructed using explicit a priori criteria that were previously validated on the basis of scientific data. Automatic computation is straightforward using a relational database system, such that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. This work may find direct and wide application in the epidemiology of medical practice and in quality control procedures.
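As a sketch of how such indicators can be computed automatically from a relational database, the toy example below flags therapeutic duplication within a drug order. The table layout, class codes, and the indicator definition are hypothetical; the paper's actual indicator set is not reproduced here.

    # One automated prescribing-quality indicator from a relational database (sketch).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE drug_order_line (
            order_id INTEGER,   -- one prescription (drug order)
            atc_class TEXT      -- therapeutic class of the prescribed drug
        );
        INSERT INTO drug_order_line VALUES
            (1, 'N02BE'), (1, 'N02BE'),   -- order 1: duplicated class
            (2, 'C09AA'), (2, 'C03CA'),   -- order 2: no duplication
            (3, 'J01CA');
    """)

    # Indicator: proportion of drug orders containing >= 2 drugs of the same class.
    row = con.execute("""
        SELECT 1.0 * COUNT(DISTINCT CASE WHEN dup > 1 THEN order_id END)
                   / COUNT(DISTINCT order_id)
        FROM (SELECT order_id, atc_class, COUNT(*) AS dup
              FROM drug_order_line GROUP BY order_id, atc_class)
    """).fetchone()
    print(f"therapeutic duplication indicator: {row[0]:.2f}")   # 0.33 for the toy data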
A Review of Hypothesized Determinants Associated with Bighorn Sheep (Ovis canadensis) Die-Offs
Miller, David S.; Hoberg, Eric; Weiser, Glen; Aune, Keith; Atkinson, Mark; Kimberling, Cleon
2012-01-01
Multiple determinants have been hypothesized to cause or favor disease outbreaks among free-ranging bighorn sheep (Ovis canadensis) populations. This paper considered direct and indirect causes of mortality, as well as potential interactions among proposed environmental, host, and agent determinants of disease. A clear, invariant relationship between a single agent and field outbreaks has not yet been documented, in part due to methodological limitations and practical challenges associated with developing rigorous study designs. Therefore, although there is a need to develop predictive models for outbreaks and validated mitigation strategies, uncertainty remains as to whether outbreaks are due to endemic or recently introduced agents. Consequently, absence of established and universal explanations for outbreaks contributes to conflict among wildlife and livestock stakeholders over land use and management practices. This example illustrates the challenge of developing comprehensive models for understanding and managing wildlife diseases in complex biological and sociological environments. PMID:22567546
Kostopoulou, O
The paper describes the process of developing a taxonomy of patient safety in general practice. The methodologies employed included fieldwork, task analysis and confidential reporting of patient-safety events in five West Midlands practices. Reported events were traced back to their root causes and contributing factors. The resulting taxonomy is based on a theoretical model of human cognition, includes multiple levels of classification to reflect the chain of causation and considers affective and physiological influences on performance. Events are classified at three levels. At level one, the information-processing model of cognition is used to classify errors. At level two, immediate causes are identified, internal and external to the individual. At level three, more remote causal factors are classified as either 'work organization' or 'technical' with subcategories. The properties of the taxonomy (validity, reliability, comprehensiveness) as well as its usability and acceptability remain to be tested with potential users.
Situating methodology within qualitative research.
Kramer-Kile, Marnie L
2012-01-01
Qualitative nurse researchers are required to make deliberate and sometimes complex methodological decisions about their work. Methodology in qualitative research is a comprehensive approach in which theory (ideas) and method (doing) are brought into close alignment. It can be difficult, at times, to understand the concept of methodology. The purpose of this research column is to: (1) define qualitative methodology; (2) illuminate the relationship between epistemology, ontology and methodology; (3) explicate the connection between theory and method in qualitative research design; and (4) highlight relevant examples of methodological decisions made within cardiovascular nursing research. Although there is no "one set way" to do qualitative research, all qualitative researchers should account for the choices they make throughout the research process and articulate their methodological decision-making along the way.
Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines
Lien, Fue-Sang
2017-01-01
This paper develops a computational acoustic beamforming (CAB) methodology for identification of sources of small wind turbine noise. This methodology is validated using the case of the NACA 0012 airfoil trailing-edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of the measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade-tower interaction and the wind turbine nacelle were the two primary mechanisms of sound generation for this small wind turbine at frequencies between 100 and 630 Hz. PMID:28378012
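For readers unfamiliar with acoustic maps, the sketch below implements a conventional frequency-domain (delay-and-sum style) beamformer, a simplified stand-in for the CAB methodology described above. The array geometry, analysis frequency, and source position are assumed.

    # Conventional frequency-domain beamformer producing a 1D acoustic map (sketch).
    import numpy as np

    c, f = 343.0, 500.0                      # speed of sound [m/s], frequency [Hz]
    k = 2 * np.pi * f / c                    # wavenumber
    rng = np.random.default_rng(0)

    mics = np.c_[np.linspace(-0.5, 0.5, 16), np.zeros(16), np.zeros(16)]  # 16-mic line array
    src = np.array([0.2, 0.0, 1.0])          # true source position (assumed)

    d_src = np.linalg.norm(mics - src, axis=1)
    p = np.exp(-1j * k * d_src) / d_src      # monopole pressures at the mics
    snaps = p[:, None] * np.exp(1j * rng.uniform(0, 2 * np.pi, (1, 50)))  # 50 snapshots
    C = snaps @ snaps.conj().T / 50          # cross-spectral matrix

    # Scan a grid one metre in front of the array and evaluate the steered power.
    xs = np.linspace(-0.6, 0.6, 61)
    power = []
    for x in xs:
        d = np.linalg.norm(mics - np.array([x, 0.0, 1.0]), axis=1)
        v = np.exp(-1j * k * d) / d
        v /= np.linalg.norm(v)               # unit-norm steering vector
        power.append(np.real(v.conj() @ C @ v))
    print(f"peak at x = {xs[int(np.argmax(power))]:+.2f} m (true source at +0.20 m)")

The steered power peaks where the steering vector matches the measured phase pattern, which is how a beamforming map localizes a source.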
Three-Dimensional Extension of a Digital Library Service System
ERIC Educational Resources Information Center
Xiao, Long
2010-01-01
Purpose: The paper aims to provide an overall methodology and case study for the innovation and extension of a digital library, especially the service system. Design/methodology/approach: Based on the three-dimensional structure theory of the information service industry, this paper combines a comprehensive analysis with the practical experiences…
Brunckhorst, Oliver; Shahid, Shahab; Aydin, Abdullatif; McIlhenny, Craig; Khan, Shahid; Raza, Syed Johar; Sahai, Arun; Brewin, James; Bello, Fernando; Kneebone, Roger; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-09-01
Current training modalities within ureteroscopy have been extensively validated and must now be integrated within a comprehensive curriculum. Additionally, non-technical skills often cause surgical error, and little research has been conducted to combine them with technical skills teaching. This study therefore aimed to develop and validate a curriculum for semi-rigid ureteroscopy, integrating both technical and non-technical skills teaching within the programme. Delphi methodology was utilised for curriculum development and content validation, with a randomised trial then conducted (n = 32) for curriculum evaluation. The developed curriculum consisted of four modules, initially developing basic technical skills and subsequently integrating non-technical skills teaching. Sixteen participants underwent the simulation-based curriculum and were subsequently assessed, together with the control cohort (n = 16), within a full-immersion environment. Both technical (time to completion, OSATS and a task-specific checklist) and non-technical (NOTSS) outcome measures were recorded, with parametric and non-parametric analyses used depending on the distribution of the data as evaluated by a Shapiro-Wilk test. Improvements within the intervention cohort demonstrated educational value across all technical and non-technical parameters recorded, including time to completion (p < 0.01), OSATS scores (p < 0.001), task-specific checklist scores (p = 0.011) and NOTSS scores (p < 0.001). Content validity, feasibility and acceptability were all demonstrated through curriculum development and post-study questionnaire results. The developed curriculum demonstrates that integrating both technical and non-technical skills teaching is both educationally valuable and feasible. Additionally, the curriculum offers a validated simulation-based training modality within ureteroscopy and a framework for the development of other simulation-based programmes.
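A minimal sketch of the analysis-routing step described above: normality screening with a Shapiro-Wilk test before choosing a parametric or non-parametric comparison. The score distributions are simulated, not the study's data.

    # Shapiro-Wilk normality check routing to a t-test or Mann-Whitney U (sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(25, 5, 16)        # e.g. OSATS scores, control cohort (n = 16)
    curriculum = rng.normal(31, 5, 16)     # intervention cohort (n = 16)

    normal = all(stats.shapiro(g).pvalue > 0.05 for g in (control, curriculum))
    if normal:
        test, res = "independent t-test", stats.ttest_ind(curriculum, control)
    else:
        test, res = "Mann-Whitney U", stats.mannwhitneyu(curriculum, control)
    print(f"{test}: statistic={res.statistic:.2f}, p={res.pvalue:.4f}")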
Agreeing on Validity Arguments
ERIC Educational Resources Information Center
Sireci, Stephen G.
2013-01-01
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Bridges, Susan M; Parthasarathy, Divya S; Au, Terry K F; Wong, Hai Ming; Yiu, Cynthia K Y; McGrath, Colman P
2014-01-01
This paper describes the development of a new literacy assessment instrument, the Hong Kong Oral Health Literacy Assessment Task for Paediatric Dentistry (HKOHLAT-P). Its relationship to literacy theory is analyzed to establish content and face validity. Implications for construct validity are examined by analyzing cognitive demand to determine how "comprehension" is measured. Key influences from literacy assessment were identified to analyze item development. Cognitive demand was analyzed using an established taxonomy. The HKOHLAT-P focuses on the functional domain of health literacy assessment. Items had strong content and face validity reflecting established principles from modern literacy theory. Inclusion of new text types signified relevant developments in the area of new literacies. Analysis of cognitive demand indicated that this instrument assesses the "comprehension" domain, specifically the areas of factual and procedural knowledge, with some assessment of conceptual knowledge. Metacognitive knowledge was not assessed. Comprehension tasks assessing patient health literacy predominantly examine functional health literacy at the lower levels of comprehension. Item development is influenced by the fields of situated and authentic literacy. Inclusion of content regarding multiliteracies is suggested for further research. Development of functional health literacy assessment instruments requires careful consideration of the clinical context in determining construct validity. © 2013 American Association of Public Health Dentistry.
The Validity and reliability of the Comprehensive Home Environment Survey (CHES).
Pinard, Courtney A; Yaroch, Amy L; Hart, Michael H; Serrano, Elena L; McFerren, Mary M; Estabrooks, Paul A
2014-01-01
Few comprehensive measures exist to assess contributors to childhood obesity within the home, specifically among low-income populations. The current study describes the modification and psychometric testing of the Comprehensive Home Environment Survey (CHES), an inclusive measure of the home food, physical activity, and media environment related to childhood obesity. The items were tested for content relevance by an expert panel and piloted in the priority population. The CHES was administered to low-income parents of children 5 to 17 years (N = 150), including a subsample of parents a second time and additional caregivers to establish test-retest and interrater reliabilities. Children older than 9 years (n = 95), as well as parents (N = 150), completed concurrent assessments of diet and physical activity behaviors (predictive validity). Analyses and item trimming resulted in 18 subscales and a total score, which displayed adequate internal consistency (α = .74-.92), high test-retest reliability (r ≥ .73, ps < .01) and interrater reliability (r ≥ .42, ps < .01). The CHES score and a validated screener for the home environment were correlated (r = .37, p < .01; concurrent validity). CHES subscales were significantly correlated with behavioral measures (r = -.20 to .55, ps < .05; predictive validity). The CHES shows promise as a valid and reliable assessment of the home environment related to childhood obesity, including healthy diet and physical activity.
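For reference, the internal-consistency statistic quoted above is conventionally Cronbach's alpha; the sketch below computes it from a respondents-by-items score matrix. The data are invented for illustration, not drawn from the CHES sample.

    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)).
    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix of scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    # Toy 5-item subscale answered by 6 respondents (hypothetical data).
    x = [[3, 4, 3, 4, 3],
         [2, 2, 3, 2, 2],
         [4, 5, 4, 4, 5],
         [1, 2, 1, 2, 1],
         [3, 3, 4, 3, 3],
         [5, 4, 5, 5, 4]]
    print(f"alpha = {cronbach_alpha(x):.2f}")   # ~0.96 for this toy matrix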
Discrete and continuous dynamics modeling of a mass moving on a flexible structure
NASA Technical Reports Server (NTRS)
Herman, Deborah Ann
1992-01-01
A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.
ERIC Educational Resources Information Center
Coleman, Chris; Lindstrom, Jennifer; Nelson, Jason; Lindstrom, William; Gregg, K. Noel
2010-01-01
The comprehension section of the "Nelson-Denny Reading Test" (NDRT) is widely used to assess the reading comprehension skills of adolescents and adults in the United States. In this study, the authors explored the content validity of the NDRT Comprehension Test (Forms G and H) by asking university students (with and without at-risk…
ERIC Educational Resources Information Center
Keenan, Janice M.; Betjemann, Rebecca S.; Olson, Richard K.
2008-01-01
Comprehension tests are often used interchangeably, suggesting an implicit assumption that they are all measuring the same thing. We examine the validity of this assumption by comparing some of the most popular reading comprehension measures used in research and clinical practice in the United States: the Gray Oral Reading Test (GORT), the two…
ERIC Educational Resources Information Center
Thomas, Lisa B.
2012-01-01
Reading comprehension is a critical aspect of the reading process. Children who experience significant problems in reading comprehension are at risk for long-term academic and social problems. High-quality measures are needed for early, efficient, and effective identification of children in need of remediation in reading comprehension. Substantial…
Weis, Joachim; Tomaszewski, Krzysztof A; Hammerlid, Eva; Ignacio Arraras, Juan; Conroy, Thierry; Lanceley, Anne; Schmidt, Heike; Wirtz, Markus; Singer, Susanne; Pinto, Monica; Alm El-Din, Mohamed; Compter, Inge; Holzner, Bernhard; Hofmeister, Dirk; Chie, Wei-Chu; Czeladzki, Marek; Harle, Amelie; Jones, Louise; Ritter, Sabrina; Flechtner, Hans-Henning; Bottomley, Andrew
2017-05-01
The European Organisation for Research and Treatment of Cancer (EORTC) Group has developed a new multidimensional instrument measuring cancer-related fatigue to be used in conjunction with the quality of life core questionnaire (EORTC QLQ-C30). The module EORTC QLQ-FA13 assesses physical, cognitive, and emotional aspects of cancer-related fatigue. The methodology follows the EORTC guidelines for phase IV validation of modules. This paper focuses on the results of the psychometric validation of the factorial structure of the module. For validation and cross-validation, confirmatory factor analysis (maximum likelihood estimation), intraclass correlation, and Cronbach's alpha for internal consistency were employed. The study involved an international multicenter collaboration of 11 European and non-European countries. A total of 946 patients with various tumor diagnoses were enrolled. Based on the confirmatory factor analysis, we could confirm the three-dimensional structure of the module. Removing one item and reassigning the factorial mapping of another item resulted in the EORTC QLQ-FA12. For the revised scale, we found evidence supporting good local (indicator reliability ≥ 0.60, factor reliability ≥ 0.82) and global model fit (GFI t1|t2 = 0.965/0.957, CFI t1|t2 = 0.976/0.972, RMSEA t1|t2 = 0.060/0.069) for both measurement points. For each scale, test-retest reliability proved to be very good (intraclass correlation: R t1-t2 = 0.905-0.921) and internal consistency proved to be good to high (Cronbach's alpha = .79-.90). Based on the former phase III module, the multidimensional structure was revised as a phase IV module (EORTC FA12) with an improved scale structure. For a comprehensive validation of the EORTC FA12, further aspects of convergent and divergent validity as well as sensitivity to change should be determined. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
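For reference, the sample RMSEA reported above is conventionally estimated from the model chi-square as

    \[
    \widehat{\mathrm{RMSEA}} \;=\; \sqrt{\max\!\left(\frac{\chi^2_{\mathrm{model}} - df}{df\,(N-1)},\; 0\right)}
    \]

so a model whose chi-square approaches its degrees of freedom yields an RMSEA near zero. By common convention, values at or below roughly 0.06-0.08 are read as good to acceptable fit, consistent with the 0.060/0.069 reported for the two measurement points. (This is the standard estimator, not a formula specific to this study.)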
A standardised protocol for the validation of banking methodologies for arterial allografts.
Lomas, R J; Dodd, P D F; Rooney, P; Pegg, D E; Hogg, P A; Eagle, M E; Bennett, K E; Clarkson, A; Kearney, J N
2013-09-01
The objective of this study was to design and test a protocol for the validation of banking methodologies for arterial allografts. A series of in vitro biomechanical and biological assessments were derived, and applied to paired fresh and banked femoral arteries. The ultimate tensile stress and strain, suture pullout stress and strain, expansion/rupture under hydrostatic pressure, histological structure and biocompatibility properties of disinfected and cryopreserved femoral arteries were compared to those of fresh controls. No significant differences were detected in any of the test criteria. This validation protocol provides an effective means of testing and validating banking protocols for arterial allografts.
Landolt, Alison S; Milling, Leonard S
2011-08-01
This paper presents a comprehensive methodological review of research on the efficacy of hypnosis for reducing labor and delivery pain. To be included, studies were required to use a between-subjects or mixed model design in which hypnosis was compared with a control condition or alternative intervention in reducing labor pain. An exhaustive search of the PsycINFO and PubMed databases produced 13 studies satisfying these criteria. Hetero-hypnosis and self-hypnosis were consistently shown to be more effective than standard medical care, supportive counseling, and childbirth education classes in reducing pain. Other benefits included better infant Apgar scores and shorter Stage 1 labor. Common methodological limitations of the literature include a failure to use random assignment, to specify the demographic characteristics of samples, and to use a treatment manual. Copyright © 2011 Elsevier Ltd. All rights reserved.
Woltjen, Knut; Ito, Kenichi; Tsuzuki, Teruhisa; Rancourt, Derrick E
2008-01-01
In recent years, methods to address the simplification of targeting vector (TV) construction have been developed and validated. Based on in vivo recombination in Escherichia coli, these protocols have reduced dependence on restriction endonucleases, allowing the fabrication of complex TV constructs with relative ease. Using a methodology based on phage-plasmid recombination, we have developed a comprehensive TV construction protocol dubbed Orpheus recombination (ORE). The ORE system addresses all necessary requirements for TV construction, from the isolation of gene-specific regions of homology to the deposition of selection/disruption cassettes. ORE makes use of a small recombination plasmid, which bears positive and negative selection markers and a cloned homologous "probe" region. This probe plasmid may be introduced into and excised from phage-borne murine genomic clones by two rounds of single crossover recombination. In this way, desired clones can be specifically isolated from a heterogeneous library of phage. Furthermore, if the probe region contains a designed mutation, it may be deposited seamlessly into the genomic clone. The complete removal of operational sequences allows unlimited repetition of the procedure to customize and finalize TVs within a few weeks. Successful gene-specific clone isolation, point mutations, large deletions, cassette insertions, and finally coincident clone isolation and mutagenesis have all been demonstrated with this method.
Bitter, Neis A; Roeg, Diana P K; van Nieuwenhuizen, Chijs; van Weeghel, Jaap
2015-07-22
There is an increasing amount of evidence for the effectiveness of rehabilitation interventions for people with severe mental illness (SMI). In the Netherlands, a rehabilitation methodology that is well known and often applied is the Comprehensive Approach to Rehabilitation (CARe) methodology. The overall goal of the CARe methodology is to improve the client's quality of life by supporting the client in realizing his/her goals and wishes, handling his/her vulnerability and improving the quality of his/her social environment. The methodology is strongly influenced by the concept of 'personal recovery' and the 'strengths case management model'. No controlled effect studies have hitherto been conducted on the CARe methodology. This study is a two-armed cluster randomized controlled trial (RCT) that will be executed in teams from three organizations for sheltered and supported housing, which provide services to people with long-term severe mental illness. Teams in the intervention group will receive the multiple-day CARe methodology training from a specialized institute and start working according to the CARe methodology guideline. Teams in the control group will continue working in their usual way. Standardized questionnaires will be completed at baseline (T0), and 10 (T1) and 20 months (T2) post baseline. Primary outcomes are recovery, social functioning and quality of life. The model fidelity of the CARe methodology will be assessed at T1 and T2. This study is the first controlled effect study on the CARe methodology and one of the few RCTs on a broad rehabilitation method or strengths-based approach. This study is relevant because mental health care organizations have become increasingly interested in recovery and rehabilitation-oriented care. The trial registration number is ISRCTN77355880.
Cue-Dependent Interference in Comprehension
ERIC Educational Resources Information Center
Van Dyke, Julie A.; McElree, Brian
2011-01-01
The role of interference as a primary determinant of forgetting in memory has long been accepted, however its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies, in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging…
Care for Our Children: A Comprehensive Plan for Child Care Services in Berkeley.
ERIC Educational Resources Information Center
Pacific Training and Technical Assistance Corp., Berkeley, CA.
This document reports research and recommendations made by the Pacific Training and Technical Assistance Corporation for a comprehensive child-care program in Berkeley. The report is divided into two sections. Section I, "Research and Planning," describes research methodology and findings and includes demographic information on the city…
Evaluation Issues in the Comprehensive Employment and Training Act (CETA) Legislation.
ERIC Educational Resources Information Center
Spirer, Janet E.
Underutilization of evaluation findings relative to the Comprehensive Employment and Training Act (CETA) legislation may not stem primarily from factors usually identified in the literature (e.g., methodological reasons) but may be superseded by a more potent factor such as the prominence of the policy or program on the national agenda. Viewed…
ERIC Educational Resources Information Center
Zimman, Richard N.
Ethnographic case study methodology (open-ended interviews, participant observation, and document analysis) was used to test theories of administrative organization, processes, and behavior during a three-week observation of a model comprehensive (experimental) high school. Although the study is limited in its general application, it…
ERIC Educational Resources Information Center
Srivastava, Anugamini Priya; Dhar, Rajib Lochan
2016-01-01
Purpose: This study aims to analyse the impact of authentic leadership (AL) on academic optimism (AO) through the mediating role of affective commitment (AC). The study also examines the moderating role of training comprehensiveness (TC) in strengthening the relation between AC and AO. Design/methodology/approach: Data were collected from…
NASA Astrophysics Data System (ADS)
Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.
2014-04-01
Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology, previously developed by the authors, for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead to either an extension of the RFL (with a consequent economic gain and without compromising the minimum safety requirements) or an increase of safety by detecting a premature fault, therefore avoiding a very costly catastrophic failure.
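The recursive Bayesian updating scheme mentioned above can be illustrated with a deliberately simple grid-based filter: an uncertain Paris-law coefficient, periodic noisy NDE crack-size measurements, and a posterior updated at each inspection. All constants (Paris exponent, stress range, noise level, inspection schedule) are illustrative assumptions, not the paper's values.

    # Grid-based recursive Bayesian update of a Paris-law coefficient from NDE data (sketch).
    import numpy as np

    m, dsig = 3.0, 100.0                      # Paris exponent and stress range [MPa] (assumed)

    def grow(a0, C, n_cycles, step=1000.0):
        """Integrate Paris' law da/dN = C * (dsig*sqrt(pi*a))**m over n_cycles."""
        a = a0
        for _ in range(int(n_cycles / step)):
            a = min(a + C * (dsig * np.sqrt(np.pi * a))**m * step, 1.0)  # cap for safety
        return a

    C_grid = np.linspace(1e-11, 5e-11, 200)   # candidate Paris coefficients
    post = np.full(C_grid.size, 1.0 / C_grid.size)   # flat prior

    a0, sigma, true_C = 2e-3, 1e-4, 2.5e-11   # 2 mm initial flaw, 0.1 mm NDE noise
    rng = np.random.default_rng(2)
    for n in (5e4, 1e5, 1.5e5):               # periodic inspections [cycles]
        a_meas = grow(a0, true_C, n) + rng.normal(0.0, sigma)
        a_pred = np.array([grow(a0, C, n) for C in C_grid])
        post *= np.exp(-0.5 * ((a_meas - a_pred) / sigma) ** 2)   # Gaussian likelihood
        post /= post.sum()
        print(f"after {n:8.0f} cycles: posterior mean C = {(post * C_grid).sum():.2e}")

The posterior over the growth-law coefficient maps directly into an updated remaining-fatigue-life distribution (cycles until a critical crack size), which is the quantity the prognosis framework recursively refines.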
2011-09-01
a quality evaluation with limited data, a model-based assessment must be...that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range...
Methodological Validation of Quality of Life Questionnaire for Coal Mining Groups-Indian Scenario
ERIC Educational Resources Information Center
Sen, Sayanti; Sen, Goutam; Tewary, B. K.
2012-01-01
Maslow's hierarchy-of-needs theory has been used to predict the development of Quality of Life (QOL) in countries over time. In this paper, an attempt is made to carry out a methodological validation of the quality-of-life questionnaire that was prepared for the study area. The objective of the study is to standardize a questionnaire tool to…
CFD Analysis of the SBXC Glider Airframe
2016-06-01
...based mathematically on finite element methods. To validate and verify the methodology developed, a mathematical comparison was made with the previous research data...greater than 15 m/s. Subject terms: finite element method, computational fluid dynamics, Y Plus, mesh element quality, aerodynamic data, fluid...
Psychometric Properties of Language Assessments for Children Aged 4–12 Years: A Systematic Review
Denman, Deborah; Speyer, Renée; Munro, Natalie; Pearce, Wendy M.; Chen, Yu-Wei; Cordier, Reinie
2017-01-01
Introduction: Standardized assessments are widely used by speech pathologists in clinical and research settings to evaluate the language abilities of school-aged children and inform decisions about diagnosis, eligibility for services and intervention. Given the significance of these decisions, it is important that assessments have sound psychometric properties. Objective: The aim of this systematic review was to examine the psychometric quality of currently available comprehensive language assessments for school-aged children and identify assessments with the best evidence for use. Methods: Using the PRISMA framework as a guideline, a search of five databases and a review of websites and textbooks was undertaken to identify language assessments and published material on the reliability and validity of these assessments. The methodological quality of selected studies was evaluated using the COSMIN taxonomy and checklist. Results: Fifteen assessments were evaluated. For most assessments evidence of hypothesis testing (convergent and discriminant validity) was identified; with a smaller number of assessments having some evidence of reliability and content validity. No assessments presented with evidence of structural validity, internal consistency or error measurement. Overall, all assessments were identified as having limitations with regards to evidence of psychometric quality. Conclusions: Further research is required to provide good evidence of psychometric quality for currently available language assessments. Of the assessments evaluated, the Assessment of Literacy and Language, the Clinical Evaluation of Language Fundamentals-5th Edition, the Clinical Evaluation of Language Fundamentals-Preschool: 2nd Edition and the Preschool Language Scales-5th Edition presented with most evidence and are thus recommended for use. PMID:28936189
Castrejon, I; Carmona, L; Agrinier, N; Andres, M; Briot, K; Caron, M; Christensen, R; Consolaro, A; Curbelo, R; Ferrer, Montserrat; Foltz, Violaine; Gonzalez, C; Guillemin, F; Machado, P M; Prodinger, Birgit; Ravelli, A; Scholte-Voshaar, M; Uhlig, T; van Tuyl, L H D; Zink, A; Gossec, L
2015-01-01
Patient reported outcomes (PROs) are relevant in rheumatology. Variable accessibility and validity of commonly used PROs are obstacles to homogeneity in evidence synthesis. The objective of this project was to provide a comprehensive library of "validated PROs". A launch meeting with rheumatologists, PRO methodological experts, and patients was held to define the library's aims and scope, and basic requirements. To populate the library, we performed systematic reviews on selected diseases and domains. Relevant information on PROs was collected using standardised data collection forms based on the COSMIN checklist. The EULAR Outcomes Measures Library (OML), whose aim is to provide and advise on PROs in a user-friendly manner, albeit based on scientific grounds, has been launched and made accessible to all. PROs currently included cover any domain and are generic or specifically targeted to the following diseases: rheumatoid arthritis, osteoarthritis, spondyloarthritis, low back pain, systemic lupus erythematosus, gout, osteoporosis, juvenile idiopathic arthritis, and fibromyalgia. Up to 236 instruments (106 generic and 130 specific) have been identified, evaluated, and included. The systematic review for SLE, which yielded 10 specific instruments, is presented here as an example. The OML website includes, for each PRO, information on the construct being measured and the extent of validation, recommendations for use, and available versions; it also contains a glossary of common validation terms. The OML is an in-progress library led by rheumatologists, related professionals and patients that will help users better understand and apply PROs in rheumatic and musculoskeletal diseases.
Zarit, Steven H.; Liu, Yin; Bangerter, Lauren R.; Rovine, Michael J.
2017-01-01
Objectives: There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Method: Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Results: Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. Conclusion: An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite. PMID:26729467
Kim, Jung-Hee; Shin, Sujin; Park, Jin-Hwa
2015-04-01
The purpose of this study was to evaluate the methodological quality of nursing studies using structural equation modeling in Korea. The KISS, DBPIA, and National Assembly Library databases were searched up to March 2014 using the MeSH terms 'nursing', 'structure', and 'model'. A total of 152 studies were screened. After removal of duplicates and non-relevant titles, 61 papers were read in full. Of the sixty-one articles retrieved, 14 studies were published between 1992 and 2000; 27 between 2001 and 2010; and 20 between 2011 and March 2014. The methodological quality of the reviewed studies varied considerably. The findings suggest that more rigorous research is necessary to address theoretical identification, the two-indicator rule, sample distribution, treatment of missing values, mediator effects, discriminant validity, convergent validity, post hoc model modification, equivalent models, and alternative models. Further research with robust, consistent methodological designs, from model identification to model respecification, is needed to improve the validity of the research.
Measuring Speech Comprehensibility in Students with Down Syndrome
ERIC Educational Resources Information Center
Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen
2016-01-01
Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…
Incremental and Predictive Utility of Formative Assessment Methods of Reading Comprehension
ERIC Educational Resources Information Center
Marcotte, Amanda M.; Hintze, John M.
2009-01-01
Formative assessment measures are commonly used in schools to assess reading and to design instruction accordingly. The purpose of this research was to investigate the incremental and concurrent validity of formative assessment measures of reading comprehension. It was hypothesized that formative measures of reading comprehension would contribute…
ERIC Educational Resources Information Center
Roehrig, Alysia D.; Petscher, Yaacov; Nettles, Stephen M.; Hudson, Roxanne F.; Torgesen, Joseph K.
2008-01-01
We evaluated the validity of DIBELS ("Dynamic Indicators of Basic Early Literacy Skills") ORF ("Oral Reading Fluency") for predicting performance on the "Florida Comprehensive Assessment Test" (FCAT-SSS) and "Stanford Achievement Test" (SAT-10) reading comprehension measures. The usefulness of previously…
Methodologies for Effective Writing Instruction in EFL and ESL Classrooms
ERIC Educational Resources Information Center
Al-Mahrooqi, Rahma, Ed.; Thakur, Vijay Singh; Roscoe, Adrian
2015-01-01
Educators continue to strive for advanced teaching methods to bridge the gap between native and non-native English speaking students. Lessons on written forms of communication continue to be a challenge recognized by educators who wish to improve student comprehension and overall ability to write clearly and expressively. "Methodologies for…
ERIC Educational Resources Information Center
Mohan, Subhas
2015-01-01
This study explores the differences in student achievement on state standardized tests between experiential learning and direct learning instructional methodologies. Specifically, the study compares student performances in Expeditionary Learning schools, which is a Comprehensive School Reform model that utilizes experiential learning, to their…
2011-01-21
If the editors' intention was to produce a comprehensive textbook that will be of value to healthcare professionals interested in surgical research and improvements in health care, they have succeeded.
ERIC Educational Resources Information Center
Parent, F.; Baulana, R.; Kahombo, G.; Coppieters, Y.; Garant, M.; De Ketele, J.-M.
2011-01-01
Objective: To describe the methodological steps of developing an integrated reference guide for competences according to the profile of the healthcare professionals concerned. Design: Human resources in healthcare represent a complex issue, which needs conceptual and methodological frameworks and tools to help one understand reality and the limits…
Jiménez-Herranz, Borja; Manrique-Arribas, Juan C; López-Pastor, Víctor M; García-Bengoechea, Enrique
2016-10-01
This research applies a communicative methodology (CM) to the transformation and improvement of the Municipal Comprehensive School Sports Programme in Segovia, Spain (MCSSP), using egalitarian dialogue, based on validity claims rather than power claims, to achieve intersubjectivity and arrive at consensus between all of the Programme's stakeholders through the intervention of an advisory committee (AC). The AC is a body comprising representatives of all stakeholder groups involved in the programme. During the 2013-2014 academic year, the programme's AC met four times, operating as a communicative focus group (CFG). The meetings focused on: (1) excluding dimensions (barriers preventing transformation) and transforming dimensions (ways of overcoming barriers), (2) the programme's strengths, (3) the programme's weaknesses and specific actions to remedy them, and (4) the resulting conclusions, which were then incorporated into the subsequent programme contract signed between the University and the Segovia Local Authority for 2014-2018. The key conclusions were: (1) the recommendations of the AC widen the range of perspectives and help the research team to make key decisions and (2) the use of CM to fully evaluate the programme and to reach a consensus on how to improve it proved very valuable. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, T.L.
1993-01-01
This report discusses probabilistic fracture mechanics (PFM) analysis, which is a major element of the comprehensive probabilistic methodology endorsed by the NRC for evaluation of the integrity of Pressurized Water Reactor (PWR) pressure vessels subjected to pressurized-thermal-shock (PTS) transients. It is anticipated that there will be an increasing need for an improved and validated PTS PFM code which is accepted by the NRC and utilities, as more plants approach the PTS screening criteria and are required to perform plant-specific analyses. The NRC-funded Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratories is currently developing the FAVOR (Fracture Analysis of Vessels: Oak Ridge) PTS PFM code, which is intended to meet this need. The FAVOR code incorporates the most important features of both OCA-P and VISA-II and contains some new capabilities such as PFM global modeling methodology, the capability to approximate the effects of thermal streaming on circumferential flaws located inside a plume region created by fluid and thermal stratification, a library of stress intensity factor influence coefficients, generated by the NQA-1 certified ABAQUS computer code, for an adequate range of two- and three-dimensional inside surface flaws, the flexibility to generate a variety of output reports, and user friendliness.
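In the spirit of the PFM calculation described above, the toy Monte Carlo sketch below estimates a conditional probability of crack initiation by comparing a sampled fracture toughness with an applied stress intensity factor. The toughness-curve coefficients, distributions, and loading are illustrative assumptions, not FAVOR inputs.

    # Monte Carlo flavor of a probabilistic fracture mechanics calculation (sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000
    K_applied = 60.0                     # applied K_I at the crack tip [MPa*sqrt(m)] (assumed)
    RT_ndt = rng.normal(60.0, 15.0, N)   # embrittlement reference temperature [C] (assumed)
    T = 80.0                             # crack-tip metal temperature during transient [C]
    # Exponential lower-bound toughness curve shape (illustrative coefficients only):
    K_Ic = 36.5 + 3.1 * np.exp(0.036 * (T - RT_ndt + 56.0))
    K_Ic = np.minimum(K_Ic + rng.normal(0.0, 10.0, N), 220.0)  # scatter + upper-shelf cap
    print(f"P(initiation | transient) ~= {np.mean(K_applied > K_Ic):.4f}")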
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
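A minimal method-of-manufactured-solutions check, of the kind the verification methodology above formalizes, looks like this for a 1D diffusion solver; the equation, manufactured solution, and grids are assumed for illustration.

    # Method of manufactured solutions: pick u(x) = sin(pi x), derive f = pi^2 sin(pi x)
    # for -u'' = f, and confirm the solver converges at its formal order (2).
    import numpy as np

    def solve(n):
        """Second-order finite differences for -u'' = f on [0,1], u(0) = u(1) = 0."""
        x = np.linspace(0.0, 1.0, n + 1)
        h = 1.0 / n
        f = np.pi**2 * np.sin(np.pi * x[1:-1])        # manufactured source term
        A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h**2
        u = np.linalg.solve(A, f)
        return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))   # max error vs. exact u

    e1, e2 = solve(32), solve(64)
    print(f"observed order of accuracy: {np.log2(e1 / e2):.2f}")   # ~2.0 expected

Observing the formal order of accuracy on successively refined grids is the acceptance criterion of the verification step.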
Development and evaluation of clicker methodology for introductory physics courses
NASA Astrophysics Data System (ADS)
Lee, Albert H.
Many educators understand that lectures are cost-effective but not learning-efficient, so they continue to search for ways to increase active student participation in this traditionally passive learning environment. In-class polling systems, or "clickers", are inexpensive and reliable tools that allow students to actively participate in lectures by answering multiple-choice questions. Students assess their learning in real time by observing instant polling summaries displayed in front of them. This in turn motivates additional discussions which increase the opportunity for active learning. We wanted to develop a comprehensive clicker methodology that creates an active lecture environment for a broad spectrum of students taking introductory physics courses. We wanted our methodology to incorporate many findings of contemporary learning science. It is recognized that learning requires active construction; students need to be actively involved in their own learning process. Learning also depends on preexisting knowledge; students construct new knowledge and understandings based on what they already know and believe. Learning is context dependent; students who have learned to apply a concept in one context may not be able to recognize and apply the same concept in a different context, even when both contexts are considered to be isomorphic by experts. On this basis, we developed question sequences, each involving the same concept but having different contexts. Answer choices are designed to address students' preexisting knowledge. These sequences are used with the clickers to promote active discussions and multiple assessments. We have created, validated, and evaluated sequences sufficient in number to populate all of the introductory physics courses. Our research found that using clickers with our question sequences significantly improved student conceptual understanding. Our research also found how best to measure student conceptual gain using research-based instruments. Finally, we discovered that students need to have full access to the question sequences after lectures to reap the maximum benefit. Chapter 1 provides an introduction to our research. Chapter 2 provides a literature review relevant to our research. Chapter 3 discusses the creation of the clicker question sequences. Chapter 4 describes the validation process, involving both physics experts and introductory physics students. Chapter 5 describes how the sequences have been used with clickers in lectures. Chapter 6 provides the evaluation of the effectiveness of the clicker methodology. Chapter 7 contains a brief summary of research results and conclusions.
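A common research-based way to quantify the conceptual improvement referred to above is Hake's normalized gain on a pre/post concept inventory (whether this exact statistic is the one adopted in the dissertation is an assumption):

    \[
    \langle g \rangle \;=\; \frac{\%\langle \mathrm{post} \rangle - \%\langle \mathrm{pre} \rangle}{100 - \%\langle \mathrm{pre} \rangle}
    \]

For example, a class averaging 40% before instruction and 70% after yields ⟨g⟩ = 30/60 = 0.5, a medium gain under Hake's convention.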
Second Language Listening Strategy Research: Methodological Challenges and Perspectives
ERIC Educational Resources Information Center
Santos, Denise; Graham, Suzanne; Vanderplank, Robert
2008-01-01
This paper explores methodological issues related to research into second language listening strategies. We argue that a number of central questions regarding research methodology in this line of enquiry are underexamined, and we engage in the discussion of three key methodological questions: (1) To what extent is a verbal report a valid and…
ERIC Educational Resources Information Center
Newton, Paul E.
2016-01-01
This paper argues that the dominant framework for conceptualizing validation evidence and analysis--the "five sources" framework from the 1999 "Standards"--is seriously limited. Its limitation raises a significant barrier to understanding the nature of comprehensive validation, and this presents a significant threat to…
Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Hung, Pei-Fang
2009-01-01
To develop a theoretical, functional model of community navigation for individuals with cognitive impairments: the Activities of Community Transportation (ACTs). Iterative design using qualitative methods (i.e. document review, focus groups and observations). Four agencies providing travel training to adults with cognitive impairments in the USA participated in the validation study. A thorough document review and series of focus groups led to the development of a comprehensive model (ACTs Wheels) delineating the requisite steps and skills for community navigation. The model was validated and updated based on observations of 395 actual trips by travellers with navigational challenges from the four participating agencies. Results revealed that the 'ACTs Wheel' models were complete and comprehensive. The 'ACTs Wheels' represent a comprehensive model of the steps needed to navigate to destinations using paratransit and fixed-route public transportation systems for travellers with cognitive impairments. Suggestions are made for future investigations of community transportation for this population.
Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta
2009-03-01
In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.
This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow for GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by progression of projects or geological potential for development.
Schiffman, Eric; Ohrbach, Richard; Truelove, Edmond; Look, John; Anderson, Gary; Goulet, Jean-Paul; List, Thomas; Svensson, Peter; Gonzalez, Yoly; Lobbezoo, Frank; Michelotti, Ambra; Brooks, Sharon L.; Ceusters, Werner; Drangsholt, Mark; Ettlin, Dominik; Gaul, Charly; Goldberg, Louis J.; Haythornthwaite, Jennifer A.; Hollender, Lars; Jensen, Rigmor; John, Mike T.; De Laat, Antoon; de Leeuw, Reny; Maixner, William; van der Meulen, Marylee; Murray, Greg M.; Nixdorf, Donald R.; Palla, Sandro; Petersson, Arne; Pionchon, Paul; Smith, Barry; Visscher, Corine M.; Zakrzewska, Joanna; Dworkin, Samuel F.
2015-01-01
Aims The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project—the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings. Results The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD as well as valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments’ 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions. Conclusion The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. 
These validated instruments allow for identification of patients with a range of simple to complex TMD presentations. PMID:24482784
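For readers unfamiliar with the reliability statistic quoted above (kappa ≥ 0.85), the following is a minimal sketch, using hypothetical examiner ratings, of how an inter-examiner Cohen's kappa is computed from paired diagnostic calls; the Validation Project's actual data are not used.

```python
# Sketch of Cohen's kappa for inter-examiner agreement on diagnostic calls.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning labels to the same cases."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    n = len(a)
    # Confusion matrix between the two raters.
    cm = np.array([[np.sum((a == i) & (b == j)) for j in labels] for i in labels])
    p_observed = np.trace(cm) / n
    p_expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical example: two examiners classifying 10 patients as
# TMD-positive (1) or TMD-negative (0).
print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
                   [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]))
```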
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, J. R.; Peng, E.; Ahmad, Z.
2015-05-15
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
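The photon-by-photon idea is simple to illustrate. Below is a minimal sketch under strong simplifying assumptions: a single star, a Gaussian point-spread function, and a Poisson-sampled photon count; PhoSim's actual atmosphere, optics, and detector physics are far richer.

```python
# Sketch of the photon Monte Carlo idea: draw individual photons from a source
# model and accumulate them on a detector grid. Gaussian PSF and flux values
# are illustrative assumptions, not PhoSim's physics.
import numpy as np

rng = np.random.default_rng(42)

def simulate_star(mean_photons, x0, y0, psf_sigma, npix=32):
    """Accumulate Poisson-sampled photons, blurred by a Gaussian PSF, into an image."""
    n = rng.poisson(mean_photons)                  # shot noise on the photon count
    dx, dy = rng.normal(0.0, psf_sigma, (2, n))    # per-photon PSF displacement
    image, _, _ = np.histogram2d(x0 + dx, y0 + dy,
                                 bins=npix, range=[[0, npix], [0, npix]])
    return image

img = simulate_star(mean_photons=5000, x0=16.0, y0=16.0, psf_sigma=1.5)
print(img.sum(), img.max())
```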
Barcoding Sponges: An Overview Based on Comprehensive Sampling
Vargas, Sergio; Schuster, Astrid; Sacher, Katharina; Büttner, Gabrielle; Schätzle, Simone; Läuchli, Benjamin; Hall, Kathryn; Hooper, John N. A.; Erpenbeck, Dirk; Wörheide, Gert
2012-01-01
Background Phylum Porifera includes ∼8,500 valid species distributed world-wide in aquatic ecosystems ranging from ephemeral fresh-water bodies to coastal environments and the deep-sea. The taxonomy and systematics of sponges is complicated, and morphological identification can be both time consuming and erroneous due to phenotypic convergence and secondary losses. DNA barcoding can provide sponge biologists with a simple and rapid method for the identification of samples of unknown taxonomic membership. The Sponge Barcoding Project (www.spongebarcoding.org), the first initiative to barcode a non-bilaterian metazoan phylum, aims to provide a comprehensive DNA barcode database for Phylum Porifera. Methodology/Principal Findings ∼7,400 sponge specimens have been extracted, and amplification of the standard COI barcoding fragment has been attempted for approximately 3,300 museum samples with ∼25% mean amplification success. Based on this comprehensive sampling, we present the first report on the workflow and progress of the sponge barcoding project, and discuss some common pitfalls inherent to the barcoding of sponges. Conclusion A DNA-barcoding workflow capable of processing potentially large sponge collections has been developed and is routinely used for the Sponge Barcoding Project with success. Sponge-specific problems such as the frequent co-amplification of non-target organisms have been detected and potential solutions are currently under development. The initial success of this innovative project has already demonstrated considerable refinement of sponge systematics, evaluating morphometric character importance, geographic phenotypic variability, and the utility of the standard barcoding fragment for Porifera (despite its conserved evolution within this basal metazoan phylum). PMID:22802937
ERIC Educational Resources Information Center
Çelik, Harun; Pektas, Hüseyin Miraç
2017-01-01
A one-group quasi-experimental design and survey methodology were used to investigate the effect of virtual laboratory practices on preservice teachers' (N = 29) graphic comprehension and interpretation skills with different learning approaches. Pretest and posttest data were collected with the Test of Understanding Kinematic Graphs. The Learning…
ERIC Educational Resources Information Center
Al-Hebaishi, Safaa Mohammad
2017-01-01
Peer teaching has become a productive learning strategy at all education levels. Peer Instruction Method is carried out in a range of forms and contexts like co-tutoring, reciprocal tutoring and discussion groups without teachers. To examine the effectiveness of using the peer instruction method to enhance the conceptual comprehension of…
ERIC Educational Resources Information Center
Fealy, Erin Marie
2010-01-01
The purpose of this case study research was to explore the effects of explicit instruction of graphic organizers to support students' understandings of informational text. An additional purpose was to investigate students' perceptions of using graphic organizers as a comprehension strategy. Using case study methodology, this study occurred…
ERIC Educational Resources Information Center
Riley, Jason M.; Ellegood, William A.; Solomon, Stanislaus; Baker, Jerrine
2017-01-01
Purpose: This study aims to understand how mode of delivery, online versus face-to-face, affects comprehension when teaching operations management concepts via a simulation. Conceptually, the aim is to identify factors that influence the students' ability to learn and retain new concepts. Design/methodology/approach: Leveraging Littlefield…
ERIC Educational Resources Information Center
Redcay, Jessica D.; Preston, Sean M.
2016-01-01
Purpose: This study aims to examine the differences in second grade students' reading fluency and comprehension scores when using varying levels of teacher-guided iPad® app instruction to determine effective reading practices. Design/methodology/approach: This study reports the results of the quasi-experimental pre-post study by providing…
ERIC Educational Resources Information Center
Shintani, Natsuko; Li, Shaofeng; Ellis, Rod
2013-01-01
This article reports a meta-analysis of studies that investigated the relative effectiveness of comprehension-based instruction (CBI) and production-based instruction (PBI). The meta-analysis only included studies that featured a direct comparison of CBI and PBI in order to ensure methodological and statistical robustness. A total of 35 research…
ERIC Educational Resources Information Center
Grueneich, Royal; Trabasso, Tom
This review of research involving children's moral judgment of literature indicates that such research has been plagued by serious methodological problems stemming largely from the fact that the stimulus materials used to assess children's comprehension and evaluations have tended to be poorly constructed. It contends that this forces children to…
ERIC Educational Resources Information Center
Kaefer, Tanya; Pinkham, Ashley M.; Neuman, Susan B.
2017-01-01
Research (Evans & Saint-Aubin, 2005) suggests systematic patterns in how young children visually attend to storybooks. However, these studies have not addressed whether visual attention is predictive of children's storybook comprehension. In the current study, we used eye-tracking methodology to examine two-year-olds' visual attention while…
Evaluating Comprehensive School Reform Models at Scale: Focus on Implementation
ERIC Educational Resources Information Center
Vernez, Georges; Karam, Rita; Mariano, Louis T.; DeMartini, Christine
2006-01-01
This study was designed to fill the "implementation measurement" gap. A methodology to quantitatively measure the level of Comprehensive School Reform (CSR) implementation that can be used across a variety of CSR models was developed, and then applied to measure actual implementation of four different CSR models in a large number of schools. The…
ERIC Educational Resources Information Center
Gordon, Peter C.; Hendrick, Randall; Johnson, Marcus; Lee, Yoonhyoung
2006-01-01
The nature of working memory operation during complex sentence comprehension was studied by means of eye-tracking methodology. Readers had difficulty when the syntax of a sentence required them to hold 2 similar noun phrases (NPs) in working memory before syntactically and semantically integrating either of the NPs with a verb. In sentence …
ERIC Educational Resources Information Center
Rahman, Taslima; Mislevy, Robert J.
2017-01-01
To demonstrate how methodologies for assessing reading comprehension can grow out of views of the construct suggested in the reading research literature, we constructed tasks and carried out psychometric analyses that were framed in accordance with 2 leading reading models. In estimating item difficulty and subsequently, examinee proficiency, an…
Select Methodology for Validating Advanced Satellite Measurement Systems
NASA Technical Reports Server (NTRS)
Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.
2008-01-01
Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.
Dosenovic, Svjetlana; Jelicic Kadic, Antonia; Vucic, Katarina; Markovina, Nikolina; Pieper, Dawid; Puljak, Livia
2018-05-08
Systematic reviews (SRs) in the field of neuropathic pain (NeuP) are increasingly important for decision-making. However, methodological flaws in SRs can reduce the validity of conclusions. Hence, it is important to assess the methodological quality of NeuP SRs critically. Additionally, it remains unclear which assessment tool should be used. We studied the methodological quality of SRs published in the field of NeuP and compared two assessment tools. We systematically searched 5 electronic databases to identify SRs of randomized controlled trials of interventions for NeuP available up to March 2015. Two independent reviewers assessed the methodological quality of the studies using the Assessment of Multiple Systematic Reviews (AMSTAR) and the revised AMSTAR (R-AMSTAR) tools. The scores were converted to percentiles and ranked into 4 grades to allow comparison between the two checklists. Gwet's AC1 coefficient was used for interrater reliability assessment. The 97 included SRs had a wide range of methodological quality scores (AMSTAR median (IQR): 6 (5-8) vs. R-AMSTAR median (IQR): 30 (26-35)). The overall agreement score between the 2 raters was 0.62 (95% CI 0.39-0.86) for AMSTAR and 0.62 (95% CI 0.53-0.70) for R-AMSTAR. The 31 Cochrane systematic reviews (CSRs) were consistently ranked higher than the 66 non-Cochrane systematic reviews (NCSRs). The analysis of individual domains showed the best compliance in a comprehensive literature search (item 3) on both checklists. The results for the domain that was the least compliant differed: conflict of interest (item 11) was the item most poorly reported on AMSTAR vs. publication bias assessment (item 10) on R-AMSTAR. A high positive correlation between the total AMSTAR and R-AMSTAR scores for all SRs, as well as for CSRs and NCSRs, was observed. The methodological quality of the analyzed SRs in the field of NeuP was not optimal, and CSRs had a higher quality than NCSRs. Both AMSTAR and R-AMSTAR tools produced comparable quality ratings. Our results point to weaknesses in the methodology of existing SRs on interventions for the management of NeuP and call for future improvement through better adherence to the analyzed quality checklists, either AMSTAR or R-AMSTAR.
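As a pointer for readers reproducing the interrater analysis, here is a minimal sketch of Gwet's AC1 coefficient for two raters; the ratings below are hypothetical, and the review's AMSTAR item data are not reproduced.

```python
# Sketch of Gwet's AC1 agreement coefficient for two raters over categorical items.
import numpy as np

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1: (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    q = len(labels)
    p_observed = np.mean(a == b)
    # pi_k: average of the two raters' marginal proportions for each category.
    pi = np.array([(np.mean(a == k) + np.mean(b == k)) / 2.0 for k in labels])
    p_expected = np.sum(pi * (1.0 - pi)) / (q - 1)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical binary item ratings (e.g. "criterion met" = 1) from two reviewers.
print(gwet_ac1([1, 1, 0, 1, 0, 1, 1, 0], [1, 0, 0, 1, 0, 1, 1, 1]))
```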
ERIC Educational Resources Information Center
Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela
2014-01-01
Background: The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the International Classification of Functioning,…
Validation of the Simple View of Reading in Hebrew--A Semitic Language
ERIC Educational Resources Information Center
Joshi, R. Malatesha; Ji, Xuejun Ryan; Breznitz, Zvia; Amiel, Meirav; Yulia, Astri
2015-01-01
The Simple View of Reading (SVR) in Hebrew was tested by administering decoding, listening comprehension, and reading comprehension measures to 1,002 students from Grades 2 to 10 in the northern part of Israel. Results from hierarchical regression analyses supported the SVR in Hebrew with decoding and listening comprehension measures explaining…
Berthenet, Marion; Vaillancourt, Régis; Pouliot, Annie
2016-01-01
Poor health literacy has been recognized as a limiting factor in the elderly's ability to comprehend written or verbal medication information and to successfully adhere to medical regimens. The objective of this study was to validate a set of pictograms depicting medication instructions for use among the elderly to support health literacy. Elderly outpatients were recruited in 3 community pharmacies in Canada. One-on-one structured interviews were conducted to assess comprehension of 76 pictograms from the International Pharmaceutical Federation. Comprehension was assessed using transparency testing and pictogram translucency, or the degree to which the pictogram represents the intended message. A total of 135 participants were enrolled in this study, and 76 pictograms were assessed. A total of 50 pictograms achieved more than 67% comprehension. Pictograms depicting precautions and warnings against certain side effects were generally not well understood. Gender, age, and education level all had a significant impact on the interpretation scores of certain individual pictograms. When all pictograms were included, younger males had a significantly higher comprehension score than older females, and participants with a higher level of education provided significantly higher translucency scores. Even among pictograms that reached the comprehension threshold set by the International Organization for Standardization in the general population, only 50 achieved more than 67% comprehension among the elderly, confirming that validation in this subpopulation should be conducted prior to using specific pictograms. Accompanying pictograms with education and important counseling points remains extremely important.
The development of a multimedia online language assessment tool for young children with autism.
Lin, Chu-Sui; Chang, Shu-Hui; Liou, Wen-Ying; Tsai, Yu-Show
2013-10-01
This study aimed to provide early childhood special education professionals with a standardized and comprehensive language assessment tool for the early identification of language learning characteristics (e.g., hyperlexia) of young children with autism. In this study, we used computer technology to develop a multimedia online language assessment tool that presents auditory or visual stimuli. This online comprehensive language assessment consists of six subtests: decoding, homographs, auditory vocabulary comprehension, visual vocabulary comprehension, auditory sentence comprehension, and visual sentence comprehension. Three hundred typically developing children and 35 children with autism from Tao-Yuan County in Taiwan aged 4-6 participated in this study. The Cronbach α values of the six subtests ranged from .64 to .97. The variance explained by the six subtests ranged from 14% to 56%, the concurrent validity of each subtest with the Peabody Picture Vocabulary Test-Revised ranged from .21 to .45, and the predictive validity of each subtest with the WISC-III ranged from .47 to .75. This assessment tool was also found to differentiate children with autism with up to 92% accuracy. These results indicate that this assessment tool has both adequate reliability and validity. Additionally, all 35 children with autism completed the entire assessment without exhibiting any extremely troubling behaviors. However, future research is needed to increase the sample size of both typically developing children and young children with autism and to overcome the technical challenges associated with internet issues. Copyright © 2013 Elsevier Ltd. All rights reserved.
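The reliability figures quoted above are Cronbach's α values. A minimal sketch of the computation, using a simulated item-score matrix rather than the study's data, follows.

```python
# Sketch of Cronbach's alpha for an (n_subjects x n_items) score matrix.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)         # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Simulated data: 300 children, 6 items driven by one shared latent ability.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.8, size=(300, 6))
print(cronbach_alpha(items))
```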
Sears, Lindsay E.; Agrawal, Sangeeta; Sidney, James A.; Castle, Patricia H.; Coberley, Carter R.; Witters, Dan; Pope, James E.; Harter, James K.
2014-01-01
Abstract Building upon extensive research from 2 validated well-being instruments, the objective of this research was to develop and validate a comprehensive and actionable well-being instrument that informs and facilitates improvement of well-being for individuals, communities, and nations. The goals of the measure were comprehensiveness, validity and reliability, significant relationships with health and performance outcomes, and diagnostic capability for intervention. For measure development and validation, questions from the Well-being Assessment and Wellbeing Finder were simultaneously administered as a test item pool to over 13,000 individuals across 3 independent samples. Exploratory factor analysis was conducted on a random selection from the first sample and confirmed in the other samples. Further evidence of validity was established through correlations to the established well-being scores from the Well-Being Assessment and Wellbeing Finder, and individual outcomes capturing health care utilization and productivity. Results showed the Well-Being 5 score comprehensively captures the known constructs within well-being, demonstrates good reliability and validity, significantly relates to health and performance outcomes, is diagnostic and informative for intervention, and can track and compare well-being over time and across groups. With this tool, well-being deficiencies within a population can be effectively identified, prioritized, and addressed, yielding the potential for substantial improvements to the health status, performance, and quality of life for individuals and cost savings for stakeholders. (Population Health Management 2014;17:357–365) PMID:24892873
Epistemological Dialogue of Validity: Building Validity in Educational and Social Research
ERIC Educational Resources Information Center
Cakir, Mustafa
2012-01-01
The notion of validity in the social sciences is evolving and is influenced by philosophy of science, critiques of objectivity, and epistemological debates. Methodology for validation of knowledge claims is diverse across different philosophies of science. In other words, the definition of validity and the way to establish it have evolved as…
Yu, H H; Bi, X; Liu, Y Y
2017-08-10
Objective: To evaluate the reliability and validity of the Chinese version of the comprehensive score for financial toxicity (COST), based on patient-reported outcome measures. Methods: A total of 118 cancer patients were interviewed face-to-face by well-trained investigators. Cronbach's α and the Pearson correlation coefficient were used to evaluate reliability. The content validity index (CVI) and exploratory factor analysis (EFA) were used to evaluate content validity and construct validity, respectively. Results: The Cronbach's α coefficient was 0.889 for the whole questionnaire, with test-retest coefficients between 0.77 and 0.98. The scale-content validity index (S-CVI) was 0.82, with item-content validity indices (I-CVI) between 0.83 and 1.00. Two components were extracted in the exploratory factor analysis, with a cumulative variance contribution of 68.04% and loadings >0.60 on every item. Conclusion: The Chinese version of the COST scale showed high reliability and good validity, and thus can be applied to assess the financial situation of cancer patients.
Statistical Anomalies of Bitflips in SRAMs to Discriminate SBUs From MCUs
NASA Astrophysics Data System (ADS)
Clemente, Juan Antonio; Franco, Francisco J.; Villa, Francesca; Baylac, Maud; Rey, Solenne; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul
2016-08-01
Recently, the occurrence of multiple events in static tests has been investigated by checking the statistical distribution of the difference between the addresses of the words containing bitflips. That method has been successfully applied to Field Programmable Gate Arrays (FPGAs) and the original authors indicate that it is also valid for SRAMs. This paper presents a modified methodology that is based on checking the XORed addresses of the words with bitflips, rather than their difference. Irradiation tests on CMOS 130 & 90 nm SRAMs with 14-MeV neutrons have been performed to validate this methodology. Results in high-altitude environments are also presented and cross-checked with theoretical predictions. In addition, this methodology has been used to detect modifications in the organization of these memories. Theoretical predictions have been validated with actual data provided by the manufacturer.
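The core of the modified methodology is easy to sketch: XOR the addresses of all pairs of words containing bitflips and look for XOR values that repeat far more often than a uniform single-bit-upset (SBU) model would predict. The addresses below are hypothetical.

```python
# Sketch of the XOR-based check for discriminating SBUs from MCUs.
from collections import Counter
from itertools import combinations

flip_addrs = [0x01A4, 0x01A5, 0x3F10, 0x3F11, 0x7C22, 0x0B90]  # words with bitflips

xor_counts = Counter(a ^ b for a, b in combinations(flip_addrs, 2))

# Under pure SBUs the XOR values are spread nearly uniformly; a repeated value
# (here 0x0001, produced by both adjacent pairs) hints at multiple-cell upsets
# in physically related words.
for value, count in xor_counts.most_common(3):
    print(f"XOR value 0x{value:04X} occurred {count} time(s)")
```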
Kim, Han-Soo; Yun, JiYeon; Kang, Seungcheol; Han, Ilkyu
2015-07-01
A Korean version of the Toronto Extremity Salvage Score (TESS), a widely used disease-specific patient-reported questionnaire for assessing the physical function of sarcoma patients, has not been developed. The aims of this study were: 1) to translate and cross-culturally adapt the TESS into Korean, and 2) to examine its comprehensibility, reliability and validity. The TESS was translated into Korean, then translated back into English, and reviewed by a committee to develop the consensus version of the Korean TESS. The Korean TESS was administered to 126 patients to examine its comprehensibility, reliability, and validity. Comprehensibility was high, as the patients rated questions as "easy" or "very easy" in 96% of cases for the TESS lower extremity (LE) and in 97% for the TESS upper extremity (UE). Test-retest reliability by intraclass correlation coefficient (0.874 for LE and 0.979 for UE) and internal consistency by Cronbach's alpha (0.978 for LE and 0.989 for UE) were excellent. The Korean TESS correlated with the MSTS score (r = 0.772 for LE and r = 0.635 for UE) and with the physical functioning domain of the EORTC QLQ-C30 (r = 0.840 for LE and r = 0.630 for UE). Our study suggests that the Korean version of the TESS is a comprehensible, reliable, and valid instrument to measure patient-reported functional outcome in patients with extremity sarcoma. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Edwards, Lisa M.; Pedrotti, Jennifer Teramoto
2008-01-01
This study describes a comprehensive content and methodological review of articles about multiracial issues in 6 journals related to counseling up to the year 2006. The authors summarize findings about the 18 articles that emerged from this review of the "Journal of Counseling Psychology," "Journal of Counseling & Development," "The Counseling…
The Case for Mixed Methodologies in Researching the Teacher's Use of Humour in Adult Education
ERIC Educational Resources Information Center
Struthers, John
2011-01-01
Inconsistencies within the literature result in teachers not having sufficient guidance to develop their humour use in support of learning without risking their professionalism. This article argues for more comprehensive evidence to guide teachers' use of humour, based on mixed methodological approaches. The case is also made for the Interpersonal…
Popes in the Pizza: Analyzing Activity Reports to Create and Sustain a Strategic Plan
ERIC Educational Resources Information Center
Sweet, Charlie; Blythe, Hal; Keeley, E. J.; Forsyth, Ben
2008-01-01
This article presents a practical methodology for creating and sustaining strategic planning, the task analysis. Utilizing our Teaching & Learning Center Strategic Plan as a model, we demonstrate how working with a weekly status report provides a comprehensive listing of detail necessary to analyze and revise the plan. The new methodology is…
ERIC Educational Resources Information Center
Meyiwa, Thenjiwe; Chisanga, Theresa; Mokhele, Paul; Sotshangane, Nkosinathi; Makhanya, Sizakele
2014-01-01
The context in which self-study research is conducted is sometimes complex, affecting the manner in which related data is gathered and interpreted. This article comprises collaboration between three students and two supervisors. It shares methodological choices made by graduate students and supervisors of a rural university at which, self-study…
ERIC Educational Resources Information Center
Margolis, Jesse L.; Nussbaum, Miguel; Rodriguez, Patricio; Rosas, Ricardo
2006-01-01
Many school systems, in both the developed and developing world, are implementing educational technology to assist in student learning. However, there is no clear consensus on how to evaluate these new technologies. This paper proposes a comprehensive methodology for estimating the value of a new educational technology in three steps: benefit…
ERIC Educational Resources Information Center
Saparnis, Gintaras; Saparniene, Diana
2006-01-01
Purpose: The purpose of the research is to reveal the psychosemantics of the opinion of school principals and vice principals on the issues of the development of school management. Methodology: From the methodological point of view the research is based on the teaching of the empirical social research about qualitative and quantitative research…
ERIC Educational Resources Information Center
Stephen, Timothy D.
2011-01-01
The problem of how to rank academic journals in the communication field (human interaction, mass communication, speech, and rhetoric) is one of practical importance to scholars, university administrators, and librarians, yet there is no methodology that covers the field's journals comprehensively and objectively. This article reports a new ranking…
ERIC Educational Resources Information Center
Sevlever, Melina; Gillis, Jennifer M.
2010-01-01
Several authors have suggested that children with autism are impaired in their ability to imitate others. However, diverse methodologies, contradictory findings, and varying theoretical explanations continue to exist in the literature despite decades of research. A comprehensive account of imitation in children with autism is hampered by the lack…
Camacho-Sandoval, Rosa; Sosa-Grande, Eréndira N; González-González, Edith; Tenorio-Calvo, Alejandra; López-Morales, Carlos A; Velasco-Velázquez, Marco; Pavón-Romero, Lenin; Pérez-Tapia, Sonia Mayra; Medina-Rivero, Emilio
2018-06-05
Physicochemical and structural properties of the proteins used as active pharmaceutical ingredients of biopharmaceuticals are determinant for carrying out their biological activity. In this regard, the assays intended to evaluate the functionality of biopharmaceuticals provide confirmatory evidence that they possess the appropriate physicochemical properties and structural conformation. The validation of the methodologies used for the assessment of critical quality attributes of biopharmaceuticals is a key requirement for manufacturing under GMP environments. Herein we present the development and validation of a flow cytometry-based methodology for the evaluation of adalimumab's affinity towards membrane-bound TNFα (mTNFα) on recombinant CHO cells. This in vitro methodology measures the interaction between an in-solution antibody and its target molecule on the cell surface through a fluorescent signal. The characteristics evaluated during the validation exercise showed that this methodology is suitable for its intended purpose. The assay was demonstrated to be accurate (r² = 0.92, slope = 1.20), precise (%CV ≤ 18.31) and specific (curve fitting, r² = 0.986-0.997) for evaluating binding of adalimumab to mTNFα. The results obtained here provide evidence that detection by flow cytometry is a viable alternative for bioassays used in the pharmaceutical industry. In addition, this methodology could be standardized for the evaluation of other biomolecules acting through the same mechanism of action. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Airport Landside - Volume III : ALSIM Calibration and Validation.
DOT National Transportation Integrated Search
1982-06-01
This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...
Awaisu, Ahmed; Samsudin, Sulastri; Amir, Nur A; Omar, Che G; Hashim, Mohd I; Mohamad, Mohamed H Nik; Shafie, Asrul A; Hassali, Mohamed A
2010-05-22
The purpose of the linguistic validation of the Wisconsin Smoking Withdrawal Scale (WSWS) was to produce a translated version in the Malay language that was "conceptually equivalent" to the original U.S. English version for use in clinical practice and research. A seven-member translation committee conducted the translation process using the following methodology: production of two independent forward translations; comparison and reconciliation of the translations; backward translation of the first reconciled version; comparison of the original WSWS and the backward version leading to the production of the second reconciled version; pilot testing and review of the translation; and finalization. Linguistic and conceptual issues arose during the process of translating the instrument, particularly pertaining to the title, instructions, and some of the items of the scale. In addition, the researchers had to find culturally acceptable equivalents for some terms and idiomatic phrases. Notable among these were expressions such as "irritability", "feeling upbeat", and "nibbling on snacks", which had to be replaced by culturally appropriate equivalents. During the cognitive debriefing and clinician review processes, the Malay translated version of the WSWS was found to be easily comprehensible, clear, and appropriate for the smoking withdrawal symptoms intended to be measured. We applied a rigorous translation method to ensure the conceptual equivalence and acceptability of the WSWS in Malay prior to its utilization in research and clinical practice. However, to complete the cultural adaptation process, a future psychometric validation is planned among Malay speakers.
Gagne, Jeffrey R.; Van Hulle, Carol A.; Aksan, Nazan; Essex, Marilyn J.; Goldsmith, H. Hill
2010-01-01
The authors describe the development and initial validation of a home-based version of the Laboratory Temperament Assessment Battery (Lab-TAB), which was designed to assess childhood temperament using a comprehensive series of emotion-eliciting behavioral episodes. This paper provides researchers with general guidelines for assessing specific behaviors using the Lab-TAB and for forming behavioral composites that correspond to commonly researched temperament dimensions. We used mother ratings and independent post-visit observer ratings to provide validity evidence in a community sample of 4.5-year-old children. Twelve Lab-TAB behavioral episodes were employed, yielding 24 within-episode temperament components that collapsed into 9 higher-level composites (Anger, Sadness, Fear, Shyness, Positive Expression, Approach, Active Engagement, Persistence, and Inhibitory Control). These dimensions of temperament are similar to those found in questionnaire-based assessments. Correlations among the 9 composites were low to moderate, suggesting relative independence. As expected, agreement between Lab-TAB measures and post-visit observer ratings was stronger than agreement between the Lab-TAB and the mother questionnaire. However, for Active Engagement and Shyness, mother ratings did predict child behavior in the Lab-TAB quite well. Findings demonstrate the feasibility of emotion-eliciting temperament assessment methodologies, suggest appropriate methods for data aggregation into trait-level constructs, and set some expectations for associations between Lab-TAB dimensions and the degree of cross-method convergence between the Lab-TAB and other commonly used temperament assessments. PMID:21480723
Woodruff, Tracey J; Sutton, Patrice
2014-10-01
Synthesizing what is known about the environmental drivers of health is instrumental to taking prevention-oriented action. Methods of research synthesis commonly used in environmental health lag behind systematic review methods developed in the clinical sciences over the past 20 years. We sought to develop a proof of concept of the "Navigation Guide," a systematic and transparent method of research synthesis in environmental health. The Navigation Guide methodology builds on best practices in research synthesis in evidence-based medicine and environmental health. Key points of departure from current methods of expert-based narrative review prevalent in environmental health include a prespecified protocol, standardized and transparent documentation including expert judgment, a comprehensive search strategy, assessment of "risk of bias," and separation of the science from values and preferences. Key points of departure from evidence-based medicine include assigning a "moderate" quality rating to human observational studies and combining diverse evidence streams. The Navigation Guide methodology is a systematic and rigorous approach to research synthesis that has been developed to reduce bias and maximize transparency in the evaluation of environmental health information. Although novel aspects of the method will require further development and validation, our findings demonstrated that improved methods of research synthesis under development at the National Toxicology Program and under consideration by the U.S. Environmental Protection Agency are fully achievable. The institutionalization of robust methods of systematic and transparent review would provide a concrete mechanism for linking science to timely action to prevent harm.
Joffe, Ari R; Bara, Meredith; Anton, Natalie; Nobis, Nathan
2016-09-01
To determine what are considered acceptable standards for animal research (AR) methodology and translation rate to humans, a validated survey was sent to: a) a sample of the general public, via Sampling Survey International (SSI; Canada), Amazon Mechanical Turk (AMT; USA), a Canadian city festival (CF) and a Canadian children's hospital (CH); b) a sample of medical students (two first-year classes); and c) a sample of scientists (corresponding authors and academic paediatricians). There were 1379 responses from the general public sample (SSI, n = 557; AMT, n = 590; CF, n = 195; CH, n = 102), 205/330 (62%) medical student responses, and 23/323 (7%, too few to report) scientist responses. Asked about methodological quality, most of the general public and medical student respondents expect that: AR is of high quality (e.g. anaesthesia and analgesia are monitored, even overnight, and 'humane' euthanasia, optimal statistical design, comprehensive literature review, randomisation and blinding, are performed), and costs and difficulty are not acceptable justifications for lower quality (e.g. costs of expert consultation, or more laboratory staff). Asked about their expectations of translation to humans (of toxicity, carcinogenicity, teratogenicity and treatment findings), most expect translation more than 60% of the time. If translation occurred less than 20% of the time, a minority disagreed that this would "significantly reduce your support for AR". Medical students were more supportive of AR, even if translation occurred less than 20% of the time. Expectations for AR are much higher than empirical data show to have been achieved. 2016 FRAME.
Sabour, Siamak
2018-03-08
The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodology and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical test and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. The qualitative variables of sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, likelihood ratio positive and likelihood ratio negative, as well as odds ratio (i.e., ratio of true to false results), are the most appropriate estimates to evaluate validity of a test compared to a gold standard. In the case of quantitative variables, depending on distribution of the variable, Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess validity.
Kimhy, David; Delespaul, Philippe; Ahn, Hongshik; Cai, Shengnan; Shikhman, Marina; Lieberman, Jeffrey A; Malaspina, Dolores; Sloan, Richard P
2010-11-01
Psychosis has been repeatedly suggested to be affected by increases in stress and arousal. However, there is a dearth of evidence supporting the temporal link between stress, arousal, and psychosis during "real-world" functioning. This paucity of evidence may stem from limitations of current research methodologies. Our aim is to test the feasibility and validity of a novel methodology designed to measure concurrent stress and arousal in individuals with psychosis during "real-world" daily functioning. Twenty patients with psychosis completed a 36-hour ambulatory assessment of stress and arousal. We used the experience sampling method with palm computers to assess stress (10 times per day, 10 AM → 10 PM) along with concurrent ambulatory measurement of cardiac autonomic regulation using a Holter monitor. The clocks of the palm computer and Holter monitor were synchronized, allowing the temporal linking of the stress and arousal data. We used power spectral analysis to determine the parasympathetic contributions to autonomic regulation and sympathovagal balance during the 5 minutes before and after each experience sample. Patients completed 79% of the experience samples (75% with valid concurrent arousal data). Momentary increases in stress had an inverse correlation with concurrent parasympathetic activity (ρ = -.27, P < .0001) and a positive correlation with sympathovagal balance (ρ = .19, P = .0008). Stress and heart rate were not significantly related (ρ = -.05, P = .3875). The findings support the feasibility and validity of our methodology in individuals with psychosis. The methodology offers a novel way to study, in high time resolution, the concurrent, "real-world" interactions between stress, arousal, and psychosis. The authors discuss the methodology's potential applications and future research directions.
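A minimal sketch of the power-spectral step, assuming an evenly resampled interbeat-interval signal in place of real Holter data: estimate band power in the conventional low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) bands over a 5-minute window and form the LF/HF ratio commonly used as a sympathovagal balance index.

```python
# Sketch of heart-rate-variability spectral analysis with a synthetic signal.
import numpy as np
from scipy.signal import welch

fs = 4.0                                    # Hz, resampled interbeat-interval series
t = np.arange(0, 300, 1 / fs)               # a 5-minute window, as in the study
rng = np.random.default_rng(1)
ibi = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t)   # LF oscillation (~0.10 Hz)
           + 0.02 * np.sin(2 * np.pi * 0.25 * t)   # HF / respiratory (~0.25 Hz)
           + 0.005 * rng.standard_normal(t.size))

freqs, psd = welch(ibi - ibi.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integration

lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)
print(f"LF/HF (sympathovagal balance) = {lf / hf:.2f}")
```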
Konz, Tobias; Migliavacca, Eugenia; Dayon, Loïc; Bowman, Gene; Oikonomidi, Aikaterini; Popp, Julius; Rezzi, Serge
2017-05-05
We here describe the development, validation and application of a quantitative methodology for the simultaneous determination of 29 elements in human serum using state-of-the-art inductively coupled plasma triple quadrupole mass spectrometry (ICP-MS/MS). This new methodology offers high-throughput elemental profiling using simple dilution of minimal quantity of serum samples. We report the outcomes of the validation procedure including limits of detection/quantification, linearity of calibration curves, precision, recovery and measurement uncertainty. ICP-MS/MS-based ionomics was used to analyze human serum of 120 older adults. Following a metabolomic data mining approach, the generated ionome profiles were subjected to principal component analysis revealing gender and age-specific differences. The ionome of female individuals was marked by higher levels of calcium, phosphorus, copper and copper to zinc ratio, while iron concentration was lower with respect to male subjects. Age was associated with lower concentrations of zinc. These findings were complemented with additional readouts to interpret micronutrient status including ceruloplasmin, ferritin and inorganic phosphate. Our data supports a gender-specific compartmentalization of the ionome that may reflect different bone remodelling in female individuals. Our ICP-MS/MS methodology enriches the panel of validated "Omics" approaches to study molecular relationships between the exposome and the ionome in relation with nutrition and health.
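A minimal sketch of the data-mining step follows, assuming a simulated samples-by-elements matrix in place of the measured serum concentrations: standardize the ionome and project it onto its first principal components, the coordinates in which gender- and age-related clustering was sought.

```python
# Sketch of PCA on an ionome matrix (subjects x elements); data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
ionome = rng.lognormal(mean=0.0, sigma=0.3, size=(120, 29))  # 120 subjects, 29 elements

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(ionome))
print(scores.shape)   # (120, 2): per-subject coordinates for cluster inspection
```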
Modeling and simulation of high-speed wake flows
NASA Astrophysics Data System (ADS)
Barnhardt, Michael Daniel
High-speed, unsteady flows represent a unique challenge in computational hypersonics research. They are found in nearly all applications of interest, including the wakes of reentry vehicles, RCS jet interactions, and scramjet combustors. In each of these examples, accurate modeling of the flow dynamics plays a critical role in design performance. Nevertheless, literature surveys reveal that very little modern research effort has been made toward understanding these problems. The objective of this work is to synthesize current computational methods for high-speed flows with ideas commonly used to model low-speed, turbulent flows in order to create a framework by which we may reliably predict unsteady, hypersonic flows. In particular, we wish to validate the new methodology for the case of a turbulent wake flow at reentry conditions. Currently, heat shield designs incur significant mass penalties due to the large margins applied to vehicle afterbodies in lieu of a thorough understanding of the wake aerothermodynamics. Comprehensive validation studies are required to accurately quantify these modeling uncertainties. To this end, we select three candidate experiments against which we evaluate the accuracy of our methodology. The first set of experiments concern the Mars Science Laboratory (MSL) parachute system and serve to demonstrate that our implementation produces results consistent with prior studies at supersonic conditions. Second, we use the Reentry-F flight test to expand the application envelope to realistic flight conditions. Finally, in the last set of experiments, we examine a spherical capsule wind tunnel configuration in order to perform a more detailed analysis of a realistic flight geometry. In each case, we find that current 1st order in time, 2nd order in space upwind numerical methods are sufficiently accurate to predict statistical measurements: mean, RMS, standard deviation, and so forth. Further potential gains in numerical accuracy are demonstrated using a new class of flux evaluation schemes in combination with 2nd order dual-time stepping. For cases with transitional or turbulent Reynolds numbers, we show that the detached eddy simulation (DES) method holds clear advantage over heritage RANS methods. From this, we conclude that the current methodology is sufficient to predict heating of external, reentry-type applications within experimental uncertainty.
Automatic programming of arc welding robots
NASA Astrophysics Data System (ADS)
Padmanabhan, Srikanth
Automatic programming of arc welding robots requires automatic access to the geometric description of a part from a solid modeling system, expert weld process knowledge, and the kinematic arrangement of the robot and positioner. Current commercial solid models are incapable of storing explicitly the product and process definitions of weld features. This work presents a paradigm to develop a computer-aided engineering environment that supports complete weld feature information in a solid model and to create an automatic programming system for robotic arc welding. In the first part, welding features are treated as properties or attributes of an object, features which are portions of the object surface--the topological boundary. The structure for representing the features and attributes is a graph called the Welding Attribute Graph (WAGRAPH). The method associates appropriate weld features to geometric primitives, adds welding attributes, and checks the validity of welding specifications. A systematic structure is provided to incorporate welding attributes and coordinate system information in a CSG tree. The specific implementation of this structure using a hybrid solid modeler (IDEAS) and an object-oriented programming paradigm is described. The second part provides a comprehensive methodology to acquire and represent the weld process knowledge required for the proper selection of welding schedules. A methodology of knowledge acquisition using statistical methods is proposed. It is shown that these procedures did little to capture the private knowledge of experts (heuristics), but helped in determining general dependencies and trends. A need was established for building the knowledge-based system from handbook knowledge and for allowing experts to extend it further. A methodology to check the consistency and validity of such knowledge additions is proposed. A mapping shell designed to transform the design features to application-specific weld process schedules is described. A new approach using fixed-path modified continuation methods is proposed in the final section to continuously plan the trajectory of weld seams in an integrated welding robot and positioner environment. The joint displacement, velocity, and acceleration histories along the path, as a function of the path parameter for the best possible welding condition, are provided for the robot and the positioner to track the various paths normally encountered in arc welding.
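To give a flavor of what "features as attributes of the topological boundary" can look like in code, here is a minimal sketch; the class and field names are illustrative assumptions, not the thesis's actual WAGRAPH implementation.

```python
# Sketch of a welding attribute graph: weld features attached to topological
# faces of a part, each carrying process attributes. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class WeldFeature:
    name: str                       # e.g. "fillet_weld_1"
    faces: tuple                    # topological boundary entities the weld joins
    attributes: dict = field(default_factory=dict)   # process attributes

@dataclass
class WeldAttributeGraph:
    features: list = field(default_factory=list)

    def add_feature(self, feature: WeldFeature):
        # Validity check in the spirit of the text: a weld must join two faces.
        if len(feature.faces) != 2:
            raise ValueError("a weld feature must reference exactly two faces")
        self.features.append(feature)

graph = WeldAttributeGraph()
graph.add_feature(WeldFeature("fillet_weld_1", ("face_12", "face_27"),
                              {"process": "GMAW", "leg_size_mm": 6.0}))
print(len(graph.features))
```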
Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)
NASA Astrophysics Data System (ADS)
Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.
2014-04-01
A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimations are based on numerical model results, which provide an appropriate spatio-temporal framework of analysis to guarantee an effective decision-making process. The methodological procedure has been applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and its conceptual approaches as a comprehensive and practical management tool.
Integrated Medical Model Verification, Validation, and Credibility
NASA Technical Reports Server (NTRS)
Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry
2014-01-01
The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM VV Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.
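A minimal sketch of the probabilistic (stochastic) forecasting idea, with hypothetical incidence rates rather than the IMM's historical and cohort data: sample medical-event counts per mission many times and report a distribution rather than a single deterministic number.

```python
# Sketch of a stochastic forecast of medical events over a mission; rates invented.
import numpy as np

rng = np.random.default_rng(3)
incidence_per_day = {"back_pain": 0.004, "headache": 0.01}   # hypothetical rates
mission_days, n_trials = 180, 10_000

# Total event count per simulated mission, summed over condition types.
totals = sum(rng.poisson(rate * mission_days, n_trials)
             for rate in incidence_per_day.values())
print(np.percentile(totals, [5, 50, 95]))   # forecast spread, not a point value
```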
Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold
2016-01-01
Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438
Are validated outcome measures used in distal radial fractures truly valid?
Nienhuis, R. W.; Bhandari, M.; Goslings, J. C.; Poolman, R. W.; Scholtes, V. A. B.
2016-01-01
Objectives: Patient-reported outcome measures (PROMs) are often used to evaluate the outcome of treatment in patients with distal radial fractures. Which PROM to select is often based on assessment of measurement properties, such as validity and reliability. Measurement properties are assessed in clinimetric studies, and results are often reviewed without considering the methodological quality of these studies. Our aim was to systematically review the methodological quality of clinimetric studies that evaluated measurement properties of PROMs used in patients with distal radial fractures, and to make recommendations for the selection of PROMs based on the level of evidence of each individual measurement property. Methods: A systematic literature search was performed in the PubMed, EMbase, CINAHL and PsycINFO databases to identify relevant clinimetric studies. Two reviewers independently assessed the methodological quality of the studies on measurement properties, using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. The level of evidence (strong / moderate / limited / lacking) for each measurement property per PROM was determined by combining the methodological quality and the results of the different clinimetric studies. Results: In all, 19 out of 1508 identified unique studies were included, in which 12 PROMs were rated. The Patient-Rated Wrist Evaluation (PRWE) and the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire were evaluated on the largest number of measurement properties. For the PRWE, there is moderate evidence that its reliability, validity (content and hypothesis testing) and responsiveness are good, and limited evidence that its internal consistency and cross-cultural validity are good and its measurement error acceptable; there is no evidence for its structural and criterion validity. For the DASH, there is moderate evidence that its responsiveness is good, and limited evidence that its reliability and its validity on hypothesis testing are good; there is no evidence for the other measurement properties. Conclusion: According to this systematic review, there is, at best, moderate evidence that the responsiveness of the PRWE and DASH are good, as are the reliability and validity of the PRWE. We recommend these PROMs in clinical studies in patients with distal radial fractures; however, more clinimetric studies of higher methodological quality are needed to adequately determine the other measurement properties. Cite this article: Dr Y. V. Kleinlugtenbelt. Are validated outcome measures used in distal radial fractures truly valid?: A critical assessment using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Bone Joint Res 2016;5:153–161. DOI: 10.1302/2046-3758.54.2000462. PMID:27132246
DOT National Transportation Integrated Search
2015-02-01
Although freeway travel time data have been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...
ERIC Educational Resources Information Center
Pirnay-Dummer, Pablo; Ifenthaler, Dirk
2011-01-01
Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…
ERIC Educational Resources Information Center
Foorman, Barbara R.; Petscher, Yaacov
2011-01-01
In Florida, mean proficiency scores are reported on the Florida Comprehensive Assessment Test (FCAT), as well as recommended learning gains from the developmental scale score. Florida now has another within-year measure of growth in reading comprehension from the Florida Assessments for Instruction in Reading (FAIR). The FAIR reading comprehension…
ERIC Educational Resources Information Center
Belet Boyaci, S. Dilek; Güner, Mediha
2018-01-01
The objective of the present study was to determine the impact of task-based authentic materials on reading comprehension, writing skills and writing motivation in the Turkish language course. The study was conducted with a mixed-design methodology. Quantitative data were collected with a quasi-experimental pre-test/post-test design with…
ERIC Educational Resources Information Center
Almaghyuli, Azizah; Thompson, Hannah; Lambon Ralph, Matthew A.; Jefferies, Elizabeth
2012-01-01
Patients with multimodal semantic impairment following stroke (referred to here as "semantic aphasia" or SA) fail to show the standard effects of frequency in comprehension tasks. Instead, they show absent or even "reverse" frequency effects: i.e., better understanding of less common words. In addition, SA is associated with poor regulatory…
ERIC Educational Resources Information Center
Wolf, Patrick J.
2012-01-01
This report contains a summary of the findings from the various topical reports that comprise the author's comprehensive longitudinal study. As a summary, it does not include extensive details regarding the study samples and scientific methodologies employed in those topical studies. The research revealed a pattern of school choice results that…
ERIC Educational Resources Information Center
Berger, Louis S.; And Others
This report analyzes a two-step program designed to achieve security in the administration of the English Comprehension Level (ECL) test given by the Defense Language Institute. Since the ECL test score is the basis for major administrative and academic decisions, there is great motivation for performing well, and student test compromise is…
2008-09-01
SEP) is a comprehensive, iterative and recursive problem-solving process, applied sequentially top-down by integrated teams. It transforms needs... central integrated design repository. It includes a comprehensive behavior modeling notation to understand the dynamics of a design. CORE is a MBSE...
ERIC Educational Resources Information Center
Brasher, Casey F.
2017-01-01
Reading comprehension assessments often lack instructional utility because they do not accurately pinpoint why a student has difficulty. The varying formats, directions, and response requirements of comprehension assessments lead to differential measurement of underlying skills and contribute to noted amounts of unshared variance among tests. Maze…
Development and Validity of the Rating Scales of Academic Skills for Reading Comprehension
ERIC Educational Resources Information Center
Shapiro, Edward S.; Gebhardt, Sarah; Flatley, Katie; Guard, Kirra B.; Fu, Qiong; Leichman, Erin S.; Calhoon, Mary Beth; Hojnoski, Robin
2017-01-01
The development and psychometric qualities of a measure using teacher judgment to rate performance in reading comprehension for narrative text is described--the Rating Scales for Academic Skills-Reading Comprehension Narrative (RSAS-RCN). Sixty-five teachers from the third, fourth, and fifth grades of 8 elementary schools completed the measure on…
ERIC Educational Resources Information Center
van Steensel, Roel; Oostdam, Ron; van Gelderen, Amos
2013-01-01
On the basis of a validation study of a new test for assessing low-achieving adolescents' reading comprehension skills--the SALT-reading--we analyzed two issues relevant to the field of reading test development. Using the test results of 200 seventh graders, we examined the possibility of identifying reading comprehension subskills and the effects…
ERIC Educational Resources Information Center
Clark, Amy K.
2013-01-01
The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…
ERIC Educational Resources Information Center
Wang, Jing-Ru; Chen, Shin-Feng
2016-01-01
This article reports on the development of an online dynamic approach for assessing and improving students' reading comprehension of science texts--the dynamic assessment for reading comprehension of science text (DARCST). The DARCST blended assessment and response-specific instruction into a holistic learning task for grades 5 and 6 students. The…
Development of a Conceptual Framework to Measure the Social Impact of Burns.
Marino, Molly; Soley-Bori, Marina; Jette, Alan M; Slavin, Mary D; Ryan, Colleen M; Schneider, Jeffrey C; Resnik, Linda; Acton, Amy; Amaya, Flor; Rossi, Melinda; Soria-Saucedo, Rene; Kazis, Lewis E
Measuring community reintegration following burn injury is important to assess the efficacy of therapies designed to optimize recovery. This project aims to develop and validate a conceptual framework for understanding the social impact of burn injuries in adults. The framework is critical for developing the item banks used for a computerized adaptive test. We performed a comprehensive literature review and consulted with clinical experts and burn survivors about the social life areas impacted by burn injury. Focus groups with burn survivors and clinicians were conducted to inform and validate the framework. Transcripts were coded using grounded theory methodology. The World Health Organization's International Classification of Functioning, Disability and Health was chosen to ground the content model. The primary construct identified was social participation, which contains two concepts: societal role and personal relationships. The subdomains chosen for item development were work, recreation and leisure, relating with strangers, and romantic, sexual, family, and informal relationships. Qualitative results strongly suggest that the conceptual model fits the constructs for societal role and personal relationships with the respective subdomains. This conceptual framework has guided the implementation of a large-scale calibration study currently underway, which will lead to a computerized adaptive test for monitoring the social impacts of burn injuries during recovery.
A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors
Shen, Zhengguang; Wang, Qi
2013-01-01
The performance evaluation of sensors is very important in actual applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel concept of health reliability degree (HRD) is defined to indicate a quantitative health level, in contrast to traditional, qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology rests on multi-variable data fusion coupled with a grey comprehensive evaluation method. To acquire the distinct importance of each sensitive unit and the sensitivity of different time points, the information entropy and analytic hierarchy process methods are used, respectively. To verify the feasibility of the proposed strategy, an experimental system for evaluating the health of multifunctional self-validating sensors was designed, and five different health-level situations were examined. The results show that the proposed method is feasible, that the HRD can quantitatively indicate the health level, and that it responds quickly to performance changes of multifunctional sensors. PMID:23291576
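As a rough illustration of the entropy-weighted grey comprehensive evaluation named above, the sketch below computes objective criterion weights by the information-entropy method and scores alternatives by their grey relational grade against an ideal reference. The AHP step for time-point sensitivity is omitted, and the matrix values and the 0.5 resolution coefficient are conventional placeholders, not the paper's.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from an (m alternatives x n criteria)
    matrix of positive scores, via the information-entropy method."""
    P = X / X.sum(axis=0)                         # column-wise proportions
    logP = np.log(np.where(P > 0, P, 1.0))        # log(1) = 0 handles zeros
    E = -(P * logP).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                                   # divergence per criterion
    return d / d.sum()

def grey_relational_grade(X, w, rho=0.5):
    """Grey relational grade of each alternative against the ideal
    (column-wise best) series; assumes larger-is-better criteria that
    are not constant across alternatives."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    delta = 1.0 - Xn                              # distance to the ideal (= 1)
    gamma = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return gamma @ w                              # weighted grade per alternative

X = np.array([[0.9, 0.8, 0.7],                    # placeholder sensor scores
              [0.6, 0.9, 0.8],
              [0.3, 0.4, 0.9]])
print(grey_relational_grade(X, entropy_weights(X)))
```

A higher grade means the alternative sits closer to the ideal series, which is the sense in which an HRD-like score can be read as a quantitative health level.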
Pfaar, O; Bastl, K; Berger, U; Buters, J; Calderon, M A; Clot, B; Darsow, U; Demoly, P; Durham, S R; Galán, C; Gehrig, R; Gerth van Wijk, R; Jacobsen, L; Klimek, L; Sofiev, M; Thibaudon, M; Bergmann, K C
2017-05-01
Clinical efficacy of pollen allergen immunotherapy (AIT) has been broadly documented in randomized controlled trials. The underlying clinical endpoints are analysed in seasonal time periods predefined based on the background pollen concentration. However, any validated or generally accepted definition from academia or regulatory authorities for this relevant pollen exposure intensity or period of time (season) is currently not available. Therefore, this Task Force initiative of the European Academy of Allergy and Clinical Immunology (EAACI) aimed to propose definitions based on expert consensus. A Task Force of the Immunotherapy and Aerobiology and Pollution Interest Groups of the EAACI reviewed the literature on pollen exposure in the context of defining relevant time intervals for evaluation of efficacy in AIT trials. Underlying principles in measuring pollen exposure and associated methodological problems and limitations were considered to achieve a consensus. The Task Force achieved a comprehensive position in defining pollen exposure times for different pollen types. Definitions are presented for 'pollen season', 'high pollen season' (or 'peak pollen period') and 'high pollen days'. This EAACI position paper provides definitions of pollen exposures for different pollen types for use in AIT trials. Their validity as standards remains to be tested in future studies. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML), to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression-and-optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficient...
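A minimal sketch of the Bayesian Monte Carlo half of the BMCML idea follows. The first-order DO recovery model, the flat prior, the observation error, and all numbers are stand-ins: the paper's analytical solution depends on time-variable wind speed, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def do_model(t, k, do_sat=9.0, do0=2.0):
    """Stand-in first-order recovery of lake-averaged dissolved oxygen
    toward saturation (mg/L); not the paper's wind-driven solution."""
    return do_sat - (do_sat - do0) * np.exp(-k * t)

# Synthetic "observations" in place of the lake data
t_obs = np.arange(30.0)
do_obs = do_model(t_obs, k=0.15) + rng.normal(0.0, 0.3, t_obs.size)

# Bayesian Monte Carlo: sample the uncertain rate k from a flat prior,
# then weight each sample by its Gaussian likelihood against the data.
k_samp = rng.uniform(0.01, 0.5, 20_000)
pred = do_model(t_obs[:, None], k_samp[None, :])          # (days, samples)
loglik = -0.5 * (((do_obs[:, None] - pred) / 0.3) ** 2).sum(axis=0)
w = np.exp(loglik - loglik.max())
w /= w.sum()

print("posterior mean k:", (w * k_samp).sum())            # Bayesian MC estimate
print("max-likelihood k:", k_samp[np.argmax(loglik)])     # ML counterpart
```

The posterior-weighted average and the likelihood maximizer correspond to the two estimators the acronym combines; predictive uncertainty follows by propagating the weighted samples through the model.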
Coastal Atmosphere and Sea Time Series (CoASTS)
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; Berthon, Jean-Francoise; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; Alberotanza, Luigi; McClain, Charles R. (Technical Monitor)
2002-01-01
The Coastal Atmosphere and Sea Time Series (CoASTS) Project, aimed at supporting ocean color research and applications, ensured the collection of a comprehensive atmospheric and marine data set from an oceanographic tower located in the northern Adriatic Sea from 1995 up to the time of publication of this document. The instruments and the measurement methodologies used to gather quantities relevant for bio-optical modeling and for the calibration and validation of ocean color sensors are described. Particular emphasis is placed on four items: (1) the evaluation of perturbation effects in radiometric data (i.e., tower-shading, instrument self-shading, and bottom effects); (2) the intercomparison of seawater absorption coefficients from in situ measurements and from laboratory spectrometric analysis of discrete samples; (3) the intercomparison of two filter techniques for in vivo measurement of particulate absorption coefficients; and (4) the analysis of repeatability and reproducibility of the most relevant laboratory measurements carried out on seawater samples (i.e., particulate and yellow substance absorption coefficients, and pigment and total suspended matter concentrations). Sample data are also presented and discussed to illustrate the typical features of the CoASTS measurement site, in support of the suitability of the CoASTS data set for bio-optical modeling and ocean color calibration and validation.
Note: Methodology for the analysis of Bluetooth gateways in an implemented scatternet.
Etxaniz, J; Monje, P M; Aranguren, G
2014-03-01
This Note introduces a novel methodology to analyze the time performance of Bluetooth gateways in multi-hop networks, known as scatternets. The methodology focuses on distinguishing between the processing time and the time each communication between nodes takes along an implemented scatternet. The technique is not only valid for Bluetooth networks but also for other wireless networks that offer access to their middleware, so that beacons can be included in the operation of the nodes. We show in this Note the results of tests carried out on a Bluetooth scatternet to highlight the reliability and effectiveness of the methodology. The results also validate the technique, showing convergence when the beacon times are subtracted from the delay measurements.
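The core idea, separating per-node processing time from per-link transmission time using beacon timestamps, can be sketched as below. The beacon field names and the assumption of a common time reference across nodes are ours, not the Note's.

```python
def decompose_delay(beacons):
    """Split an end-to-end multi-hop delay into per-link and per-node
    components from beacon timestamps (ms) recorded at each node's
    middleware. Assumes all timestamps share one time reference."""
    hops = []
    for prev, cur in zip(beacons, beacons[1:]):
        hops.append({
            "node": cur["node"],
            "link_ms": cur["arrival"] - prev["departure"],       # time on air
            "processing_ms": cur["departure"] - cur["arrival"],  # time in node
        })
    return hops

trace = [{"node": "A", "arrival": 0.0, "departure": 1.2},
         {"node": "GW", "arrival": 14.8, "departure": 21.5},   # gateway hop
         {"node": "B", "arrival": 35.1, "departure": 35.9}]
for hop in decompose_delay(trace):
    print(hop)
```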
Evaluation of stormwater harvesting sites using multi criteria decision methodology
NASA Astrophysics Data System (ADS)
Inamdar, P. M.; Sharma, A. K.; Cook, Stephen; Perera, B. J. C.
2018-07-01
Selection of suitable urban stormwater harvesting (SWH) sites and associated project planning are often complex due to spatial, temporal, economic, environmental and social factors, and various other related variables. This paper develops a comprehensive methodological framework for evaluating stormwater harvesting sites in urban areas using Multi Criteria Decision Analysis (MCDA). In the first phase, the framework selects potential SWH sites using spatial characteristics in a GIS environment. In the second phase, an MCDA methodology is used for evaluating and ranking SWH sites in a multi-objective, multi-stakeholder environment. The paper briefly describes the first phase of the framework and focuses chiefly on the second. The application of the methodology is demonstrated in a case study of the local government area of the City of Melbourne (CoM), Australia, for the benefit of the wider community of water professionals engaged in this area. Nine performance measures (PMs) were identified to characterise the objectives and system performance of the eight alternative SWH sites used to demonstrate the methodology. To reflect stakeholder interests, four stakeholder participant groups were identified: water authorities (WA), academics (AC), consultants (CS), and councils (CL). The decision analysis broadly consisted of deriving PROMETHEE II rankings of the eight alternative SWH sites in the CoM case study under two distinct group decision-making scenarios. The major innovation of this work is the development and application of a comprehensive methodological framework that assists in the selection of potential SWH sites and facilitates their ranking in a multi-objective, multi-stakeholder environment. It is expected that the proposed methodology will provide water professionals and managers with better knowledge that reduces the subjectivity in the selection and evaluation of SWH sites.
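A minimal PROMETHEE II sketch is given below for concreteness. It uses the 'usual' (step) preference function for brevity; the study's actual preference functions, thresholds, performance measures, and stakeholder weights are not reproduced.

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    """Net outranking flows for m alternatives on n criteria using the
    'usual' preference function (prefer strictly better values).
    X: (m, n) performance table; weights sum to 1; maximize: bool per
    criterion. Higher net flow = better rank."""
    m = X.shape[0]
    Y = np.where(maximize, X, -X)          # orient all criteria upward
    phi = np.zeros(m)
    for j, w in enumerate(weights):
        diff = Y[:, j][:, None] - Y[:, j][None, :]
        P = (diff > 0).astype(float)       # pairwise preference degrees
        phi += w * (P.sum(axis=1) - P.sum(axis=0)) / (m - 1)
    return phi

# Placeholder table: 4 sites x 3 performance measures
X = np.array([[8.0, 120.0, 0.6],
              [6.5,  90.0, 0.8],
              [7.2, 150.0, 0.7],
              [9.1, 100.0, 0.5]])
phi = promethee_ii(X, [0.5, 0.3, 0.2], np.array([True, False, True]))
print(np.argsort(-phi))                    # site indices, best first
```

Group scenarios of the kind the study describes can be handled by re-running the ranking with each stakeholder group's weight vector and comparing the resulting orders.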
Stream habitat analysis using the instream flow incremental methodology
Bovee, Ken D.; Lamb, Berton L.; Bartholow, John M.; Stalnaker, Clair B.; Taylor, Jonathan; Henriksen, Jim
1998-01-01
This document describes the Instream Flow Incremental Methodology (IFIM) in its entirety. It also serves as a comprehensive introductory textbook on IFIM for training courses, as it contains the most complete description of IFIM in existence today. It is further intended as an official published guide to IFIM, countering the misconceptions about the methodology that have pervaded the professional literature since the mid-1980s by describing IFIM as it is envisioned by its developers. The document is aimed both at the decision makers in the management and allocation of natural resources, for whom it provides an overview, and at those who design and implement studies to inform those decision makers. It provides enough background on model concepts, data requirements, calibration techniques, and quality assurance to help the technical user design and implement a cost-effective application of IFIM that will provide policy-relevant information. Individual chapters cover the basic organization of IFIM and the procedural sequence of applying it, from problem identification through study planning and implementation to problem resolution.
Pareto frontier analyses based decision making tool for transportation of hazardous waste.
Das, Arup; Mazumder, T N; Gupta, A K
2012-08-15
Transportation of hazardous wastes through a region poses an immense threat to development along its road network. The risk to populations exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation that takes account of the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all concerned stakeholders. The objective of this study is to design a methodology for routing hazardous wastes between generating units and disposal facilities through a capacity-constrained network. The proposed methodology uses an a posteriori, multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of the transportation of hazardous wastes in the Kolkata Metropolitan Area is provided to elucidate the methodology. Copyright © 2012 Elsevier B.V. All rights reserved.
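The a posteriori, non-dominated-solutions idea can be illustrated with a tiny Pareto filter over candidate routes; the objective triple (population risk, cost, travel time) and all values are illustrative only.

```python
def pareto_front(routes):
    """Keep the non-dominated routes. Each route is (label, objectives),
    all objectives to be minimised. O(n^2), fine for small route sets."""
    front = []
    for name, obj in routes:
        dominated = any(
            all(b <= a for a, b in zip(obj, other)) and
            any(b < a for a, b in zip(obj, other))
            for other_name, other in routes if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

# (route, (population risk, cost, travel time)) -- illustrative values
candidates = [("R1", (0.30, 120, 55)), ("R2", (0.25, 150, 60)),
              ("R3", (0.40, 110, 50)), ("R4", (0.35, 160, 70))]
print(pareto_front(candidates))   # ['R1', 'R2', 'R3']; R4 is dominated by R1
```

The decision makers then choose among the surviving routes according to their own trade-offs, which is exactly what distinguishes the a posteriori approach from fixing objective weights in advance.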
Toward building a comprehensive data mart
NASA Astrophysics Data System (ADS)
Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.
2004-04-01
To uncover new relationships or patterns, one must first build a corpus of data, or what some call a data mart. How can we make sure we have collected all the pertinent data and have maximized coverage? There are hundreds of search engines available for use on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach to developing a methodology for comparing search engines. Before presenting this methodology, we first provide our motivation for seeking increased coverage. We next investigate how ground truth can be obtained and what insight it provides into the Internet and into search engine capabilities. We conclude by developing a methodology for comparing a number of the search engines and showing how overall coverage, and thus a more comprehensive data mart, can be achieved.
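One simple way to operationalise such a comparison: score each engine's recall against a ground-truth set of relevant URLs and check how much pooling the engines adds. The data structures here are illustrative, not the paper's.

```python
def coverage_metrics(engine_results, ground_truth):
    """Per-engine coverage (recall against a ground-truth URL set) and
    the combined coverage of all engines pooled together."""
    truth = set(ground_truth)
    per_engine = {name: len(set(urls) & truth) / len(truth)
                  for name, urls in engine_results.items()}
    pooled = set().union(*engine_results.values())
    return per_engine, len(pooled & truth) / len(truth)

results = {"engine_a": ["u1", "u2", "u5"], "engine_b": ["u2", "u3"]}
per_engine, combined = coverage_metrics(results, ["u1", "u2", "u3", "u4"])
print(per_engine, combined)   # pooled coverage beats either engine alone
```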
Supino, Phyllis G; Borer, Jeffrey S
2007-05-01
Due to inadequate preparation, many medical professionals are unable to critically evaluate published research articles or properly design, execute and present their own research. To increase exposure among physicians, medical students, and allied health professionals to the diverse methodological issues involved in performing research, a comprehensive course on research methodology was designed for physicians and other members of an academic medical community, and has been successfully implemented since 1991. The role of the study hypothesis is highlighted, and interactive pedagogical techniques are employed to promote audience engagement. Participants complete an annual evaluation to assess course quality and perceived outcomes; outcomes also are assessed qualitatively by faculty. More than 500 physicians and other professionals have participated. Ratings have been consistently high. The topics deemed most valuable are investigational planning, hypothesis construction and study designs. An enhanced capacity to define hypotheses and to apply methodological concepts in the criticism of scientific papers and the development of protocols and manuscripts has been observed. Participants and faculty believe the course improves critical appraisal skills and the ability to conduct research. Our experience shows it is feasible to accomplish these objectives, with a high level of satisfaction, through a didactic program targeted to the general academic community.
Comprehensive analysis of soil nitrogen removal by catch crops based on growth and water use
NASA Astrophysics Data System (ADS)
Yasutake, D.; Kondo, K.; Yamane, S.; Kitano, M.; Mori, M.; Fujiwara, T.
2016-07-01
A new methodology for comprehensive analysis of the characteristics of nitrogen (N) removal from greenhouse soil by catch crops is proposed in relation to crop growth and water use. N removal is expressed as the product of five parameters: net assimilation rate, specific leaf area, shoot dry weight, water use efficiency for N removal, and water requirement for growth (see the formula below). The methodology was applied to data from a greenhouse experiment in which corn was cultivated under three plant densities; we analyzed the effect of plant density and examined the effectiveness of the methodology. Higher plant densities were advantageous not only for total N removal but also for water use efficiency in N removal and growth, because of the larger specific leaf area and shoot dry weight and the decreased soil evaporation. Moreover, significant positive or negative linear relationships were found between all five parameters and N removal, which should improve the understanding of the N removal mechanisms and the interactions among their components. The analytical methodology can contribute to identifying the optimum plant density for practical catch crop cultivation according to field conditions (available water amount, soil N quantity to be removed).
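In symbols (the notation is ours, not the authors'), the stated five-parameter decomposition reads:

```latex
\[
  N_{\mathrm{rem}} \;=\; \mathrm{NAR} \times \mathrm{SLA} \times W_{s}
  \times \mathrm{WUE}_{N} \times \mathrm{WR},
\]
% NAR: net assimilation rate, SLA: specific leaf area,
% W_s: shoot dry weight, WUE_N: water use efficiency for N removal,
% WR: water requirement for growth.
```

Writing the removal as a product is what lets each factor's response to plant density be examined separately and then recombined.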
Leighton, Angela; Weinborn, Michael; Maybery, Murray
2014-10-01
Bigler (2012) and Larrabee (2012) recently addressed the state of the science surrounding performance validity tests (PVTs) in a dialogue highlighting evidence for the valid and increased use of PVTs, but also for unresolved problems. Specifically, Bigler criticized the lack of guidance from neurocognitive processing theory in the PVT literature. For example, individual PVTs have applied the simultaneous forced-choice methodology using a variety of test characteristics (e.g., word vs. picture stimuli) with known neurocognitive processing implications (e.g., the "picture superiority effect"). However, the influence of such variations on classification accuracy has been inadequately evaluated, particularly among cognitively impaired individuals. The current review places the PVT literature in the context of neurocognitive processing theory, and identifies potential methodological factors to account for the significant variability we identified in classification accuracy across current PVTs. We subsequently evaluated the utility of a well-known cognitive manipulation to provide a Clinical Analogue Methodology (CAM), that is, to alter the PVT performance of healthy individuals to be similar to that of a cognitively impaired group. Initial support was found, suggesting the CAM may be useful alongside other approaches (analogue malingering methodology) for the systematic evaluation of PVTs, particularly the influence of specific neurocognitive processing components on performance.
Translations of Developmental Screening Instruments: An Evidence Map of Available Research.
El-Behadli, Ana F; Neger, Emily N; Perrin, Ellen C; Sheldrick, R Christopher
2015-01-01
Children whose parents do not speak English experience significant disparities in the identification of developmental delays and disorders; however, little is known about the availability and validity of translations of developmental screeners. The goal was to create a map of the scientific evidence regarding translations of the 9 American Academy of Pediatrics-recommended screening instruments into languages other than English. The authors conducted a systematic search of Medline and PsycINFO, references of identified articles, publishers' Web sites, and official manuals. Through evidence mapping, a new methodology supported by AHRQ and the Cochrane Collaboration, the authors documented the extent and distribution of published evidence supporting translations of developmental screeners. Data extraction focused on 3 steps of the translation and validation process: (1) translation methods used, (2) collection of normative data in the target language, and (3) evidence for reliability and validity. The authors identified 63 distinct translations among the 9 screeners, of which 44 had supporting evidence published in peer-reviewed sources. Of the 63 translations, 35 had at least some published evidence regarding the translation methods used, 28 regarding normative data, and 32 regarding reliability and/or construct validity. One-third of the translations found were of the Denver Developmental Screening Test. The specific methods used varied greatly across screeners, as did the level of detail with which results were reported. Few developmental screeners have been translated into many languages. The authors' evidence map demonstrates considerable variation in both the amount and the comprehensiveness of information available about translated instruments. Informal guidelines exist for conducting translations of psychometric instruments but not for documenting this process. The authors propose that uniform guidelines be established for reporting translation research in peer-reviewed journals, similar to those for clinical trials and studies of diagnostic accuracy.
Bhinder, Bhavneet; Antczak, Christophe; Ramirez, Christina N.; Shum, David; Liu-Sullivan, Nancy; Radu, Constantin; Frattini, Mark G.
2013-01-01
RNA interference technology is becoming an integral tool for target discovery and validation. With perhaps the exception of only a few studies published using arrayed short hairpin RNA (shRNA) libraries, most reports have been based on pooled siRNA or shRNA libraries, or on arrayed siRNA libraries. For this purpose, we have developed a workflow and performed an arrayed genome-scale shRNA lethality screen against the TRC1 library in HeLa cells. The resulting targets should be a valuable resource of candidates toward a better understanding of cellular homeostasis. Using a high-stringency hit nomination method encompassing criteria of at least three active hairpins per gene and filtering for potential off-target effects (OTEs), referred to as the Bhinder–Djaballah analysis method, we identified 1,252 lethal and 6 rescuer gene candidates, knockdown of which resulted in severe cell death or enhanced growth, respectively. Cross-referencing individual hairpins with the TRC1 validated clone database, 239 of the 1,252 candidates were deemed independently validated with at least three validated clones. Through our systematic OTE analysis, we identified 31 microRNAs (miRNAs) in lethal and 2 in rescuer genes, all having a seed heptamer mimic in the corresponding shRNA hairpins, the likely cause of the OTEs observed in our screen, perhaps unraveling a previously unknown, plausible essentiality of these miRNAs in cellular viability. Taken together, we report on a methodology for performing large-scale arrayed shRNA screens, a comprehensive analysis method to nominate high-confidence hits, and a performance assessment of the TRC1 library highlighting the intracellular inefficiencies of shRNA processing in general. PMID:23198867
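In the spirit of the hit-nomination criteria described (at least three active hairpins per gene, with seed-mediated off-target filtering), a toy filter might look like the following. The data layout and the seed-lookup convention are assumptions, not the published Bhinder–Djaballah implementation.

```python
def nominate_hits(screen, mirna_seeds, min_active=3):
    """screen: gene -> list of hairpins, each a dict with an 'active'
    flag (passed the lethality threshold) and its 'seed7' heptamer.
    Hairpins whose seed matches a known miRNA seed are discarded as
    likely off-target; genes keeping >= min_active active hairpins
    are nominated."""
    hits = []
    for gene, hairpins in screen.items():
        clean = [hp for hp in hairpins if hp["seed7"] not in mirna_seeds]
        if sum(hp["active"] for hp in clean) >= min_active:
            hits.append(gene)
    return hits

screen = {"GENE1": [{"active": True, "seed7": "ACGUACG"},
                    {"active": True, "seed7": "GGGUUUA"},
                    {"active": True, "seed7": "CAUGCAU"}],
          "GENE2": [{"active": True, "seed7": "AAAAAAA"},   # miRNA-like seed
                    {"active": True, "seed7": "ACGGGGU"}]}
print(nominate_hits(screen, mirna_seeds={"AAAAAAA"}))       # ['GENE1']
```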
Warkentin, Sarah; Mais, Laís Amaral; Latorre, Maria do Rosário Dias de Oliveira; Carnell, Susan; Taddei, José Augusto de Aguiar Carrazedo
2016-07-19
Recent national surveys in Brazil have demonstrated a decrease in the consumption of traditional foods and a parallel increase in the consumption of ultra-processed foods, which has contributed to a rise in obesity prevalence in all age groups. Environmental factors, especially familial factors, have a strong influence on the food intake of preschool children, and this has led to the development of psychometric scales to measure parents' feeding practices. The aim of this study was to test the validity of a translated and adapted Comprehensive Feeding Practices Questionnaire (CFPQ) in a sample of Brazilian preschool-aged children enrolled in private schools. A transcultural adaptation process was performed to develop a modified questionnaire (43 items). After piloting, the questionnaire was sent to parents, along with additional questions about family characteristics. Test-retest reliability was assessed in one of the schools. Factor analysis with oblique rotation was performed. Internal reliability was tested using Cronbach's alpha and correlations between factors; discriminant validity was assessed using marker variables of the child's food intake; and convergent validity was assessed via correlations with parental perceptions of responsibility for feeding and concern about the child's weight. The final sample consisted of 402 preschool children. Factor analysis resulted in a final questionnaire of 43 items distributed over 6 factors. Cronbach's alpha values were adequate (0.74 to 0.88), between-factor correlations were low, and discriminant and convergent validity were acceptable. The modified CFPQ demonstrated satisfactory internal reliability in this urban Brazilian sample. Scale validation within different cultures is essential for a more comprehensive understanding of parental feeding practices for preschoolers.
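For reference, the internal-consistency statistic reported above can be computed as follows; the score matrix is a placeholder, not study data.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n respondents x k items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]])  # placeholder
print(round(cronbach_alpha(scores), 2))
```

In a multi-factor instrument like the CFPQ, alpha is computed per factor over that factor's items rather than over the whole questionnaire.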
Modeling mania in preclinical settings: a comprehensive review
Sharma, Ajaykumar N.; Fries, Gabriel R.; Galvez, Juan F.; Valvassori, Samira S.; Soares, Jair C.; Carvalho, André F.; Quevedo, Joao
2015-01-01
The current pathophysiological understanding of mechanisms leading to onset and progression of bipolar manic episodes remains limited. At the same time, available animal models for mania have limited face, construct, and predictive validities. Additionally, these models fail to encompass recent pathophysiological frameworks of bipolar disorder (BD), e.g. neuroprogression. Therefore, there is a need to search for novel preclinical models for mania that could comprehensively address these limitations. Herein we review the history, validity, and caveats of currently available animal models for mania. We also review new genetic models for mania, namely knockout mice for genes involved in neurotransmission, synapse formation, and intracellular signaling pathways. Furthermore, we review recent trends in preclinical models for mania that may aid in the comprehension of mechanisms underlying the neuroprogressive and recurring nature of BD. In conclusion, the validity of animal models for mania remains limited. Nevertheless, novel (e.g. genetic) animal models as well as adaptation of existing paradigms hold promise. PMID:26545487
FRIEND, MARGARET; KEPLINGER, MELANIE
2017-01-01
Early language comprehension may be one of the most important predictors of developmental risk. The need for performance-based assessment is predicated on limitations identified in the exclusive use of parent report and on the need for a performance measure with which to assess the convergent validity of parent report of comprehension. Child performance data require the development of procedures to facilitate infant attention and compliance. Forty infants (20 at 1;4 and 20 at 1;8) acquiring English completed a standard picture book task and the same task was administered on a touch-sensitive screen. The computerized task significantly improved task attention, compliance and performance. Reliability was high, indicating that infants were not responding randomly. Convergent validity with parent report and 4-month stability was substantial. Preliminary data extending this approach to Mexican-Spanish are presented. Results are discussed in terms of the promise of this technique for clinical and research settings and the potential influences of cultural factors on performance. PMID:18300430
Initial Teacher Licensure Testing in Tennessee: Test Validation.
ERIC Educational Resources Information Center
Bowman, Harry L.; Petry, John R.
In 1988 a study was conducted to determine the validity of candidate teacher licensure examinations for use in Tennessee under the 1984 Comprehensive Education Reform Act. The Department of Education examined 11 previously unvalidated or extensively revised tests for certification and made recommendations…
Measuring Standards in Primary English: The Validity of PIRLS--A Response to Mary Hilton
ERIC Educational Resources Information Center
Whetton, Chris; Twist, Liz; Sainsbury, Marian
2007-01-01
Hilton (2006) criticises the PIRLS (Progress in International Reading Literacy Study) tests and the survey conduct, raising questions about the validity of international surveys of reading. Her criticisms fall into four broad areas: cultural validity, methodological issues, construct validity and the survey in England. However, her criticisms are…
Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI
ERIC Educational Resources Information Center
Forer, Barry; Zumbo, Bruno D.
2011-01-01
The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…
Somatic Sensitivity and Reflexivity as Validity Tools in Qualitative Research
ERIC Educational Resources Information Center
Green, Jill
2015-01-01
Validity is a key concept in qualitative educational research. Yet, it is often not addressed in methodological writing about dance. This essay explores validity in a postmodern world of diverse approaches to scholarship, by looking at the changing face of validity in educational qualitative research and at how new understandings of the concept…
Schneider, Christoph; Hanakam, Florian; Wiewelhove, Thimo; Döweling, Alexander; Kellmann, Michael; Meyer, Tim; Pfeiffer, Mark; Ferrauti, Alexander
2018-01-01
A comprehensive monitoring of fitness, fatigue, and performance is crucial for understanding an athlete's individual responses to training to optimize the scheduling of training and recovery strategies. Resting and exercise-related heart rate measures have received growing interest in recent decades and are considered potentially useful within multivariate response monitoring, as they provide non-invasive and time-efficient insights into the status of the autonomic nervous system (ANS) and aerobic fitness. In team sports, the practical implementation of athlete monitoring systems poses a particular challenge due to the complex and multidimensional structure of game demands and player and team performance, as well as logistic reasons, such as the typically large number of players and busy training and competition schedules. In this regard, exercise-related heart rate measures are likely the most applicable markers, as they can be routinely assessed during warm-ups using short (3–5 min) submaximal exercise protocols for an entire squad with common chest strap-based team monitoring devices. However, a comprehensive and meaningful monitoring of the training process requires the accurate separation of various types of responses, such as strain, recovery, and adaptation, which may all affect heart rate measures. Therefore, additional information on the training context (such as the training phase, training load, and intensity distribution) combined with multivariate analysis, which includes markers of (perceived) wellness and fatigue, should be considered when interpreting changes in heart rate indices. The aim of this article is to outline current limitations of heart rate monitoring, discuss methodological considerations of univariate and multivariate approaches, illustrate the influence of different analytical concepts on assessing meaningful changes in heart rate responses, and provide case examples for contextualizing heart rate measures using simple heuristics. To overcome current knowledge deficits and methodological inconsistencies, future investigations should systematically evaluate the validity and usefulness of the various approaches available to guide and improve the implementation of decision-support systems in (team) sports practice. PMID:29904351
ERIC Educational Resources Information Center
Smith, Karen; And Others
Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…
Chen, Xin-Lin; Zhong, Liang-Huan; Wen, Yi; Liu, Tian-Wen; Li, Xiao-Ying; Hou, Zheng-Kun; Hu, Yue; Mo, Chuan-Wei; Liu, Feng-Bin
2017-09-15
This review aims to critically appraise and compare the measurement properties of inflammatory bowel disease (IBD)-specific health-related quality of life instruments. Medline, EMBASE and ISI Web of Knowledge were searched from their inception to May 2016. IBD-specific instruments for patients with Crohn's disease, ulcerative colitis or IBD were included. The basic characteristics and domains of the instruments were collected, and the methodological quality of the studies of measurement properties and the measurement properties themselves were assessed. Fifteen IBD-specific instruments were identified, comprising twelve instruments for adult IBD patients and three for paediatric IBD patients; all were developed in North American and European countries. The following common domains were identified: IBD-related symptoms and the physical, emotional and social domains. The methodological quality was satisfactory for content validity; fair for internal consistency, reliability, structural validity, hypotheses testing and criterion validity; and poor for measurement error, cross-cultural validity and responsiveness. For adult IBD patients, the IBDQ-32 and its short version (SIBDQ) had good measurement properties and were the most widely used worldwide. For paediatric IBD patients, the IMPACT-III had good measurement properties and the most translated versions. Methodological quality needs to improve in most respects, especially measurement error, cross-cultural validity and responsiveness. The IBDQ-32 was the most widely used instrument with good reliability and validity, followed by the SIBDQ and IMPACT-III. Further validation studies are necessary to support the use of other instruments.
Windsor, B; Popovich, I; Jordan, V; Showell, M; Shea, B; Farquhar, C
2012-12-01
Are there differences in the methodological quality of Cochrane systematic reviews (CRs) and non-Cochrane systematic reviews (NCRs) of assisted reproductive technologies? CRs on assisted reproduction are of higher methodological quality than similar reviews published in other journals. The quality of systematic reviews varies. This was a cross-sectional study of 30 CR and 30 NCR systematic reviews that were randomly selected from the eligible reviews identified from a literature search for the years 2007-2011. We extracted data on the reporting and methodological characteristics of the included systematic reviews. We assessed the methodological quality of the reviews using the 11-domain Measurement Tool to Assess the Methodological Quality of Systematic Reviews (AMSTAR) tool and subsequently compared CR and NCR systematic reviews. The AMSTAR quality assessment found that CRs were superior to NCRs. For 10 of 11 AMSTAR domains, the requirements were met in >50% of CRs, but only 4 of 11 domains showed requirements being met in >50% of NCRs. The strengths of CRs are the a priori study design, comprehensive literature search, explicit lists of included and excluded studies and assessments of internal validity. Significant failings in the CRs were found in duplicate study selection and data extraction (67% meeting requirements), assessment for publication bias (53% meeting requirements) and reporting of conflicts of interest (47% meeting requirements). NCRs were more likely to contain methodological weaknesses as the majority of the domains showed <40% of reviews meeting requirements, e.g. a priori study design (17%), duplicate study selection and data extraction (17%), assessment of study quality (27%), study quality in the formulation of conclusions (23%) and reporting of conflict of interests (10%). The AMSTAR assessment can only judge what is reported by authors. Although two of the five authors are involved in the production of CRs, the risk of bias was reduced by not involving these authors in the assessment of the systematic review quality. Not all systematic reviews are equal. The reader needs to consider the quality of the systematic review when they consider the results and the conclusions of a systematic review. There are no conflicts with any commercial organization. Funding was provided for the students by the summer studentship programme of the Faculty of Medical and Health Sciences of the University of Auckland.
Measurement properties of tools measuring mental health knowledge: a systematic review.
Wei, Yifeng; McGrath, Patrick J; Hayden, Jill; Kutcher, Stan
2016-08-23
Mental health literacy has received great attention recently to improve mental health knowledge, decrease stigma and enhance help-seeking behaviors. We conducted a systematic review to critically appraise the qualities of studies evaluating the measurement properties of mental health knowledge tools and the quality of included measurement properties. We searched PubMed, PsycINFO, EMBASE, CINAHL, the Cochrane Library, and ERIC for studies addressing psychometrics of mental health knowledge tools and published in English. We applied the COSMIN checklist to assess the methodological quality of each study as "excellent", "good", "fair", or "indeterminate". We ranked the level of evidence of the overall quality of each measurement property across studies as "strong", "moderate", "limited", "conflicting", or "unknown". We identified 16 mental health knowledge tools in 17 studies, addressing reliability, validity, responsiveness or measurement errors. The methodological quality of included studies ranged from "poor" to "excellent" including 6 studies addressing the content validity, internal consistency or structural validity demonstrating "excellent" quality. We found strong evidence of the content validity or internal consistency of 6 tools; moderate evidence of the internal consistency, the content validity or the reliability of 8 tools; and limited evidence of the reliability, the structural validity, the criterion validity, or the construct validity of 12 tools. Both the methodological qualities of included studies and the overall evidence of measurement properties are mixed. Based on the current evidence, we recommend that researchers consider using tools with measurement properties of strong or moderate evidence that also reached the threshold for positive ratings according to COSMIN checklist.
Aldekhayel, Salah A; Alselaim, Nahar A; Magzoub, Mohi Eldin; Al-Qattan, Mohammad M; Al-Namlah, Abdullah M; Tamim, Hani; Al-Khayal, Abdullah; Al-Habdan, Sultan I; Zamakhshary, Mohammed F
2012-10-24
Script Concordance Test (SCT) is a new assessment tool that reliably assesses clinical reasoning skills. Previous descriptions of developing SCT question banks have been merely subjective. This study addresses two gaps in the literature: 1) conducting the first phase of a multistep validation process of SCT in Plastic Surgery, and 2) providing an objective methodology to construct a question bank based on SCT. After developing a test blueprint, 52 test items were written. Five validation questions were developed and a validation survey was established online. Seven reviewers were asked to answer this survey; they were recruited from two countries, Saudi Arabia and Canada, to improve the test's external validity. Their ratings were transformed into percentages. The analysis compared reviewers' ratings in terms of correlations, ranges, means, medians, and overall scores. Overall scores of reviewers' ratings were between 76% and 95% (mean 86% ± 5). We found poor correlations between reviewers (Pearson's: +0.38 to -0.22). For individual validation questions, the ranges of ratings across reviewers spanned 0 to 4 points (on a 1-5 scale); the means and medians of these ranges were computed for each test item (mean: 0.8 to 2.4; median: 1 to 3). A 27-item subset of test items was generated based on a set of inclusion and exclusion criteria. This study proposes an objective methodology for the validation of an SCT question bank, analyzing the validation survey from all angles, i.e., reviewers, validation questions, and test items, and finally generating a subset of test items based on a set of criteria.
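A small sketch of the survey analysis described (per-reviewer percentage scores, pairwise Pearson correlations, and per-item medians and ranges). The data layout is illustrative, and statistics.correlation requires Python 3.10 or later.

```python
import statistics as st
from itertools import combinations

def summarize_ratings(ratings):
    """ratings[reviewer][item] -> score on a 1-5 scale."""
    reviewers = list(ratings)
    items = list(next(iter(ratings.values())))
    pct = {r: 100 * sum(ratings[r].values()) / (5 * len(items))
           for r in reviewers}                       # overall % per reviewer
    corr = {(a, b): st.correlation([ratings[a][i] for i in items],
                                   [ratings[b][i] for i in items])
            for a, b in combinations(reviewers, 2)}  # pairwise agreement
    per_item = {i: {"median": st.median(ratings[r][i] for r in reviewers),
                    "range": max(ratings[r][i] for r in reviewers) -
                             min(ratings[r][i] for r in reviewers)}
                for i in items}                      # per-item spread
    return pct, corr, per_item

ratings = {"rev1": {"q1": 4, "q2": 5}, "rev2": {"q1": 3, "q2": 5},
           "rev3": {"q1": 5, "q2": 4}}
print(summarize_ratings(ratings))
```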
A comprehensive framework for the assessment of new end uses in recycled water schemes.
Chen, Zhuo; Ngo, Huu Hao; Guo, Wenshan; Lim, Richard; Wang, Xiaochang C; O'Halloran, Kelly; Listowski, Andrzej; Corby, Nigel; Miechel, Clayton
2014-02-01
Nowadays, recycled water provides sufficient flexibility to satisfy short-term freshwater needs and increase the reliability of long-term water supplies in many water-scarce areas, making it an essential component of integrated water resources management. However, the current applications of recycled water are still quite limited, being mainly associated with non-potable purposes such as irrigation, industrial uses, toilet flushing and car washing. There is a large potential to exploit and develop new end uses of recycled water in both urban and rural areas, which could contribute greatly to freshwater savings, wastewater reduction and water sustainability. Consequently, this paper identifies the potential of three new end uses of recycled water in the future water-use market: household laundry, livestock feeding and servicing, and swimming pools. To validate the strengths of these new applications, a conceptual decision-analytic framework is proposed. This framework can facilitate the selection of management strategies and thereafter provide guidance on future end-use studies within the larger context of the community, processes, and models in decision-making. Moreover, as complex evaluation criteria were selected and taken into account to narrow down the multiple management alternatives, the methodology adds transparency, objectivity and comprehensiveness to the assessment, while remaining flexible enough to adapt to the particular circumstances of each case under study. © 2013.
ERIC Educational Resources Information Center
Amendum, Steven J.; Conradi, Kristin; Hiebert, Elfrieda
2018-01-01
Prompted by the advent of new standards for increased text complexity in elementary classrooms in the USA, the current integrative review investigates the relationships between the level of text difficulty and elementary students' reading fluency and reading comprehension. After application of content and methodological criteria, a total of 26…
Does mood influence text processing and comprehension? Evidence from an eye-movement study.
Scrimin, Sara; Mason, Lucia
2015-09-01
Previous research has indicated that mood influences cognitive processes. However, there are scarce data regarding the link between everyday emotional states and readers' text processing and comprehension. We aim to extend current research on the effects of mood induction on science text processing and comprehension, using eye-tracking methodology. We investigated whether positive, negative, and neutral induced moods influence online processing, as revealed by indices of visual behaviour during reading, and offline text comprehension, as revealed by post-test questions. We were also interested in the link between text processing and comprehension. Seventy-eight undergraduate students were randomly assigned to three mood-induction conditions. Mood was induced by watching a video clip; students were then asked to read a scientific text while their eye movements were registered. Pre- and post-reading knowledge was assessed through open-ended questions. Experimentally induced moods led readers to process an expository text differently. Overall, students in a positive mood spent significantly longer processing the text than students in the negative and neutral moods. Eye-movement patterns indicated more effective processing, reflected in a greater proportion of look-back fixation time, in positive-mood compared with negative-mood readers. Students in a positive mood also comprehended the text better, learning more factual knowledge, than students in the negative group. Only for the positive-mood readers did more purposeful second-pass reading positively predict text comprehension. New insights are provided into the effects of normal mood variations on students' text processing and comprehension through the use of eye-tracking methodology. Important implications for the role of emotional states in educational settings are highlighted. © 2015 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.
This paper reviews a methodology being developed for reporting geothermal resources and project progress. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. This framework will allow the GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress and the public. Standards and reporting codes used in other countries and energy sectors provide guidance to develop the relevant geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by the GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for evaluating and reporting on GTO funding according to resource grade (geological, technical and socio-economic) and project progress. This methodology would allow GTO to target funding, measure impact by monitoring the progression of projects, or assess geological potential of targeted areas for development.
NASA Astrophysics Data System (ADS)
Lee, Jay; Wu, Fangji; Zhao, Wenyu; Ghaffari, Masoud; Liao, Linxia; Siegel, David
2014-01-01
Much research has been conducted in prognostics and health management (PHM), an emerging field in mechanical engineering that is gaining interest from both academia and industry. Most of these efforts have been in the area of machinery PHM, resulting in the development of many algorithms for this particular application. The majority of these algorithms concentrate on applications involving common rotary machinery components, such as bearings and gears. Knowledge of this prior work is a necessity for any future research efforts to be conducted; however, there has not been a comprehensive overview that details previous and on-going efforts in PHM. In addition, a systematic method for developing and deploying a PHM system has yet to be established. Such a method would enable rapid customization and integration of PHM systems for diverse applications. To address these gaps, this paper provides a comprehensive review of the PHM field, followed by an introduction of a systematic PHM design methodology, 5S methodology, for converting data to prognostics information. This methodology includes procedures for identifying critical components, as well as tools for selecting the most appropriate algorithms for specific applications. Visualization tools are presented for displaying prognostics information in an appropriate fashion for quick and accurate decision making. Industrial case studies are included in this paper to show how this methodology can help in the design of an effective PHM system.
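To make the "converting data to prognostics information" idea above concrete, here is a minimal sketch, not the paper's 5S methodology itself: it reduces a raw vibration signal to two classic condition indicators and maps them to a coarse health state. The feature choices and thresholds are hypothetical.

```python
# Illustrative sketch only: a minimal "data to health indicator" step of the
# kind a PHM pipeline performs. Features and limits are hypothetical.
import numpy as np

def health_indicators(vibration: np.ndarray) -> dict:
    """Compute simple condition-monitoring features from a vibration signal."""
    rms = float(np.sqrt(np.mean(vibration**2)))            # overall energy
    kurt = float(np.mean((vibration - vibration.mean())**4)
                 / vibration.std()**4)                     # impulsiveness, a bearing-fault cue
    return {"rms": rms, "kurtosis": kurt}

def health_state(ind: dict, rms_limit=1.5, kurt_limit=4.0) -> str:
    """Map features to a coarse health state using hypothetical limits."""
    if ind["rms"] > rms_limit or ind["kurtosis"] > kurt_limit:
        return "degraded"
    return "healthy"

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)   # stand-in for accelerometer data
print(health_state(health_indicators(signal)))
```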
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Investigation of Effective Material Properties of Stony Meteorites
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Carlozzi, Alex; Bryson, Kathryn
2016-01-01
To assess the threat posed by an asteroid entering Earth's atmosphere, one must predict if, when, and how it fragments during entry. A comprehensive understanding of asteroid material properties is needed to achieve this objective. At present, meteorite material found on Earth provides the only objects from entering asteroids that can be used as representative material and tested in a laboratory setting. Therefore, unit cell models are developed to determine the effective material properties of stony meteorites and in turn deduce the properties of asteroids. The unit cell is a representative volume that accounts for the diverse minerals, porosity, and matrix composition inside a meteorite. The classes under investigation include H-class, L-class, and LL-class chondrites. The effective mechanical properties of the unit cell, such as Young's modulus and Poisson's ratio, are calculated by performing several hundred Monte Carlo simulations. Terrestrial analogs such as basalt and gabbro are being used to validate the unit cell methodology.
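As a hedged illustration of the unit-cell idea, the sketch below Monte-Carlo samples porosity and mineral fractions and averages Voigt and Reuss bounds into an effective Young's modulus. The mineral moduli, sampling ranges, and the crude porosity treatment are invented stand-ins, not the study's values or its actual unit-cell model.

```python
# Hedged Monte-Carlo sketch of an effective-modulus estimate for a porous
# two-mineral mixture. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
E_OL, E_PX = 195e9, 160e9                     # Pa, illustrative mineral moduli

n = 10_000
phi = rng.uniform(0.02, 0.15, n)              # porosity fraction
f_ol = rng.uniform(0.3, 0.7, n) * (1 - phi)   # olivine volume fraction
f_px = (1 - phi) - f_ol                       # pyroxene fills the remaining solid

# Voigt (uniform strain) bound; pores contribute zero stiffness.
E_voigt = f_ol * E_OL + f_px * E_PX
# Reuss (uniform stress) bound over the solid phase, scaled by solidity --
# a crude porosity treatment, since a true Reuss bound with voids is zero.
E_reuss = (1 - phi) ** 2 / (f_ol / E_OL + f_px / E_PX)
E_hill = 0.5 * (E_voigt + E_reuss)            # Voigt-Reuss-Hill average

print(f"effective E ~ {E_hill.mean()/1e9:.0f} +/- {E_hill.std()/1e9:.0f} GPa")
```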
First-of-A-Kind Control Room Modernization Project Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Kenneth David
This project plan describes a comprehensive approach to the design of an end-state concept for a modernized control room for Palo Verde. It describes the collaboration arrangement between the DOE LWRS Program Control Room Modernization Project and the APS Palo Verde Nuclear Generating Station. It further describes the role of other collaborators, including the Institute for Energy Technology (IFE) and the Electric Power Research Institute (EPRI). It combines advanced tools, methodologies, and facilities to enable a science-based approach to the validation of applicable engineering and human factors principles for nuclear plant control rooms. It addresses the required project results and documentation to demonstrate compliance with regulatory requirements. It describes the project tasks that will be conducted in the project, and the deliverable reports that will be developed through these tasks. This project plan will be updated as new tasks are added and as project milestones are completed. It will serve as an ongoing description of the project both for project participants and for industry stakeholders.
Oliveira, Bárbara L; Godinho, Daniela; O'Halloran, Martin; Glavin, Martin; Jones, Edward; Conceição, Raquel C
2018-05-19
Currently, breast cancer often requires invasive biopsies for diagnosis, motivating researchers to design and develop non-invasive and automated diagnosis systems. Recent microwave breast imaging studies have shown how backscattered signals carry relevant information about the shape of a tumour, and tumour shape is often used with current imaging modalities to assess malignancy. This paper presents a comprehensive analysis of microwave breast diagnosis systems which use machine learning to learn characteristics of benign and malignant tumours. The state-of-the-art, the main challenges still to overcome and potential solutions are outlined. Specifically, this work investigates the benefit of signal pre-processing on diagnostic performance, and proposes a new set of extracted features that capture the tumour shape information embedded in a signal. This work also investigates if a relationship exists between the antenna topology in a microwave system and diagnostic performance. Finally, a careful machine learning validation methodology is implemented to guarantee the robustness of the results and the accuracy of performance evaluation.
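One way to picture the "careful machine learning validation methodology" mentioned above is grouped cross-validation, which keeps all signals from one tumour on the same side of every train/test split so that accuracy is not inflated by leakage. The data below are random placeholders rather than microwave signals, and the classifier choice is arbitrary.

```python
# Grouped cross-validation sketch: signals from the same tumour never appear
# in both training and test folds. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_tumours, signals_per_tumour = 40, 8
X = rng.normal(size=(n_tumours * signals_per_tumour, 20))            # extracted features
y = np.repeat(rng.integers(0, 2, n_tumours), signals_per_tumour)     # benign/malignant
groups = np.repeat(np.arange(n_tumours), signals_per_tumour)         # tumour IDs

scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"grouped CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```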
A computational framework to characterize and compare the geometry of coronary networks.
Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A
2017-03-01
This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information of several coronary arteries is presented. Results agree with medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. Data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likeness based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likeness, measured through geometric descriptors, is stronger between siblings compared with non-relative patients. Copyright © 2016 John Wiley & Sons, Ltd.
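A minimal sketch of a geometry-based similarity criterion of the kind described follows, under the assumption (ours, not the paper's) that descriptors are z-scored across the cohort and likeness is scored as negative Euclidean distance; the descriptor names and values are invented.

```python
# Hypothetical geometric-similarity score between patients' artery descriptors.
import numpy as np

cohort = np.array([  # rows: patients; cols: [length_mm, mean_radius_mm, tortuosity]
    [112.0, 1.6, 1.18],
    [ 98.0, 1.4, 1.25],
    [121.0, 1.7, 1.15],
    [101.0, 1.5, 1.22],
])
z = (cohort - cohort.mean(axis=0)) / cohort.std(axis=0)   # normalize each descriptor

def similarity(i: int, j: int) -> float:
    """Higher (closer to 0) means more geometrically alike."""
    return -float(np.linalg.norm(z[i] - z[j]))

print(similarity(0, 2), similarity(0, 1))  # e.g., compare putative siblings
```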
Comprehensive Oculomotor Behavioral Response Assessment (COBRA)
NASA Technical Reports Server (NTRS)
Stone, Leland S. (Inventor); Liston, Dorion B. (Inventor)
2017-01-01
An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially-organized, appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by one without a high degree of scientific sophistication or extensive training.
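The abstract does not spell out how the scalar impairment index is computed; the following is a purely hypothetical sketch of one common construction, averaging direction-corrected z-scores of individual oculometrics against normative values. The norms, measure names, and signs below are invented for illustration.

```python
# Hypothetical scalar impairment index: mean of z-scores, oriented so that
# larger values always mean worse performance. All norms are invented.
import numpy as np

norms = {"latency_ms": (150.0, 20.0),          # (normative mean, sd)
         "pursuit_gain": (0.95, 0.05),
         "direction_noise_deg": (4.0, 1.0)}
signs = {"latency_ms": 1, "pursuit_gain": -1, "direction_noise_deg": 1}

def impairment_index(measures: dict) -> float:
    zs = [signs[k] * (v - norms[k][0]) / norms[k][1] for k, v in measures.items()]
    return float(np.mean(zs))

subject = {"latency_ms": 185.0, "pursuit_gain": 0.88, "direction_noise_deg": 5.5}
print(f"impairment index = {impairment_index(subject):.2f}")  # > 0: worse than norm
```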
On state-of-charge determination for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Li, Zhe; Huang, Jun; Liaw, Bor Yann; Zhang, Jianbo
2017-04-01
Accurate estimation of state-of-charge (SOC) of a battery through its life remains challenging in battery research. Although improved precisions continue to be reported at times, almost all are based on empirical regression methods, while the accuracy is often not properly addressed. Here, a comprehensive review is set to address such issues, from fundamental principles that are supposed to define SOC to methodologies to estimate SOC for practical use. It covers topics from calibration and regression (including modeling methods) to validation in terms of precision and accuracy. At the end, we intend to answer the following questions: 1) Can SOC estimation be self-adaptive without bias? 2) Why is Ah-counting a necessity in almost all battery-model-assisted regression methods? 3) How can a consistent framework of coupling in multi-physics battery models be established? 4) How should statistical methods be employed to analyze the factors that contribute to uncertainty when assessing the accuracy of SOC estimation? We hope, through this proper discussion of the principles, accurate SOC estimation can be widely achieved.
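The Ah-counting baseline referred to in question 2 can be written in a few lines. The sketch below is a generic rectangle-rule coulomb counter, not any specific estimator from the review; the capacity and current values are illustrative.

```python
# Coulomb counting: SOC(t) = SOC(0) - (1/C_nom) * integral of current dt.
# Any practical estimator corrects this drift-prone open-loop integral
# with a battery model or filter.
import numpy as np

def soc_ah_counting(current_a, dt_s, soc0=1.0, capacity_ah=2.5):
    """Rectangle-rule coulomb counting; current_a > 0 means discharge."""
    charge_ah = np.cumsum(current_a) * dt_s / 3600.0
    return soc0 - charge_ah / capacity_ah

t = np.arange(0, 3600, 1.0)          # one hour, 1 s steps
i = np.full_like(t, 1.25)            # constant 1.25 A discharge (0.5C)
soc = soc_ah_counting(i, dt_s=1.0)
print(f"SOC after 1 h at 0.5C: {soc[-1]:.3f}")   # ~0.500
```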
Defining science literacy: A pedagogical approach
NASA Astrophysics Data System (ADS)
Brilakis, Kathryn
A functional knowledge of science is required to capably evaluate the validity of conflicting positions on topics such as fracking, climate change, and the safety of genetically modified food. Scientifically illiterate individuals are at risk of favoring the persuasive arguments of those championing partisan, anti-science agendas. In an effort to enhance the scientific literacy of community college students and equip them with the skill set necessary to make informed decisions, this study generated a pedagogical definition of science literacy using survey methodology and then utilized the definition to construct an accessible, comprehensive, and pragmatic web-based science literacy program. In response to an email solicitation, college and university science educators submitted lists of topics within their specialty they considered essential when assessing science literacy. Their responses were tabulated and those topics cited most frequently by the participating physicists, biologists, chemists and geoscientists were assembled into a definition of science literacy. This definition was translated into a modular, web-based course suitable for both online and classroom learning published as: www.scienceliteracyforum.com.
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
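For readers unfamiliar with the error measure, a toy version of the noise-weighted mismatch is sketched below. It assumes a flat (white) noise spectrum and synthetic sinusoids instead of numerical-relativity waveforms, and it omits the maximization over time and phase shifts performed in practice.

```python
# Toy mismatch 1 - <h1,h2>/sqrt(<h1,h1><h2,h2>) with a white-noise inner
# product (plain dot product). Real analyses weight by the detector PSD.
import numpy as np

def mismatch(h1, h2):
    inner = lambda a, b: float(np.dot(a, b))
    return 1.0 - inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0.0, 1.0, 4096)
h1 = np.sin(2 * np.pi * 30 * t)
h2 = np.sin(2 * np.pi * 30 * t + 0.01)   # small phase error
print(f"mismatch ~ {mismatch(h1, h2):.2e}")   # ~ (0.01)^2 / 2
```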
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open source co-simulation framework "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
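The following is a conceptual sketch, not the FNCS API, of the decoupled co-simulation loop the paper describes: two mock simulators with different time steps exchange boundary values and are advanced to the earliest next event, which is the synchronization role a middleware layer plays.

```python
# Mock co-simulation synchronization loop. Real simulators would integrate
# their dynamics inside advance_to() and publish bus voltages/loads.
class MockSimulator:
    def __init__(self, name, dt):
        self.name, self.dt, self.t = name, dt, 0.0
        self.inbox = None                       # last boundary value received

    def next_time(self):
        return self.t + self.dt

    def advance_to(self, t_next):
        self.t = t_next                         # stand-in for a solver step

    def publish(self):
        return f"{self.name}@{self.t:.3f}"      # stand-in for V or P at the bus

transmission = MockSimulator("T", dt=0.01)      # positive-sequence dynamics
distribution = MockSimulator("D", dt=0.004)     # three-phase distribution

t, t_end = 0.0, 0.02
while t < t_end:
    transmission.inbox = distribution.publish() # exchange boundary data
    distribution.inbox = transmission.publish()
    # Both sides advance to the earliest requested time (lock-step sync).
    t = min(transmission.next_time(), distribution.next_time(), t_end)
    transmission.advance_to(t)
    distribution.advance_to(t)
    print(f"synchronized at t={t:.3f} s")
```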
Focus-on-form instructional methods promote deaf college students' improvement in English grammar.
Berent, Gerald P; Kelly, Ronald R; Aldersley, Stephen; Schmitz, Kathryn L; Khalsa, Baldev Kaur; Panara, John; Keenan, Susan
2007-01-01
Focus-on-form English teaching methods are designed to facilitate second-language learners' noticing of target language input, where "noticing" is an acquisitional prerequisite for the comprehension, processing, and eventual integration of new grammatical knowledge. While primarily designed for teaching hearing second-language learners, many focus-on-form methods lend themselves to visual presentation. This article reports the results of classroom research on the visually based implementation of focus-on-form methods with deaf college students learning English. Two of 3 groups of deaf students received focus-on-form instruction during a 10-week remedial grammar course; a third control group received grammatical instruction that did not involve focus-on-form methods. The 2 experimental groups exhibited significantly greater improvement in English grammatical knowledge relative to the control group. These results validate the efficacy of visually based focus-on-form English instruction for deaf students of English and set the stage for the continual search for innovative and effective English teaching methodologies.
Complexity, Representation and Practice: Case Study as Method and Methodology
ERIC Educational Resources Information Center
Miles, Rebecca
2015-01-01
While case study is considered a common approach to examining specific and particular examples in research disciplines such as law, medicine and psychology, in the social sciences case study is often treated as a lesser, flawed or undemanding methodology which is less valid, reliable or theoretically rigorous than other methodologies. Building on…
DOT National Transportation Integrated Search
2015-02-01
The Maryland State Highway Administration (SHA) has initiated major planning efforts to improve transportation : efficiency, safety, and sustainability on critical highway corridors through its Comprehensive Highway Corridor : (CHC) program. This pro...
Competing Activation during Fantasy Text Comprehension
ERIC Educational Resources Information Center
Creer, Sarah D.; Cook, Anne E.; O'Brien, Edward J.
2018-01-01
During comprehension, readers' general world knowledge and contextual information compete for influence during integration and validation. Fantasy narratives, in which general world knowledge often conflicts with fantastical events, provide a setting to examine this competition. Experiment 1 showed that with sufficient elaboration, contextual…
Reading Comprehension Improvement with Individualized Cognitive Profiles and Metacognition
ERIC Educational Resources Information Center
Allen, Kathleen D.; Hancock, Thomas E.
2008-01-01
This study models improving classroom reading instruction through valid assessment and individualized metacomprehension. Individualized cognitive profiles of Woodcock-Johnson III cognitive abilities correlated with reading comprehension were used during classroom independent reading for judgments of learning, feedback, self-reflection, and…
Information security system quality assessment through the intelligent tools
NASA Astrophysics Data System (ADS)
Trapeznikov, E. V.
2018-04-01
The development of technology has shown the necessity of comprehensive analysis of automated system information security. Analysis of the subject area indicates the relevance of the study. The research objective is to develop a methodology for assessing information security system quality based on intelligent tools. The basis of the methodology is a model that assesses the information security of the information system through a neural network. The paper presents the security assessment model and its algorithm. The results of the practical implementation of the methodology are represented in the form of a software flow diagram. The practical significance of the model being developed is noted in the conclusions.
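As a hedged sketch of the idea, not the paper's model, a small neural network can be trained to score a security configuration from a handful of features. The features, labels, and network size below are synthetic placeholders.

```python
# Neural-network security-quality scoring on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
# Hypothetical features scaled 0..1: e.g., patch latency, audit coverage,
# authentication strength, encryption coverage, monitoring depth.
X = rng.uniform(size=(200, 5))
y = (X @ np.array([0.3, 0.25, 0.2, 0.15, 0.1]) > 0.5).astype(int)  # 1 = adequate

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print("assessed quality:", model.predict(rng.uniform(size=(1, 5))))
```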
Mansutti, Irene; Saiani, Luisa; Grassetti, Luca; Palese, Alvisa
2017-03-01
The clinical learning environment is fundamental to nursing education paths, capable of affecting learning processes and outcomes. Several instruments have been developed in nursing education, aimed at evaluating the quality of the clinical learning environments; however, no systematic review of the psychometric properties and methodological quality of these studies has been performed to date. The aims of the study were: 1) to identify validated instruments evaluating the clinical learning environments in nursing education; 2) to evaluate critically the methodological quality of the psychometric property estimation used; and 3) to compare psychometric properties across the instruments available. A systematic review of the literature (using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines) and an evaluation of the methodological quality of psychometric properties (using the COnsensus-based Standards for the selection of health Measurement INstruments guidelines). The Medline and CINAHL databases were searched. Eligible studies were those that satisfied the following criteria: a) validation studies of instruments evaluating the quality of clinical learning environments; b) in nursing education; c) published in English or Italian; d) before April 2016. The included studies were evaluated for the methodological quality of the psychometric properties measured and then compared in terms of both the psychometric properties and the methodological quality of the processes used. The search strategy yielded a total of 26 studies and eight clinical learning environment evaluation instruments. A variety of psychometric properties have been estimated for each instrument, with differing qualities in the methodology used. Concept and construct validity were poorly assessed in terms of their significance and rarely judged by the target population (nursing students). Some properties were rarely considered (e.g., reliability, measurement error, criterion validity), whereas others were frequently estimated, but using different coefficients and statistical analyses (e.g., internal consistency, structural validity), thus rendering comparison across instruments difficult. Moreover, the methodological quality adopted in the property assessments was poor or fair in most studies, compromising the goodness of the psychometric values estimated. Clinical learning placements represent the key strategies in educating the future nursing workforce: instruments evaluating the quality of the settings, as well as their capacity to promote significant learning, are strongly recommended. Studies estimating psychometric properties, using an increased quality of research methodologies are needed in order to support nursing educators in the process of clinical placements accreditation and quality improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.
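One of the properties the review compares across instruments, internal consistency, is commonly estimated with Cronbach's alpha; a worked example on simulated Likert-style responses follows.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(150, 1))                          # true perception
responses = latent + rng.normal(scale=0.8, size=(150, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")           # ~0.9 here
```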
Library of molecular associations: curating the complex molecular basis of liver diseases.
Buchkremer, Stefan; Hendel, Jasmin; Krupp, Markus; Weinmann, Arndt; Schlamp, Kai; Maass, Thorsten; Staib, Frank; Galle, Peter R; Teufel, Andreas
2010-03-20
Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, for systems biology approaches addressing the interactions of molecular networks, comprehensive but also highly validated data are necessary. We have therefore generated the first comprehensive database of published molecular associations in human liver diseases. It is based on published PubMed abstracts and aims to close the gap between the genome-wide coverage of low validity from microarray data and individual highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations and may therefore be highly trusted. All data were stored in a publicly available database, Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Together, this database is the first available database providing a comprehensive view and analysis options for published molecular associations on multiple liver diseases.
Validity and reliability of four language mapping paradigms.
Wilson, Stephen M; Bautista, Alexa; Yen, Melodie; Lauderdale, Stefanie; Eriksson, Dana K
2017-01-01
Language areas of the brain can be mapped in individual participants with functional MRI. We investigated the validity and reliability of four language mapping paradigms that may be appropriate for individuals with acquired aphasia: sentence completion, picture naming, naturalistic comprehension, and narrative comprehension. Five neurologically normal older adults were scanned on each of the four paradigms on four separate occasions. Validity was assessed in terms of whether activation patterns reflected the known typical organization of language regions, that is, lateralization to the left hemisphere, and involvement of the left inferior frontal gyrus and the left middle and/or superior temporal gyri. Reliability (test-retest reproducibility) was quantified in terms of the Dice coefficient of similarity, which measures overlap of activations across time points. We explored the impact of different absolute and relative voxelwise thresholds, a range of cluster size cutoffs, and limitation of analyses to a priori potential language regions. We found that the narrative comprehension and sentence completion paradigms offered the best balance of validity and reliability. However, even with optimal combinations of analysis parameters, there were many scans on which known features of typical language organization were not demonstrated, and test-retest reproducibility was only moderate for realistic parameter choices. These limitations in terms of validity and reliability may constitute significant limitations for many clinical or research applications that depend on identifying language regions in individual participants.
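The reliability metric named above is easy to state in code: the Dice coefficient of two binary activation maps, 2|A ∩ B| / (|A| + |B|). The maps below are random toys rather than fMRI data.

```python
# Dice coefficient of similarity between two thresholded activation maps.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(3)
scan1 = rng.random((64, 64, 32)) > 0.8   # session 1, ~20% voxels "active"
scan2 = rng.random((64, 64, 32)) > 0.8   # session 2
print(f"Dice overlap: {dice(scan1, scan2):.2f}")   # ~0.2 for independent maps
```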
2017-09-01
…sequencing, paired with new methodologies of intratumoral phylogenetic analyses, will yield pivotal information in elucidating the key genes involved in the evolution of PCa. These results, combined with both clinical and experimental genetic data produced by this study, may empower patients and doctors to make personalized treatment decisions.
ERIC Educational Resources Information Center
Yamamoto, Tosh; Tagami, Masanori; Nakazawa, Minoru
2012-01-01
This paper purports to demonstrate a problem-solving case in the course of developing a methodology in which the quality of the negotiation practicum is maintained or raised without sacrificing the class contact hours for the reading comprehension lessons on which the essence of the negotiation practicum is solely based. In this…
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
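Illustrative only, and not RAVEN's API: the snippet below shows the kind of post-processing the report describes, clustering many sampled scenarios so that analysts can inspect a few representative groups instead of every run.

```python
# Cluster scenario summary features so a large sample set becomes a handful
# of representative groups. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Each row: summary features of one sampled scenario (e.g., peak value, timing).
scenarios = np.vstack([rng.normal(loc, 0.3, size=(100, 2))
                       for loc in ([0, 0], [3, 1], [1, 4])])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scenarios)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} scenarios")
```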
Come and Get It! A Discussion of Family Mealtime Literature and Factors Affecting Obesity Risk
Martin-Biggers, Jennifer; Spaccarotella, Kim; Berhaupt-Glickstein, Amanda; Hongu, Nobuko; Worobey, John; Byrd-Bredbenner, Carol
2014-01-01
The L.E.A.D. (Locate, Evaluate, and Assemble Evidence to Inform Decisions) framework of the Institute of Medicine guided the assembly of transdisciplinary evidence for this comprehensive, updated review of family meal research, conducted with the goal of informing continued work in this area. More frequent family meals are associated with greater consumption of healthy foods in children, adolescents, and adults. Adolescents and children who consume fewer family meals consume more unhealthy food. School-aged children and adolescents who consume more family meals have greater intakes of typically underconsumed nutrients. Increased family meal frequency may decrease risk of overweight or obesity in children and adolescents. Frequent family meals also may protect against eating disorders and negative health behaviors in adolescents and young adults. Psychosocial benefits include improved perceptions of family relationships. However, the benefits of having a family meal can be undermined if the family consumes fast food, watches television at the meal, or has a more chaotic atmosphere. Although these findings are intriguing, inconsistent research methodology and instrumentation and limited use of validation studies make comparisons between studies difficult. Future research should use consistent methodology, examine these associations across a wide range of ages, clarify the effects of the mealtime environment and feeding styles, and develop strategies to help families promote healthful mealtime habits. PMID:24829470
Manzoli, L; Mensorio, M; Di Virgilio, M; Rosetti, A; Angeli, G; Panell, M; Cicchetti, A; Di Stanislao, F; Siliquini, Roberta
2007-01-01
Current epidemiological data suggest that the number of preventive interventions aimed at controlling alcohol, drug, food abuse and smoking achieved only partial success, especially in young individuals. In order to improve the efficacy of preventive action, the literature suggests the adoption of contents and communication instruments specifically targeted to different groups of individuals. The Valentino Project is a comprehensive survey on the characteristics of abuse in a representative sample of 3000 young workers (aged 18-35 years) from the Abruzzo Region of Italy. This paper describes its main methodological issues and the complete version of the questionnaire HW-80 (Healthy-Worker 80), which will be administered. The HW-80 questionnaire includes 80 items on demographic characteristics, self-reported health, job-related stress, work organization, pattern of abuse, physical activity and others, and several of these items have been taken or derived from repeatedly validated questionnaires (SF-12, CAGE, Job-Strain, Effort-Reward, EU-DAP, etc.). The aims of the Valentino Project are to quantify the prevalence of obesity, alcohol use, smoking and drug addiction in diverse typologies of workers, and to describe their pattern of use. The ultimate purpose is to provide the necessary knowledge for the development of preventive strategies targeted to different professions, in order to maximize their efficacy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baggu, Murali; Giraldez, Julieta; Harris, Tom
In an effort to better understand the impacts of high penetrations of photovoltaic (PV) generators on distribution systems, Arizona Public Service and its partners completed a multi-year project to develop the tools and knowledge base needed to safely and reliably integrate high penetrations of utility- and residential-scale PV. Building upon the APS Community Power Project-Flagstaff Pilot, this project investigates the impact of PV on a representative feeder in northeast Flagstaff. To quantify and catalog the effects of the estimated 1.3 MW of PV that will be installed on the feeder (both smaller units at homes and large, centrally located systems), high-speed weather and electrical data acquisition systems and digital 'smart' meters were designed and installed to facilitate monitoring and to build and validate comprehensive, high-resolution models of the distribution system. These models are being developed to analyze the impacts of PV on distribution circuit protection systems (including coordination and anti-islanding), predict voltage regulation and phase balance issues, and develop volt/VAr control schemes. This paper continues from a paper presented at the 2014 IEEE PVSC conference that described feeder model evaluation and high penetration advanced scenario analysis, specifically feeder reconfiguration. This paper presents results from Phase 5 of the project. Specifically, the paper discusses tool automation, interconnection assessment methodology, and cost benefit analysis.
Acoustic Seabed Characterization of the Porcupine Bank, Irish Margin
NASA Astrophysics Data System (ADS)
O'Toole, Ronan; Monteys, Xavier
2010-05-01
The Porcupine Bank represents a large section of continental shelf situated west of the Irish landmass, located in water depths ranging between 150 and 500 m. Under the Irish National Seabed Survey (INSS 1999-2006) this area was comprehensively mapped, generating multiple acoustic datasets including high resolution multibeam echosounder data. The unique nature of the area's datasets in terms of data density, consistency and geographic extent has allowed the development of a large-scale integrated physical characterization of the Porcupine Bank for multidisciplinary applications. Integrated analysis of backscatter and bathymetry data has resulted in a baseline delineation of sediment distribution, seabed geology and geomorphological features on the bank, along with an inclusive set of related database information. The methodology used incorporates a variety of statistical techniques which are necessary in isolating sonar system artefacts and addressing sonar geometry related issues. A number of acoustic backscatter parameters at several angles of incidence have been analysed in order to complement the characterization for both surface and subsurface sediments. Acoustic sub-bottom records have also been incorporated in order to investigate the physical characteristics of certain features on the Porcupine Bank. Where available, ground-truthing information in terms of sediment samples, video footage and cores has been applied to add physical descriptors and validation to the characterization. Extensive mapping of different rock outcrops, sediment drifts, seabed features and other geological classes has been achieved using this methodology.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
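A toy version of the optimization step is sketched below: simulated annealing searching for the input value that minimizes a utility function. The one-dimensional utility here is a stand-in, not the paper's expected cross entropy, and the cooling schedule is arbitrary.

```python
# Simulated annealing over a candidate design input. Downhill moves are
# always accepted; uphill moves are accepted with a temperature-dependent
# probability, letting the search escape local minima.
import math, random

def utility(x):                        # stand-in for an information-theoretic
    return (x - 2.0) ** 2 + 0.1 * math.sin(8 * x)   # distance to be minimized

def simulated_annealing(x0=0.0, t0=1.0, cooling=0.995, steps=5000):
    random.seed(0)
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.2)
        delta = utility(cand) - utility(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        if utility(x) < utility(best):
            best = x
        t *= cooling                   # cool the temperature schedule
    return best

print(f"optimal design input ~ {simulated_annealing():.3f}")  # near x = 2
```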
Validation of source approval of HMA surface mix aggregate : final report.
DOT National Transportation Integrated Search
2016-04-01
The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...
34 CFR 462.11 - What must an application contain?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the methodology and procedures used to measure the reliability of the test. (h) Construct validity... previous test, and results from validity, reliability, and equating or standard-setting studies undertaken... NRS educational functioning levels (content validity). Documentation of the extent to which the items...
Justice, Lindsey B; Cooper, David S; Henderson, Carla; Brown, James; Simon, Katherine; Clark, Lindsey; Fleckenstein, Elizabeth; Benscoter, Alexis; Nelson, David P
2016-07-01
To improve communication during daily cardiac ICU multidisciplinary rounds. Quality improvement methodology. Twenty-five-bed cardiac ICUs in an academic free-standing pediatric hospital. All patients admitted to the cardiac ICU. Implementation of visual display of patient daily goals through a write-down and read-back process. The Rounds Effectiveness Assessment and Communication Tool was developed based on the previously validated Patient Knowledge Assessment Tool to evaluate comprehension of patient daily goals. Rounds were assessed for each patient by the bedside nurse, nurse practitioner or fellow, and attending physician, and answers were compared to determine percent agreement per day. At baseline, percent agreement for patient goals was only 62%. After initial implementation of the daily goal write-down/read-back process, which was written on paper by the bedside nurse, the Rounds Effectiveness Assessment and Communication Tool survey revealed no improvement. With adaptation of the intervention so goals were written on whiteboards for visual display during rounds, the percent agreement improved to 85%. Families were also asked to complete a survey (1-6 Likert scale) of their satisfaction with rounds and understanding of daily goals before and after the intervention. Family survey results improved from a mean of 4.6-5.7. Parent selection of the best possible score for each question was 19% at baseline and 75% after the intervention. Visual display of patient daily goals via a write-down/read-back process improves comprehension of goals by all team members and improves parent satisfaction. The daily goal whiteboard facilitates consistent development of a comprehensive plan of care for each patient, fosters goal-directed care, and provides a checklist for providers and parents to review throughout the day.
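The agreement metric above is straightforward; a minimal sketch, with invented goal answers, counts the share of patients for whom all three providers state the same daily goal.

```python
# Percent agreement across nurse, NP/fellow, and attending answers per patient.
def percent_agreement(responses):
    """responses: list of (nurse, np_or_fellow, attending) goal answers."""
    agree = sum(1 for r in responses if len(set(r)) == 1)
    return 100.0 * agree / len(responses)

rounds = [("diurese", "diurese", "diurese"),
          ("extubate", "extubate", "wean"),    # disagreement
          ("feed", "feed", "feed")]
print(f"{percent_agreement(rounds):.0f}% agreement")   # 67%
```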
When is good, good enough? Methodological pragmatism for sustainable guideline development.
Browman, George P; Somerfield, Mark R; Lyman, Gary H; Brouwers, Melissa C
2015-03-06
Continuous escalation in methodological and procedural rigor for evidence-based processes in guideline development is associated with increasing costs and production delays that threaten sustainability. While health research methodologists are appropriately responsible for promoting increasing rigor in guideline development, guideline sponsors are responsible for funding such processes. This paper acknowledges that other stakeholders in addition to methodologists should be more involved in negotiating trade-offs between methodological procedures and efficiency in guideline production to produce guidelines that are 'good enough' to be trustworthy and affordable under specific circumstances. The argument for reasonable methodological compromise to meet practical circumstances is consistent with current implicit methodological practice. This paper proposes a conceptual tool as a framework to be used by different stakeholders in negotiating, and explicitly reporting, reasonable compromises for trustworthy as well as cost-worthy guidelines. The framework helps fill a transparency gap in how methodological choices in guideline development are made. The principle, 'when good is good enough' can serve as a basis for this approach. The conceptual tool 'Efficiency-Validity Methodological Continuum' acknowledges trade-offs between validity and efficiency in evidence-based guideline development and allows for negotiation, guided by methodologists, of reasonable methodological compromises among stakeholders. Collaboration among guideline stakeholders in the development process is necessary if evidence-based guideline development is to be sustainable.
NASA Astrophysics Data System (ADS)
Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano
2018-01-01
The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have been recently developed in order to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relative low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the great problem of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm are dependent on vertical profiles of temperature, air pressure and moisture typical of each measurement site. To obtain the calibration parameters, some simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angle and covering a wide range of W, are needed. In this work yearly GNSS/GPS datasets were used for obtaining a table of photometric calibration constants, and the methodology was applied and validated in three European ESR-SKYNET network sites, characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6%, 13% and 8% for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8% for the AERONET comparison in Valencia. Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percent bias, bringing the agreement with the GPS (considered the reference) from a bias of 5.76% to 0.52%.
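As a hedged numeric sketch of the retrieval idea: sun-photometer water vapour work commonly models the 940 nm band signal as ln V = ln V0 - m·tau - a·(m·W)^b, so once the calibration constants are known, W follows by inversion. All constants below are invented, and this generic band model is not necessarily the exact form used by the authors.

```python
# Invert a generic 940 nm band model for precipitable water vapour W (cm).
import numpy as np

a, b = 0.62, 0.55      # hypothetical calibration constants for the band
ln_V0 = 9.1            # hypothetical extraterrestrial calibration constant
tau, m = 0.05, 1.4     # aerosol+Rayleigh optical depth, optical air mass

def retrieve_w(ln_V):
    """Solve ln V = ln V0 - m*tau - a*(m*W)**b for W."""
    t_wv = ln_V0 - m * tau - ln_V          # equals a*(m*W)**b
    return (t_wv / a) ** (1.0 / b) / m

ln_V_measured = ln_V0 - m * tau - a * (m * 1.8) ** b   # synthesize W = 1.8 cm
print(f"retrieved W = {retrieve_w(ln_V_measured):.2f} cm")
```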
QESA: Quarantine Extraterrestrial Sample Analysis Methodology
NASA Astrophysics Data System (ADS)
Simionovici, A.; Lemelle, L.; Beck, P.; Fihman, F.; Tucoulou, R.; Kiryukhina, K.; Courtade, F.; Viso, M.
2018-04-01
Our nondestructive, nm-scale, hyperspectral analysis methodology, combining X-ray, Raman and IR probes under BSL4 quarantine, renders our patented mini-sample holder ideal for detecting extraterrestrial life. Our Stardust and Archean results validate it.
Exploring a Framework for Consequential Validity for Performance-Based Assessments
ERIC Educational Resources Information Center
Kim, Su Jung
2017-01-01
This study explores a new comprehensive framework for understanding elements of validity, specifically for performance assessments that are administered within specific and dynamic contexts. The adoption of edTPA is a good empirical case for examining the concept of consequential validity because this assessment has been implemented at the state…
ERIC Educational Resources Information Center
Deng, Feng; Chai, Ching Sing; So, Hyo-Jeong; Qian, Yangyi; Chen, Lingling
2017-01-01
While various quantitative measures for assessing teachers' technological pedagogical content knowledge (TPACK) have developed rapidly, few studies to date have comprehensively validated the structure of TPACK through various criteria of validity especially for content specific areas. In this paper, we examined how the TPACK survey measure is…
ERIC Educational Resources Information Center
Klesius, Janell P.; Homan, Susan P.
1985-01-01
The article reviews validity and reliability studies on the informal reading inventory, a diagnostic instrument to identify reading grade-level placement and strengths and weaknesses in work recognition and comprehension. Gives suggestions to improve the validity and reliability of existing inventories and to evaluate them in newly published…
Variety and Drift in the Functions and Purposes of Assessment in K-12 Education
ERIC Educational Resources Information Center
Ho, Andrew D.
2014-01-01
Background/Context: The target of assessment validation is not an assessment but the use of an assessment for a purpose. Although the validation literature often provides examples of assessment purposes, comprehensive reviews of these purposes are rare. Additionally, assessment purposes posed for validation are generally described as discrete and…
ERIC Educational Resources Information Center
Huesman, Ronald L., Jr.; Frisbie, David A.
This study investigated the effect of extended-time limits in terms of performance levels and score comparability for reading comprehension scores on the Iowa Tests of Basic Skills (ITBS). The first part of the study compared the average reading comprehension scores on the ITBS of 61 sixth-graders with learning disabilities and 397 non learning…
ERIC Educational Resources Information Center
Landon, Laura L.
2017-01-01
This study examines the application of the Simple View of Reading (SVR), a reading comprehension theory focusing on word recognition and linguistic comprehension, to English Language Learners' (ELLs') English reading development. This study examines the concurrent and predictive validity of two components of the SVR, oral language and word-level…
Evaluating building performance in healthcare facilities: an organizational perspective.
Steinke, Claudia; Webster, Lynn; Fontaine, Marie
2010-01-01
Using the environment as a strategic tool is one of the most cost-effective and enduring approaches for improving public health; however, it is one that requires multiple perspectives. The purpose of this article is to highlight an innovative methodology that has been developed for conducting comprehensive performance evaluations in public sector health facilities in Canada. The building performance evaluation methodology described in this paper is a government initiative. The project team developed a comprehensive building evaluation process for all new capital health projects that would respond to the aforementioned need for stakeholders to be more accountable and to better integrate the larger organizational strategy of facilities. The Balanced Scorecard, which is a multiparadigmatic, performance-based business framework, serves as the underlying theoretical framework for this initiative. It was applied in the development of the conceptual model entitled the Building Performance Evaluation Scorecard, which provides the following benefits: (1) It illustrates a process to link facilities more effectively to the overall mission and goals of an organization; (2) It is both a measurement and a management system that has the ability to link regional facilities to measures of success and larger business goals; (3) It provides a standardized methodology that ensures consistency in assessing building performance; and (4) It is more comprehensive than traditional building evaluations. The methodology presented in this paper is both a measurement and management system that integrates the principles of evidence-based design with the practices of pre- and post-occupancy evaluation. It promotes accountability and continues throughout the life cycle of a project. The advantage of applying this framework is that it engages health organizations in clarifying a vision and strategy for their facilities and helps translate those strategies into action and measurable performance outcomes.
Sutton, Patrice
2014-01-01
Background: Synthesizing what is known about the environmental drivers of health is instrumental to taking prevention-oriented action. Methods of research synthesis commonly used in environmental health lag behind systematic review methods developed in the clinical sciences over the past 20 years. Objectives: We sought to develop a proof of concept of the “Navigation Guide,” a systematic and transparent method of research synthesis in environmental health. Discussion: The Navigation Guide methodology builds on best practices in research synthesis in evidence-based medicine and environmental health. Key points of departure from current methods of expert-based narrative review prevalent in environmental health include a prespecified protocol, standardized and transparent documentation including expert judgment, a comprehensive search strategy, assessment of “risk of bias,” and separation of the science from values and preferences. Key points of departure from evidence-based medicine include assigning a “moderate” quality rating to human observational studies and combining diverse evidence streams. Conclusions: The Navigation Guide methodology is a systematic and rigorous approach to research synthesis that has been developed to reduce bias and maximize transparency in the evaluation of environmental health information. Although novel aspects of the method will require further development and validation, our findings demonstrated that improved methods of research synthesis under development at the National Toxicology Program and under consideration by the U.S. Environmental Protection Agency are fully achievable. The institutionalization of robust methods of systematic and transparent review would provide a concrete mechanism for linking science to timely action to prevent harm. Citation: Woodruff TJ, Sutton P. 2014. The Navigation Guide systematic review methodology: a rigorous and transparent method for translating environmental health science into better health outcomes. Environ Health Perspect 122:1007–1014; http://dx.doi.org/10.1289/ehp.1307175 PMID:24968373
Quality of reporting of surveys in critical care journals: a methodologic review.
Duffett, Mark; Burns, Karen E; Adhikari, Neill K; Arnold, Donald M; Lauzier, François; Kho, Michelle E; Meade, Maureen O; Hayani, Omar; Koo, Karen; Choong, Karen; Lamontagne, François; Zhou, Qi; Cook, Deborah J
2012-02-01
Adequate reporting is needed to judge methodologic quality and assess the risk of bias of surveys. The objective of this study is to describe the methodology and quality of reporting of surveys published in five critical care journals. All issues (1996-2009) of the American Journal of Respiratory and Critical Care Medicine, Critical Care, Critical Care Medicine, Intensive Care Medicine, and Pediatric Critical Care Medicine. Two reviewers hand-searched all issues in duplicate. We included publications of self-administered questionnaires of health professionals and excluded surveys that were part of a multi-method study or measured the effect of an intervention. Data were abstracted in duplicate. We included 151 surveys. The frequency of survey publication increased at an average rate of 0.38 surveys per 1000 citations per year from 1996-2009 (p for trend = 0.001). The median number of respondents and reported response rates were 217 (interquartile range 90 to 402) and 63.3% (interquartile range 45.0% to 81.0%), respectively. Surveys originated predominantly from North America (United States [40.4%] and Canada [18.5%]). Surveys most frequently examined stated practice (78.8%), attitudes or opinions (60.3%), and less frequently knowledge (9.9%). The frequency of reporting on the survey design and methods were: 1) instrument development: domains (59.1%), item generation (33.1%), item reduction (12.6%); 2) instrument testing: pretesting or pilot testing (36.2%) and assessments of clarity (25.2%) or clinical sensibility (15.7%); and 3) clinimetric properties: qualitative or quantitative description of at least one of face, content, construct validity, intra- or inter-rater reliability, or consistency (28.5%). The reporting of five key elements of survey design and conduct did not significantly change over time. Surveys, primarily conducted in North America and focused on self-reported practice, are increasingly published in highly cited critical care journals. More uniform and comprehensive reporting will facilitate assessment of methodologic quality.
Validation of source approval of HMA surface mix aggregate using spectrometer : final report.
DOT National Transportation Integrated Search
2016-04-01
The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...
Validation of hot-poured crack sealant performance-based guidelines.
DOT National Transportation Integrated Search
2017-06-01
This report summarizes a comprehensive research effort to validate thresholds for performance-based guidelines and grading system for hot-poured asphalt crack sealants. A series of performance tests were established in earlier research and includ...
Kaech Moll, Veronika M; Escorpizo, Reuben; Portmann Bergamaschi, Ruth; Finger, Monika E
2016-08-01
The Comprehensive ICF Core Set for vocational rehabilitation (VR) is a list of essential categories on functioning based on the World Health Organization (WHO) International Classification of Functioning, Disability and Health (ICF), which describes a standard for interdisciplinary assessment, documentation, and communication in VR. The aim of this study was to examine the content validity of the Comprehensive ICF Core Set for VR from the perspective of physical therapists. A 3-round email survey was performed using the Delphi method. A convenience sample of international physical therapists working in VR, each with ≥2 years of work experience, was asked to identify aspects they consider relevant when evaluating or treating clients in VR. Responses were linked to the ICF categories and compared with the Comprehensive ICF Core Set for VR. Sixty-two physical therapists from all 6 WHO world regions responded with 3,917 statements that were subsequently linked to 338 ICF categories. Fifteen (17%) of the 90 categories in the Comprehensive ICF Core Set for VR were confirmed by the physical therapists in the sample. Twenty-two additional ICF categories were identified that were not included in the Comprehensive ICF Core Set for VR. Vocational rehabilitation in physical therapy is not well defined in every country, which might have contributed to the small sample size; therefore, the results cannot be generalized to all physical therapists practicing in VR. The content validity of the ICF Core Set for VR is insufficient from the physical therapist perspective alone. The results of this study could be used to define a physical therapy-specific set of ICF categories to develop and guide physical therapist clinical practice in VR. © 2016 American Physical Therapy Association.
NASA Astrophysics Data System (ADS)
Wetzel, Angela Payne
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicate significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize this gap in best practices and subsequently to promote instrument development research that is more consistent through the peer-review process.
Dreier, Maren; Borutta, Birgit; Stahmeyer, Jona; Krauth, Christian; Walter, Ulla
2010-06-14
Health care policy background: Findings from scientific studies form the basis for evidence-based health policy decisions. Quality assessments to evaluate the credibility of study results are an essential part of health technology assessment reports and systematic reviews. Quality assessment tools (QAT) for assessing study quality examine the extent to which study results are systematically distorted by confounding or bias (internal validity). The tools can be divided into checklists, scales and component ratings. Which QAT are available to assess the quality of interventional studies or studies in the field of health economics, how do they differ from each other, and what conclusions can be drawn from these results for quality assessments? A systematic search of relevant databases from 1988 onwards was performed, supplemented by screening of references, of the HTA reports of the German Agency for Health Technology Assessment (DAHTA), and an internet search. The selection of relevant literature, the data extraction and the quality assessment were carried out by two independent reviewers. The substantive elements of the QAT were extracted using a modified criteria list consisting of items and domains specific to randomized trials, observational studies, diagnostic studies, systematic reviews and health economic studies. Based on the number of covered items and domains, more and less comprehensive QAT were distinguished. In order to exchange experiences regarding problems in the practical application of the tools, a workshop was hosted. A total of eight systematic methodological reviews was identified, as well as 147 QAT: 15 for systematic reviews, 80 for randomized trials, 30 for observational studies, 17 for diagnostic studies and 22 for health economic studies. The tools vary considerably with regard to their content, performance and quality of operationalisation. Some tools include not only items on internal validity but also items on quality of reporting and external validity. No tool covers all elements or domains. Design-specific generic tools are presented which cover most of the content criteria. The evaluation of QAT by content criteria is difficult because there is no scientific consensus on the necessary elements of internal validity, and not all of the generally accepted elements are based on empirical evidence. Comparing QAT with regard to content neglects the operationalisation of the respective parameters, whose quality and precision are important for transparency, replicability, correct assessment and interrater reliability. QAT that mix items on quality of reporting and internal validity should be avoided. Different design-specific tools are available and should be preferred for quality assessment because of their wider coverage of the substantive elements of internal validity. To minimise the subjectivity of the assessment, tools with a detailed and precise operationalisation of the individual elements should be applied. For health economic studies, tools should be developed and complemented with instructions that define the appropriateness of the criteria. Further research is needed to identify study characteristics that influence the internal validity of studies.
McAlpine, Jessica N; Greimel, Elfriede; Brotto, Lori A; Nout, Remy A; Shash, Emad; Avall-Lundqvist, Elisabeth; Friedlander, Michael L; Joly, Florence
2014-11-01
Quality of life (QoL) in endometrial cancer (EC) is understudied. Incorporation of QoL questionnaires and patient-reported outcomes in clinical trials has been inconsistent, and the tools and interpretation of these measures are unfamiliar to most practitioners. In 2012, the Gynecologic Cancer InterGroup Symptom Benefit Working Group convened for a collaborative brainstorming session to address deficiencies and work toward improving the quality and quantity of QoL research in women with EC. Through literature review and international expert contributions, we compiled a comprehensive appraisal of current generic and disease site-specific QoL assessment tools, the strengths and weaknesses of these measures, assessment of sexual health, statistical considerations, and an exploration of the unique array of histopathologic and clinical factors that may influence QoL outcomes in women with EC. This collaborative composition is the first publication specific to EC that addresses methodology in QoL research and the components necessary to achieve high-quality QoL data in clinical trials. Future recommendations are presented regarding (1) the incorporation of patient-reported outcomes in all clinical trials in EC, (2) definition of an a priori hypothesis, (3) utilization of validated tools and consideration of new tools corresponding to new therapies or specific symptoms, (4) publication within the same time frame as clinical outcome data, and (5) attempts to correct for disease site-specific potential confounders. Improved understanding of methodology in QoL research and an increased undertaking of EC-specific QoL research in clinical trials are imperative if we are to improve outcomes in women with EC.
Closing the Certification Gaps in Adaptive Flight Control Software
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2008-01-01
Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national airspace has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, and the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and the further research efforts that will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.
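One of the gap areas named above, on-line software monitoring for adaptive controller health assessment, lends itself to a small illustration. The following Python sketch is hypothetical and not from the paper: it watches a sliding window of tracking errors and the norm of the adapted weights, with made-up thresholds, to show the kind of runtime check a certification case might reference.

```python
from collections import deque

class AdaptationMonitor:
    """Hypothetical on-line health monitor for an adaptive controller.

    Flags two certification-relevant symptoms: tracking error that is
    not converging, and runaway parameter adaptation (weight drift).
    """

    def __init__(self, window=200, error_bound=0.5, drift_bound=10.0):
        self.errors = deque(maxlen=window)   # sliding window of |error|
        self.error_bound = error_bound       # allowed mean tracking error
        self.drift_bound = drift_bound       # allowed norm of adapted weights

    def update(self, tracking_error, weight_norm):
        """Call once per control cycle; returns a list of alert strings."""
        self.errors.append(abs(tracking_error))
        alerts = []
        window_full = len(self.errors) == self.errors.maxlen
        if window_full and sum(self.errors) / len(self.errors) > self.error_bound:
            alerts.append("tracking error not converging")
        if weight_norm > self.drift_bound:
            alerts.append("parameter drift exceeds bound")
        return alerts

# Usage: a non-empty return value would trigger reversion to a
# certified baseline (non-adaptive) controller.
monitor = AdaptationMonitor()
print(monitor.update(0.1, 2.0))  # [] -> healthy so far
```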
Lerner, E Brooke; Garrison, Herbert G; Nichol, Graham; Maio, Ronald F; Lookman, Hunaid A; Sheahan, William D; Franz, Timothy R; Austad, James D; Ginster, Aaron M; Spaite, Daniel W
2012-02-01
Calculating the cost of an emergency medical services (EMS) system using a standardized method is important for determining the value of EMS. This article describes the development of a methodology for calculating the cost of an EMS system to its community. This includes a tool for calculating the cost of EMS (the "cost workbook") and detailed directions for determining cost (the "cost guide"). The 12-step process that was developed is consistent with current theories of health economics, applicable to prehospital care, flexible enough to be used in varying sizes and types of EMS systems, and comprehensive enough to provide meaningful conclusions. It was developed by an expert panel (the EMS Cost Analysis Project [EMSCAP] investigator team) in an iterative process that included pilot testing the process in three diverse communities. The iterative process allowed ongoing modification of the toolkit during the development phase, based upon direct, practical, ongoing interaction with the EMS systems that were using the toolkit. The resulting methodology estimates EMS system costs within a user-defined community, allowing either the number of patients treated or the estimated number of lives saved by EMS to be assessed in light of the cost of those efforts. Much controversy exists about the cost of EMS and whether the resources spent for this purpose are justified. However, the existence of a validated toolkit that provides a standardized process will allow meaningful assessments and comparisons to be made and will supply objective information to inform EMS and community officials who are tasked with determining the utilization of scarce societal resources. © 2012 by the Society for Academic Emergency Medicine.
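The cost workbook itself is not reproduced in the abstract; as a rough illustration of the kind of aggregation such a toolkit implies, here is a minimal Python sketch with hypothetical cost categories and figures (the actual 12-step EMSCAP process is far more detailed):

```python
from dataclasses import dataclass, field

@dataclass
class EMSCostWorkbook:
    """Toy cost workbook: annualized cost categories for an EMS system
    within a user-defined community (all figures hypothetical, USD/yr)."""
    categories: dict = field(default_factory=dict)

    def add(self, name, annual_cost):
        self.categories[name] = self.categories.get(name, 0.0) + annual_cost

    def total(self):
        return sum(self.categories.values())

    def cost_per(self, denominator):
        """Cost per patient treated or per estimated life saved."""
        return self.total() / denominator

wb = EMSCostWorkbook()
wb.add("personnel", 2_400_000)
wb.add("vehicles and equipment (annualized)", 350_000)
wb.add("dispatch and overhead", 450_000)
print(wb.total())        # 3,200,000 total annual cost
print(wb.cost_per(160))  # e.g., per estimated life saved -> 20,000
```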
A new approach for the assessment of temporal clustering of extratropical wind storms
NASA Astrophysics Data System (ADS)
Schuster, Mareike; Eddounia, Fadoua; Kuhnel, Ivan; Ulbrich, Uwe
2017-04-01
A widely used methodology for assessing the clustering of storms in a region is based on the dispersion statistics of a simple homogeneous Poisson process. This clustering measure is the ratio of the variance to the mean of the local storm counts per grid point. Values larger than 1, i.e. when the variance exceeds the mean, indicate clustering, while values lower than 1 indicate a sequencing of storms that is more regular than a random process. A disadvantage of this methodology, however, is that the characteristics are valid only for a pre-defined climatological period, so the temporal variability of clustering cannot be identified. Moreover, the absolute value of the dispersion statistic is not particularly intuitive. We have developed an approach to describing the temporal clustering of storms that is more intuitive and at the same time allows temporal variations to be assessed. The approach is based on the local distribution of waiting times between two consecutive storm events, computed by post-processing individual windstorm tracks, which in turn are obtained with an objective tracking algorithm. Based on this distribution, a threshold can be set, either at the waiting time expected from a random process or at a quantile of the observed distribution, to determine whether two consecutive windstorm events count as part of a (temporal) cluster. We analyze extratropical windstorms in a reanalysis dataset and compare the results of the traditional clustering measure with those of our new methodology. We assess what range of clustering events (in terms of duration and frequency) is covered and identify whether the historically known clustered seasons are detectable by the new clustering measure in the reanalysis.
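Both clustering measures described here are easy to state concretely. The following Python sketch (illustrative only; the function names and toy numbers are not from the study) computes the variance-to-mean dispersion index and groups events into temporal clusters with a waiting-time threshold:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio of storm counts per period.

    > 1 suggests clustering; < 1 suggests sequencing more regular
    than a homogeneous Poisson process.
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=0) / counts.mean()

def waiting_time_clusters(event_times, threshold):
    """Group sorted event times whose waiting time is <= threshold."""
    clusters = [[event_times[0]]]
    for prev, curr in zip(event_times, event_times[1:]):
        if curr - prev <= threshold:
            clusters[-1].append(curr)
        else:
            clusters.append([curr])
    return clusters

# Toy example: storm days within one winter season
storms = [2.0, 3.5, 4.0, 20.0, 21.5, 40.0]
waits = np.diff(storms)
# Threshold from a quantile of the observed waiting-time distribution
# (alternatively: the mean waiting time expected from a random process).
threshold = np.quantile(waits, 0.25)
print(dispersion_index([3, 0, 5, 1, 6]))        # ~1.73 > 1 -> clustered
print(waiting_time_clusters(storms, threshold)) # three temporal clusters
```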
Miguel, Susana; Caldeira, Sílvia; Vieira, Margarida
2018-04-01
This article describes the adequacy of the Q methodology as a new option for the validation of nursing diagnoses related to subjective foci. It is a discussion paper about the characteristics of the Q methodology. This method has been used in nursing research, particularly in relation to subjective concepts, and includes both a quantitative and a qualitative dimension. The Q methodology seems to be an adequate and innovative method for the clinical validation of nursing diagnoses. The validation of nursing diagnoses related to subjective foci using the Q methodology could improve the level of evidence and provide nurses with clinical indicators for clinical reasoning and for the planning of effective interventions. © 2016 NANDA International, Inc.
Psychometric evaluation of commonly used game-specific skills tests in rugby: A systematic review
Oorschot, Sander; Chiwaridzo, Matthew; CM Smits-Engelsman, Bouwien
2017-01-01
Objectives: To (1) give an overview of commonly used game-specific skills tests in rugby and (2) evaluate the available psychometric information on these tests. Methods: The databases PubMed, MEDLINE, CINAHL and Africa-Wide Information were systematically searched for articles published between January 1995 and March 2017. First, commonly used game-specific skills tests were identified. Second, the available psychometrics of these tests were evaluated, and the methodological quality of the studies was assessed using the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) checklist. Studies included in the first step had to report detailed information on the construct and testing procedure of at least one game-specific skill; studies included in the second step additionally had to report at least one psychometric property evaluating reliability, validity or responsiveness. Results: 287 articles were identified in the first step, of which 30 met the inclusion criteria; 64 articles were identified in the second step, of which 10 were included. Reactive agility, tackling and simulated rugby games were the most commonly used tests. All 10 studies reporting psychometrics reported reliability outcomes, revealing mainly strong evidence; however, all of these studies scored poor or fair on methodological quality. Four studies reported validity outcomes, for which mainly moderate evidence was indicated, but all had fair methodological quality. Conclusion: Game-specific skills tests showed mainly strong evidence of reliability and validity, but the underlying studies lacked methodological quality. Reactive agility seems to be a promising domain, but the specific tests need further development. Future studies of high methodological quality are required in order to develop valid and reliable test batteries for rugby talent identification. Trial registration number: PROSPERO CRD42015029747. PMID:29259812
Methodological challenges when doing research that includes ethnic minorities: a scoping review.
Morville, Anne-Le; Erlandsson, Lena-Karin
2016-11-01
There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems within occupational therapy research when ethnic minorities are included. A thorough literature search yielded 21 articles from the scientific databases PubMed, Cinahl, Web of Science and PsychInfo. The analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues concerning the entire research process: defining and recruiting samples, conceptual understanding, the lack of appropriate instruments, data collection using interpreters, and data analysis. In order to avoid excluding ethnic minorities from adequate occupational therapy research and interventions, methods need to be developed for the entire research process. This is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.
Experimental Validation of an Integrated Controls-Structures Design Methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.
1996-01-01
The first experimental validation of an integrated controls-structures design methodology for a class of large-order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, the amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
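As a rough illustration of the kind of "average control power" comparison reported above, the following Python sketch designs LQR controllers (the state-feedback core of LQG) for two hypothetical single-mode structural models, with the structural redesign represented crudely by higher passive damping, and compares the average control effort along the closed-loop response. All models and numbers are invented for illustration and do not reproduce the CSI evolutionary model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K = R^{-1} B^T P from the Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def avg_control_power(A, B, K, x0, dt=1e-3, T=20.0):
    """Average of u^T u along the closed-loop trajectory (forward Euler)."""
    x, total, n = x0.copy(), 0.0, int(T / dt)
    for _ in range(n):
        u = -K @ x
        total += float(u @ u)
        x = x + dt * (A @ x + B @ u)
    return total / n

def plant(m, k, c):
    """One structural mode modeled as a mass-spring-damper."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    return A, B

Q, R = np.eye(2), np.eye(1)
x0 = np.array([1.0, 0.0])
for label, (m, k, c) in {"nominal": (1.0, 4.0, 0.05),
                         "redesigned": (1.0, 4.0, 0.4)}.items():
    A, B = plant(m, k, c)
    K = lqr_gain(A, B, Q, R)
    print(label, avg_control_power(A, B, K, x0))
```

The better-damped "redesigned" model needs less control effort to meet the same regulation objective, which is the qualitative effect the experiment above demonstrates for a real structure.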
NASA Astrophysics Data System (ADS)
Pawar, Sumedh; Sharma, Atul
2018-01-01
This work presents a mathematical model and solution methodology for a multiphysics engineering problem on arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by the solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated by excellent agreement with published results. The developed mathematical model and the user-defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.
Payload training methodology study
NASA Technical Reports Server (NTRS)
1990-01-01
The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PTC) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept covers the entire payload training program, from the initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.
NASA Astrophysics Data System (ADS)
Sarout, Joël
2012-04-01
For the first time, a comprehensive and quantitative analysis of the domains of validity of popular wave propagation theories for porous/cracked media is provided. The case of a simple, yet versatile rock microstructure is detailed. The microstructural parameters controlling the applicability of the scattering theories, the effective medium theories, and quasi-static (Gassmann limit) and dynamic (inertial) poroelasticity are analysed in terms of pore/crack characteristic size, geometry and connectivity. To this end, a new permeability model is devised combining the hydraulic radius and percolation concepts. The predictions of this model are compared to published micromechanical models of permeability for the limiting cases of capillary tubes and penny-shaped cracks, and to published experimental data on natural rocks in these limiting cases. The model explicitly accounts for pore space topology around the percolation threshold and far above it. Thanks to this permeability model, the scattering, squirt-flow and Biot cut-off frequencies are quantitatively compared. This comparison leads to an explicit mapping of the domains of validity of these wave propagation theories as a function of the rock's actual microstructure. How this mapping impacts the interpretation of seismic, geophysical and ultrasonic wave velocity data is discussed. The methodology demonstrated here and the outcomes of this analysis are meant to constitute a quantitative guide for selecting the most suitable modelling strategy for the prediction and/or interpretation of rock elastic properties in laboratory- or field-scale applications when information regarding the rock's microstructure is available.
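For context on the cut-off frequency comparison, the standard order-of-magnitude expressions for the Biot and squirt-flow characteristic frequencies (textbook forms, not the specific permeability-based model devised in this paper) can be evaluated in a few lines of Python; the input values below are generic assumptions for a water-saturated rock:

```python
import numpy as np

def biot_frequency(phi, eta, rho_f, kappa):
    """Biot characteristic frequency f_B = phi*eta / (2*pi*rho_f*kappa), in Hz."""
    return phi * eta / (2.0 * np.pi * rho_f * kappa)

def squirt_frequency(K_grain, aspect_ratio, eta):
    """Squirt-flow characteristic frequency f_sq ~ K * alpha^3 / eta, in Hz."""
    return K_grain * aspect_ratio**3 / eta

# Order-of-magnitude inputs for a water-saturated rock (SI units)
phi, eta, rho_f = 0.15, 1.0e-3, 1000.0  # porosity, viscosity Pa.s, density kg/m^3
kappa = 1.0e-14                          # permeability, m^2 (~10 mD)
K_grain, alpha = 37e9, 1e-3              # quartz bulk modulus Pa, crack aspect ratio

f_biot = biot_frequency(phi, eta, rho_f, kappa)      # ~2.4 MHz here
f_squirt = squirt_frequency(K_grain, alpha, eta)     # ~37 kHz here
# A measurement band (seismic ~10-100 Hz, ultrasonic ~1 MHz) sits in the
# quasi-static (Gassmann) domain only if it lies well below both frequencies.
print(f"f_Biot ~ {f_biot:.3g} Hz, f_squirt ~ {f_squirt:.3g} Hz")
```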
Bio-Optical Measurement and Modeling of the California Current and Polar Oceans. Chapter 13
NASA Technical Reports Server (NTRS)
Mitchell, B. Greg
2001-01-01
This Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project contract supports in situ ocean optical observations in the California Current, Southern Ocean, and Indian Ocean, as well as the merger of other in situ data sets we have collected on various global cruises supported by separate grants or contracts. The principal goals of our research are to validate standard or experimental products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with advanced radiative transfer modeling to contribute to satellite vicarious radiometric calibration and advanced algorithm development. In collaboration with major oceanographic ship-based observation programs funded by various agencies (CalCOFI, US JGOFS, NOAA AMLR, INDOEX and Japan/East Sea), our SIMBIOS effort has resulted in data from diverse bio-optical provinces. For these global deployments we generate a high-quality, methodologically consistent data set encompassing a wide range of oceanic conditions. Global data collected in recent years have been integrated with our ongoing CalCOFI database and have been used to evaluate Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithms and to carry out validation studies. The combined database we have assembled now comprises more than 700 stations and includes observations for the clearest oligotrophic waters, highly eutrophic blooms, red tides and coastal Case 2 conditions. The data have been used to validate water-leaving radiance estimated with SeaWiFS as well as bio-optical algorithms for chlorophyll pigments. The comprehensive data set is utilized for the development of experimental algorithms (e.g., high-low latitude pigment transition, phytoplankton absorption, and CDOM).
Geytenbeek, Joke J; Mokkink, Lidwine B; Knol, Dirk L; Vermeulen, R Jeroen; Oostrom, Kim J
2014-09-01
In clinical practice, a variety of diagnostic tests are available to assess a child's comprehension of spoken language. However, none of these tests have been designed specifically for use with children who have severe motor impairments and who experience severe difficulty when using speech to communicate. This article describes the process of investigating the reliability and validity of the Computer-Based Instrument for Low Motor Language Testing (C-BiLLT), which was specifically developed to assess spoken Dutch language comprehension in children with cerebral palsy and complex communication needs. The study included 806 children with typical development, and 87 nonspeaking children with cerebral palsy and complex communication needs, and was designed to provide information on the psychometric qualities of the C-BiLLT. The potential utility of the C-BiLLT as a measure of spoken Dutch language comprehension abilities for children with cerebral palsy and complex communication needs is discussed.
Review of evaluation on ecological carrying capacity: The progress and trend of methodology
NASA Astrophysics Data System (ADS)
Wang, S. F.; Xu, Y.; Liu, T. J.; Ye, J. M.; Pan, B. L.; Chu, C.; Peng, Z. L.
2018-02-01
Ecological carrying capacity (ECC) has been regarded as an important indicator of the level of regional sustainable development since the beginning of the twenty-first century. Through a brief review of the main progress in ECC evaluation methodologies over the past five years, this paper systematically discusses the features of and differences between these methods and expounds the current state and future development trend of ECC methodology. The results show that further exploration of dynamic, comprehensive and intelligent assessment technologies is needed in order to form a unified and scientific ECC methodology system and to produce a reliable basis for environmental-economic decision-making.