Minimum reporting standards for clinical research on groin pain in athletes
Delahunt, Eamonn; Thorborg, Kristian; Khan, Karim M; Robinson, Philip; Hölmich, Per; Weir, Adam
2015-01-01
Groin pain in athletes is a priority area for sports physiotherapy and sports medicine research. Heterogeneous studies with low methodological quality dominate research related to groin pain in athletes. Low-quality studies undermine the external validity of research findings and limit the ability to generalise findings to the target patient population. Minimum reporting standards for research on groin pain in athletes are overdue. We propose a set of minimum reporting standards based on best available evidence to be utilised in future research on groin pain in athletes. Minimum reporting standards are provided in relation to: (1) study methodology, (2) study participants and injury history, (3) clinical examination, (4) clinical assessment and (5) radiology. Adherence to these minimum reporting standards will strengthen the quality and transparency of research conducted on groin pain in athletes. This will allow an easier comparison of outcomes across studies in the future. PMID:26031644
This report is a standardized methodology description for the determination of strong acidity of fine particles (less than 2.5 microns) in ambient air using annular denuder technology. This methodology description includes two parts: Part A - Standard Method and Part B - Enhanced M...
NASA Astrophysics Data System (ADS)
Skouloudis, Antonis; Evangelinos, Konstantinos; Kourmousis, Fotis
2009-08-01
The purpose of this article is twofold. First, evaluation scoring systems for triple bottom line (TBL) reports to date are examined and potential methodological weaknesses and problems are highlighted. In this context, a new assessment methodology is presented based explicitly on the most widely acknowledged standard on non-financial reporting worldwide, the Global Reporting Initiative (GRI) guidelines. The set of GRI topics and performance indicators was converted into scoring criteria while the generic scoring scale was set from 0 to 4 points. Secondly, the proposed benchmark tool was applied to the TBL reports published by Greek companies. Results reveal major gaps in reporting practices, stressing the need for the further development of internal systems and processes in order to collect essential non-financial performance data. A critical overview of the structure and rationale of the evaluation tool in conjunction with the Greek case study is discussed, while recommendations for future research in the field of this relatively new form of reporting are suggested.
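The scoring methodology described above converts GRI indicators into criteria scored on a 0-4 scale. A minimal sketch of how such per-indicator scores might be aggregated into a report-level benchmark follows; the indicator codes and score values are hypothetical placeholders, not taken from the actual assessment tool.

```python
# Illustrative aggregation of GRI-style indicator scores (each 0-4)
# into a percentage of the maximum attainable score. The indicator
# codes and values below are hypothetical examples only.

def report_score(indicator_scores):
    """Return the total score as a percentage of the maximum (4 per item)."""
    if not indicator_scores:
        return 0.0
    max_total = 4 * len(indicator_scores)
    return 100.0 * sum(indicator_scores.values()) / max_total

scores = {"EC1": 4, "EN3": 2, "LA7": 1, "SO1": 3}  # hypothetical indicators
print(round(report_score(scores), 1))  # → 62.5
```

A percentage-of-maximum aggregate makes reports with different numbers of applicable indicators comparable, which is one plausible design choice for such a benchmark tool.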
2012-04-18
Rigorous methodological standards help to ensure that medical research produces information that is valid and generalizable, and are essential in patient-centered outcomes research (PCOR). Patient-centeredness refers to the extent to which the preferences, decision-making needs, and characteristics of patients are addressed, and is the key characteristic differentiating PCOR from comparative effectiveness research. The Patient Protection and Affordable Care Act signed into law in 2010 created the Patient-Centered Outcomes Research Institute (PCORI), which includes an independent, federally appointed Methodology Committee. The Methodology Committee is charged to develop methodological standards for PCOR. The 4 general areas identified by the committee in which standards will be developed are (1) prioritizing research questions, (2) using appropriate study designs and analyses, (3) incorporating patient perspectives throughout the research continuum, and (4) fostering efficient dissemination and implementation of results. A Congressionally mandated PCORI methodology report (to be issued in its first iteration in May 2012) will begin to provide standards in each of these areas, and will inform future PCORI funding announcements and review criteria. The work of the Methodology Committee is intended to enable generation of information that is relevant and trustworthy for patients, and to enable decisions that improve patient-centered outcomes.
ERIC Educational Resources Information Center
Herman, Joan L.
2009-01-01
In this report, Joan Herman, director for the National Center for Research on Evaluation, Standards, & Student Testing (CRESST), recommends that the new generation of science standards be based on lessons learned from current practice and on recent examples of standards-development methodology. In support of this, recent, promising efforts to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, C.Y.; Glaser, R.E.; Mensing, R.W.
1996-08-01
The Aircraft Crash Risk Analysis Methodology (ACRAM) Panel has been formed by the US Department of Energy Office of Defense Programs (DOE/DP) for the purpose of developing a standard methodology for determining the risk from aircraft crashes onto DOE ground facilities. In order to accomplish this goal, the ACRAM panel has been divided into four teams: the data development team, the model evaluation team, the structural analysis team, and the consequence team. Each team, consisting of at least one member of the ACRAM panel plus additional DOE and DOE contractor personnel, specializes in the development of the methodology assigned to that team. This report documents the work performed by the data development team and provides the technical basis for the data used by the ACRAM Standard for determining the aircraft crash frequency. This report should be used to provide the generic data needed to calculate the aircraft crash frequency into the facility under consideration as part of the process for determining the aircraft crash risk to ground facilities as given by the DOE Standard Aircraft Crash Risk Assessment Methodology (ACRAM). Some broad guidance is presented on how to obtain the needed site-specific and facility-specific data, but this data is not provided by this document.
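Crash-frequency methodologies of this kind are commonly expressed as a sum, over aircraft categories, of a product of factors: operations per year, crash rate per operation, crash-location probability per unit area, and effective facility area. The sketch below illustrates that general four-factor form; every numerical input is invented for illustration and is not data from this report or the DOE standard.

```python
# Illustrative four-factor crash-frequency estimate of the general form
# F = sum over categories of N * P * f * A, where N is operations per
# year, P is crashes per operation, f is the crash-location probability
# per unit area near the site, and A is the effective facility area.
# All numbers below are hypothetical, not values from the ACRAM data.

def crash_frequency(categories):
    """categories: iterable of (N, P, f, A) tuples; returns crashes/yr."""
    return sum(n * p * f * a for n, p, f, a in categories)

cats = [
    (50_000, 1e-7, 2e-3, 0.01),  # hypothetical general-aviation traffic
    (10_000, 5e-8, 1e-3, 0.01),  # hypothetical commercial traffic
]
print(crash_frequency(cats))
```

Summing per-category products lets each aircraft type carry its own crash rate and spatial distribution, which is why generic data tables like those in this report are organized by aircraft category.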
GEOTHERMAL EFFLUENT SAMPLING WORKSHOP
This report outlines the major recommendations resulting from a workshop to identify gaps in existing geothermal effluent sampling methodologies, define needed research to fill those gaps, and recommend strategies to lead to a standardized sampling methodology.
Byrne, Jillian L S; Yee, Tamara; O'Connor, Kathleen; Dyson, Michele P; Ball, Geoff D C
2017-04-01
To assess registration and reporting details of randomized controlled trials (RCTs) published from 2011 to 2016 across four obesity journals. All issues from four leading obesity journals were searched systematically for RCTs from January 2011 to June 2016. Data on registration status were extracted from manuscripts, online trial registries, and a trial database; corresponding authors were contacted for registration details, when necessary. The methodological reporting of RCTs was assessed on specific criteria from the Consolidated Standards of Reporting Trials. A total of 223 RCTs were reviewed. Three-quarters (n = 170) were registered publicly; 94 (55.3%) reported registration details in the manuscript, and 82 (48.2%) were registered prospectively. Newer RCTs were more likely to be registered prospectively than older RCTs (2014-2016: 57.3% vs. 2011-2013: 39.2%; χ2 = 5.5, P = 0.02). Assessment on the Consolidated Standards of Reporting Trials demonstrated that less than half of all studies reported data collection dates (n = 108; 48.4%) or included "randomized trial" in the title (n = 89; 39.9%). The methodological reporting of RCTs published in obesity journals is suboptimal, despite current guidelines and policies. To complement existing standards, editorial boards should incorporate mandatory fields within the online manuscript submission process to enhance the quality, transparency, and comprehensiveness of reporting RCTs in obesity journals. © 2017 The Obesity Society.
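The period comparison above uses a Pearson chi-square test on a 2x2 table (period x prospective registration). A self-contained sketch of that test follows; the cell counts are illustrative, since the abstract reports only percentages and the statistic, not the full table.

```python
# Pearson chi-square statistic (no continuity correction) for a 2x2
# table [[a, b], [c, d]], as used to compare registration proportions
# between publication periods. Counts below are illustrative only.

def chi2_2x2(a, b, c, d):
    """Return the chi-square statistic for observed counts a, b, c, d."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

print(round(chi2_2x2(40, 20, 30, 60), 2))  # → 16.07
```

With 1 degree of freedom, a statistic of 5.5 corresponds to P ≈ 0.02, consistent with the comparison reported above.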
Systematic review of the methodological and reporting quality of case series in surgery.
Agha, R A; Fowler, A J; Lee, S-Y; Gundogan, B; Whitehurst, K; Sagoo, H K; Jeong, K J L; Altman, D G; Orgill, D P
2016-09-01
Case series are an important and common study type. No guideline exists for reporting case series and there is evidence of key data being missed from such reports. The first step in the process of developing a methodologically sound reporting guideline is a systematic review of literature relevant to the reporting deficiencies of case series. A systematic review of methodological and reporting quality in surgical case series was performed. The electronic search strategy was developed by an information specialist and included MEDLINE, Embase, Cochrane Methods Register, Science Citation Index and Conference Proceedings Citation index, from the start of indexing to 5 November 2014. Independent screening, eligibility assessments and data extraction were performed. Included articles were then analysed for five areas of deficiency: failure to use standardized definitions, missing or selective data (including the omission of whole cases or important variables), transparency or incomplete reporting, whether alternative study designs were considered, and other issues. Database searching identified 2205 records. Through the process of screening and eligibility assessments, 92 articles met inclusion criteria. Frequencies of methodological and reporting issues identified were: failure to use standardized definitions (57 per cent), missing or selective data (66 per cent), transparency or incomplete reporting (70 per cent), whether alternative study designs were considered (11 per cent) and other issues (52 per cent). The methodological and reporting quality of surgical case series needs improvement. The data indicate that evidence-based guidelines for the conduct and reporting of case series may be useful. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
Yu, Dan-Dan; Xie, Yan-Ming; Liao, Xing; Zhi, Ying-Jie; Jiang, Jun-Jie; Chen, Wei
2018-02-01
To evaluate the methodological quality and reporting quality of randomized controlled trials (RCTs) published in China Journal of Chinese Materia Medica, we searched CNKI and the China Journal of Chinese Materia Medica webpage to collect RCTs since the establishment of the journal. The Cochrane risk of bias assessment tool was used to evaluate the methodological quality of RCTs. The CONSORT 2010 checklist was adopted as the reporting quality evaluation tool. Finally, 184 RCTs were included and evaluated methodologically, of which 97 RCTs were evaluated for reporting quality. For the methodological evaluation, 62 trials (33.70%) reported the random sequence generation; 9 (4.89%) trials reported allocation concealment; 25 (13.59%) trials adopted a method of blinding; 30 (16.30%) trials reported the number of patients withdrawing, dropping out, or lost to follow-up; 2 trials (1.09%) reported trial registration, and none reported a trial protocol; only 8 (4.35%) trials reported the sample size estimation in detail. For the reporting quality appraisal, 3 of 25 reporting items were rated high-quality: abstract, participant eligibility criteria, and statistical methods; 4 reporting items were rated medium-quality: purpose, intervention, random sequence method, and data collection sites and locations; 9 items were rated low-quality: title, background, random sequence type, allocation concealment, blinding, recruitment of subjects, baseline data, harms, and funding; the remaining items were of extremely low quality (compliance rate of the reporting item <10%). On the whole, the methodological and reporting quality of RCTs published in the journal is generally low. Further improvement in both methodological and reporting quality for RCTs of traditional Chinese medicine is warranted. It is recommended that international standards and procedures for RCT design be strictly followed to conduct high-quality trials. At the same time, to improve the reporting quality of randomized controlled trials, CONSORT standards should be adopted in the preparation of research reports and submissions. Copyright © by the Chinese Pharmaceutical Association.
Assessing quality of reports on randomized clinical trials in nursing journals.
Parent, Nicole; Hanley, James A
2009-01-01
Several surveys have presented the quality of reports on randomized clinical trials (RCTs) published in general and specialty medical journals. The aim of these surveys was to raise scientific consciousness of methodological aspects pertaining to internal and external validity. These reviews have suggested that the methodological quality could be improved. We conducted a survey of reports on RCTs published in nursing journals to assess their methodological quality. The features we considered included sample size, flow of participants, assessment of baseline comparability, randomization, blinding, and statistical analysis. We collected data from all reports of RCTs published between January 1994 and December 1997 in Applied Nursing Research, Heart & Lung and Nursing Research. We hand-searched the journals and included all 54 articles in which authors reported that individuals had been randomly allocated to distinct groups. We collected data using a condensed form of the Consolidated Standards of Reporting Trials (CONSORT) statement for structured reporting of RCTs (Begg et al., 1996). Sample size calculations were included in only 22% of the reports. Only 48% of the reports provided information about the type of randomization, and a mere 22% described blinding strategies. Comparisons of baseline characteristics using hypothesis tests were inappropriately performed in more than 76% of the reports. Excessive use and unstructured reporting of significance testing were common (59%), and all reports failed to provide the magnitude of treatment differences with confidence intervals. Better methodological quality in reports of RCTs will help to raise the standards of nursing research.
The role of reporting standards in producing robust literature reviews
NASA Astrophysics Data System (ADS)
Haddaway, Neal Robert; Macura, Biljana
2018-06-01
Literature reviews can help to inform decision-making, yet they may be subject to fatal bias if not conducted rigorously as 'systematic reviews'. Reporting standards help authors to provide sufficient methodological detail to allow verification and replication, clarifying when key steps, such as critical appraisal, have been omitted.
[Methods for evaluating diagnostic tests in Enfermedades Infecciosas y Microbiología Clínica].
Ramos, J M; Hernández, I
1998-04-01
In the field of infectious diseases and clinical microbiology, the evaluation of diagnostic tests (DT) is an important research area. The specific difficulties of this type of research mean that it has not attained the methodological rigour of other areas of clinical research. This article aims to assess and characterize the methodology of articles on DTs published in the journal Enfermedades Infecciosas y Microbiología Clínica (EIMC). Forty-five articles published in EIMC during the 1990-1996 period were selected because they determined the sensitivity and specificity of different DTs. Widely accepted methodological standards were applied. All articles except one (98%) specified the use of a gold standard; however, 4 studies (9%) included the DT within the gold standard (incorporation bias). A correct description of the DT was reported in 75% of cases, but the reproducibility of the test was evaluated in only 11%. The source of the reference population, the inclusion criteria, and the spectrum composition were described in 58%, 33% and 40% of articles, respectively. Workup bias was present in 33% of studies, only 6% reported blind analysis of results, and 11% reported indeterminate test results. Half of the studies reported test indexes for clinical subgroups, only one article (2%) provided numerical precision for test indexes, and only 7% reported receiver operating characteristic curves. The methodological quality of DT research in EIMC could be improved in several aspects of design and presentation of results.
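The test indexes evaluated above (sensitivity, specificity, and their precision) derive from a 2x2 table of test results against the gold standard. A minimal sketch with illustrative counts:

```python
# Standard diagnostic-test indexes from a 2x2 table versus a gold
# standard: tp/fp/fn/tn counts are illustrative, not from any study.

def diagnostic_indexes(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(diagnostic_indexes(tp=80, fp=10, fn=20, tn=90))
```

Reporting a confidence interval alongside each index (the "numerical precision" only one article provided) is what lets readers judge how much the estimates could vary by chance.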
NASA Technical Reports Server (NTRS)
1983-01-01
Standard descriptions for solar thermal power plants are established and uniform costing methodologies for nondevelopmental balance of plant (BOP) items are developed. The descriptions and methodologies developed are applicable to the major systems. These systems include the central receiver, parabolic dish, parabolic trough, hemispherical bowl, and solar pond. The standard plant is defined in terms of four categories comprising (1) solar energy collection, (2) power conversion, (3) energy storage, and (4) balance of plant. Each of these categories is described in terms of the type and function of components and/or subsystems within the category. A detailed description is given for the BOP category. BOP contains a number of nondevelopmental items that are common to all solar thermal systems. A standard methodology for determining the costs of these nondevelopmental BOP items is given. The methodology is presented in the form of cost equations involving cost factors such as unit costs. A set of baseline values for the normalized cost factors is also given.
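The costing methodology above expresses nondevelopmental BOP costs as equations built from normalized unit-cost factors. As an illustrative sketch only (the item names, unit costs, and indirect-cost factor below are hypothetical, not values from the report), such a cost equation might look like:

```python
# Sketch of a BOP-style cost equation: total cost as a sum of
# unit-cost factors times quantities, scaled by an indirect-cost
# multiplier. All inputs are hypothetical placeholders.

def bop_cost(items, indirect_factor=1.0):
    """items: iterable of (unit_cost, quantity) pairs; returns dollars."""
    direct = sum(unit_cost * qty for unit_cost, qty in items)
    return direct * indirect_factor

items = [(25.0, 1000), (3.5, 20000)]  # e.g. $/m of piping, $/m of wiring
print(bop_cost(items, indirect_factor=1.3))
```

Keeping the unit costs as separate factors, as the report's methodology does, lets the same equations be reused across central receiver, dish, trough, bowl, and pond plants by substituting plant-specific quantities.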
Experimental uncertainty and drag measurements in the national transonic facility
NASA Technical Reports Server (NTRS)
Batill, Stephen M.
1994-01-01
This report documents the results of a study which was conducted in order to establish a framework for the quantitative description of the uncertainty in measurements conducted in the National Transonic Facility (NTF). The importance of uncertainty analysis in both experiment planning and reporting results has grown significantly in the past few years. Various methodologies have been proposed and the engineering community appears to be 'converging' on certain accepted practices. The practical application of these methods to the complex wind tunnel testing environment at the NASA Langley Research Center was based upon terminology and methods established in the American National Standards Institute (ANSI) and the American Society of Mechanical Engineers (ASME) standards. The report overviews this methodology.
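Frameworks in the ANSI/ASME tradition cited above combine component uncertainties in quadrature (root-sum-square). As a hedged illustration, for a quantity that is a pure product/quotient of measured inputs, such as a drag coefficient C_D = D / (q * S), the relative component uncertainties combine directly; all values below are assumed for the example, not taken from NTF measurements.

```python
import math

# Root-sum-square propagation of relative uncertainties for
# C_D = D / (q * S): drag force D, dynamic pressure q, reference
# area S. Input values and uncertainties are illustrative only.

def rel_uncertainty_cd(D, uD, q, uq, S, uS):
    """Relative uncertainty of C_D for a pure product/quotient form."""
    return math.sqrt((uD / D) ** 2 + (uq / q) ** 2 + (uS / S) ** 2)

u = rel_uncertainty_cd(D=100.0, uD=1.0, q=50.0, uq=0.5, S=2.0, uS=0.02)
print(f"{100 * u:.2f}% relative uncertainty in C_D")  # → 1.73% relative uncertainty in C_D
```

Three independent 1% component uncertainties combine to about 1.73% (a factor of sqrt(3)), which is why identifying the dominant error source matters more than polishing the smallest one.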
Software production methodology tested project
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
The history and results of a 3 1/2-year study in software development methodology are reported. The findings of this study have become the basis for DSN software development guidelines and standard practices. The article discusses accomplishments, discoveries, problems, recommendations and future directions.
Methodology issues in implementation science.
Newhouse, Robin; Bobay, Kathleen; Dykes, Patricia C; Stevens, Kathleen R; Titler, Marita
2013-04-01
Putting evidence into practice at the point of care delivery requires an understanding of implementation strategies that work, in what context and how. To identify methodological issues in implementation science using 4 studies as cases and make recommendations for further methods development. Four cases are presented and methodological issues identified. For each issue raised, evidence on the state of the science is described. Issues in implementation science identified include diverse conceptual frameworks, potential weaknesses in pragmatic study designs, and the paucity of standard concepts and measurement. Recommendations to advance methods in implementation include developing a core set of implementation concepts and metrics, generating standards for implementation methods including pragmatic trials, mixed methods designs, complex interventions and measurement, and endorsing reporting standards for implementation studies.
Reiner, Bruce I
2018-04-01
Uncertainty in text-based medical reports has long been recognized as problematic, frequently resulting in misunderstanding and miscommunication. One strategy for addressing the negative clinical ramifications of report uncertainty would be the creation of a standardized methodology for characterizing and quantifying uncertainty language, which could provide both the report author and reader with context related to the perceived level of diagnostic confidence and accuracy. A number of computerized strategies could be employed in the creation of this analysis including string search, natural language processing and understanding, histogram analysis, topic modeling, and machine learning. The derived uncertainty data offers the potential to objectively analyze report uncertainty in real time and correlate with outcomes analysis for the purpose of context and user-specific decision support at the point of care, where intervention would have the greatest clinical impact.
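Of the computerized strategies listed above, string search is the simplest to sketch: count occurrences of hedging phrases and normalize by report length. The phrase list and sample report below are hypothetical examples, not a validated uncertainty lexicon.

```python
# Minimal string-search sketch of report-uncertainty scoring: count
# hedging phrases and normalize per 100 words. HEDGES is a made-up
# example list, not a validated clinical lexicon.

HEDGES = ("may represent", "cannot exclude", "possibly",
          "suspicious for", "likely", "equivocal")

def uncertainty_score(report_text):
    """Hedging-phrase hits per 100 words of report text."""
    text = report_text.lower()
    n_words = len(text.split())
    hits = sum(text.count(h) for h in HEDGES)
    return 100.0 * hits / n_words if n_words else 0.0

report = ("Opacity in the right lower lobe, possibly atelectasis; "
          "cannot exclude early pneumonia. Heart size likely normal.")
print(round(uncertainty_score(report), 1))
```

A normalized score like this could be tracked per author or correlated with outcomes, which is the kind of context-specific decision support the article envisions; more robust versions would use NLP to handle negation and phrase variants.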
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Institute for Materials Research (IMR), one of the major organizational units of the National Bureau of Standards, conducts research to provide a better understanding of the basic properties of materials and develops methodology and standards for measuring their properties to help ensure effective utilization of technologically important materials by the nation's scientific, commercial, and industrial communities. This report covers activities of the Institute during the 12 months preceding the Panel meeting on January 26-27, 1976.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Institute for Materials Research (IMR), one of the major organizational units of the National Bureau of Standards, conducts research to provide a better understanding of the basic properties of materials and develops methodology and standards for measuring their properties to help ensure effective utilization of technologically important materials by the nation's scientific, commercial, and industrial communities. This report covers activities of the Institute during the 12 months preceding the Panel meeting on January 25-26, 1977.
Appendix B: Methodology. [2014 Teacher Prep Review
ERIC Educational Resources Information Center
Greenberg, Julie; Walsh, Kate; McKee, Arthur
2014-01-01
The "NCTQ Teacher Prep Review" evaluates the quality of programs that provide preservice preparation of public school teachers. This appendix describes the scope, methodology, timeline, staff, and standards involved in the production of "Teacher Prep Review 2014." Data collection, validation, and analysis for the report are…
Ashton, Carol M; Wray, Nelda P; Jarman, Anna F; Kolman, Jacob M; Wenner, Danielle M; Brody, Baruch A
2013-01-01
Background: If trials of therapeutic interventions are to serve society's interests, they must be of high methodological quality and must satisfy moral commitments to human subjects. The authors set out to develop a clinical-trials compendium in which standards for the ethical treatment of human subjects are integrated with standards for research methods. Methods: The authors rank-ordered the world's nations and chose the 31 with >700 active trials as of 24 July 2008. Governmental and other authoritative entities of the 31 countries were searched, and 1004 English-language documents containing ethical and/or methodological standards for clinical trials were identified. The authors extracted standards from 144 of those: 50 designated as 'core', 39 addressing trials of invasive procedures and a 5% sample (N=55) of the remainder. As the integrating framework for the standards we developed a coherent taxonomy encompassing all elements of a trial's stages. Findings: Review of the 144 documents yielded nearly 15 000 discrete standards. After duplicates were removed, 5903 substantive standards remained, distributed in the taxonomy as follows: initiation, 1401 standards, 8 divisions; design, 1869 standards, 16 divisions; conduct, 1473 standards, 8 divisions; analysing and reporting results, 997 standards, 4 divisions; and post-trial standards, 168 standards, 5 divisions. Conclusions: The overwhelming number of source documents and standards uncovered in this study was not anticipated beforehand and confirms the extraordinary complexity of the clinical trials enterprise. This taxonomy of multinational ethical and methodological standards may help trialists and overseers improve the quality of clinical trials, particularly given the globalisation of clinical research. PMID:21429960
Magness, Scott T.; Puthoff, Brent J.; Crissey, Mary Ann; Dunn, James; Henning, Susan J.; Houchen, Courtney; Kaddis, John S.; Kuo, Calvin J.; Li, Linheng; Lynch, John; Martin, Martin G.; May, Randal; Niland, Joyce C.; Olack, Barbara; Qian, Dajun; Stelzner, Matthias; Swain, John R.; Wang, Fengchao; Wang, Jiafang; Wang, Xinwei; Yan, Kelley; Yu, Jian
2013-01-01
Fluorescence-activated cell sorting (FACS) is an essential tool for studies requiring isolation of distinct intestinal epithelial cell populations. Inconsistent or lack of reporting of the critical parameters associated with FACS methodologies has complicated interpretation, comparison, and reproduction of important findings. To address this problem a comprehensive multicenter study was designed to develop guidelines that limit experimental and data reporting variability and provide a foundation for accurate comparison of data between studies. Common methodologies and data reporting protocols for tissue dissociation, cell yield, cell viability, FACS, and postsort purity were established. Seven centers tested the standardized methods by FACS-isolating a specific crypt-based epithelial population (EpCAM+/CD44+) from murine small intestine. Genetic biomarkers for stem/progenitor (Lgr5 and Atoh1) and differentiated cell lineages (lysozyme, mucin2, chromogranin A, and sucrase isomaltase) were interrogated in target and control populations to assess intra- and intercenter variability. Wilcoxon's rank sum test on gene expression levels showed limited intracenter variability between biological replicates. Principal component analysis demonstrated significant intercenter reproducibility among four centers. Analysis of data collected by standardized cell isolation methods and data reporting requirements readily identified methodological problems, indicating that standard reporting parameters facilitate post hoc error identification. These results indicate that the complexity of FACS isolation of target intestinal epithelial populations can be highly reproducible between biological replicates and different institutions by adherence to common cell isolation methods and FACS gating strategies.
This study can be considered a foundation for continued method development and a starting point for investigators that are developing cell isolation expertise to study physiology and pathophysiology of the intestinal epithelium. PMID:23928185
ERIC Educational Resources Information Center
Greenberg, Julie; Walsh, Kate; McKee, Arthur
2014-01-01
The "NCTQ Teacher Prep Review" evaluates the quality of programs that provide preservice preparation of public school teachers. As part of the "Review," this appendix reports on a pilot study of new standards for assessing the quality of alternative certification programs. Background and methodology for alternative…
School Climate Reports from Norwegian Teachers: A Methodological and Substantive Study.
ERIC Educational Resources Information Center
Kallestad, Jan Helge; Olweus, Dan; Alsaker, Francoise
1998-01-01
Explores methodological and substantive issues relating to school climate, using a dataset derived from 42 Norwegian schools at two points of time and a standard definition of organizational climate. Identifies and analyzes four school-climate dimensions. Three dimensions (collegial communication, orientation to change, and teacher influence over…
Public Relations Telephone Surveys: Avoiding Methodological Debacles.
ERIC Educational Resources Information Center
Stone, Gerald C.
1996-01-01
Reports that a study revealed a serious methodological flaw in interviewer bias in telephone surveys. States that most surveys, using standard detection measures, would not find the defect, but outcomes were so misleading that a campaign using the results would be doomed. Warns about practitioner telephone surveys; suggests special precautions if…
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature (stringent statistical standards, use of large samples, and replication of findings) are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings.
Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
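The permutation approach recommended above can be sketched briefly. The sample size, number of phenotypes, and data below are all hypothetical; the key point is that permuting genotype labels while leaving the phenotype matrix intact preserves the correlations among phenotypes, so the family-wise correction is less conservative than treating each phenotype as independent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n subjects, one biallelic genotype (0/1/2 allele count)
# and several correlated phenotypes (here simply random, for illustration).
n, n_pheno = 200, 5
genotype = rng.integers(0, 3, size=n)
phenotypes = rng.standard_normal((n, n_pheno))

def max_abs_corr(g, ph):
    # Largest absolute genotype-phenotype correlation across all phenotypes.
    g_c = (g - g.mean()) / g.std()
    ph_c = (ph - ph.mean(axis=0)) / ph.std(axis=0)
    return np.abs(g_c @ ph_c / len(g)).max()

observed = max_abs_corr(genotype, phenotypes)

# Permutation null: shuffling genotype labels breaks any genotype-phenotype
# association but keeps the correlation structure among phenotypes, so the
# max-statistic correction accounts for that structure automatically.
n_perm = 2000
null = np.array([max_abs_corr(rng.permutation(genotype), phenotypes)
                 for _ in range(n_perm)])
p_corrected = (1 + np.sum(null >= observed)) / (1 + n_perm)
```

With real data, the same scheme extends to correcting over genotypes, genetic models, and subsamples by taking the maximum statistic over all of them within each permutation.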
ERIC Educational Resources Information Center
Alves, Paulo; Uhomoibhi, James
2010-01-01
Purpose: This paper seeks to investigate and report on the status of identity management systems and e-learning standards across Europe for promoting mobility, collaboration and the sharing of contents and services in higher education institutions. Design/methodology/approach: The present research work examines existing e-learning standards and…
Pantex Falling Man - Independent Review Panel Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolini, Louis; Brannon, Nathan; Olson, Jared
2014-11-01
Consolidated Nuclear Security (CNS) Pantex took the initiative to organize a Review Panel of subject matter experts to independently assess the adequacy of the Pantex Tripping Man Analysis methodology. The purpose of this report is to capture the details of the assessment, including the scope, approach, results, and detailed Appendices. Along with the assessment of the analysis methodology, the panel evaluated the adequacy with which the methodology was applied as well as congruence with Department of Energy (DOE) standards 3009 and 3016. The approach included the review of relevant documentation, interactive discussion with Pantex staff, and the iterative process of evaluating critical lines of inquiry.
Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.
2013-01-01
Objective: Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method: A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of axis I disorders were evaluated using a 14-item methodological quality index. Results: Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions: While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to that required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689
Methodologic ramifications of paying attention to sex and gender differences in clinical research.
Prins, Martin H; Smits, Kim M; Smits, Luc J
2007-01-01
Methodologic standards for studies on sex and gender differences should be developed to improve the reporting of studies and facilitate their inclusion in systematic reviews. The essence of these studies lies within the concept of effect modification. This article reviews important methodologic issues in the design and reporting of pharmacogenetic studies. Differences in effect based on sex or gender should preferably be expressed in absolute terms (risk differences) to facilitate clinical decisions on treatment. Information on the distribution of potential effect modifiers or prognostic factors should be available to prevent a biased comparison of differences in effect between genotypes. Other considerations include the possibility of selective nonavailability of biomaterial and the choice of a statistical model to study effect modification. To ensure high study quality, these additional methodologic issues should be taken into account when designing and reporting studies on sex and gender differences.
Methodological strategies in using home sleep apnea testing in research and practice.
Miller, Jennifer N; Schulz, Paula; Pozehl, Bunny; Fiedler, Douglas; Fial, Alissa; Berger, Ann M
2017-11-14
Home sleep apnea testing (HSAT) has increased due to improvements in technology, accessibility, and changes in third-party reimbursement requirements. Research studies using HSAT have not consistently reported procedures and methodological challenges. This paper had two objectives: (1) summarize the literature on the use of HSAT in research on adults and (2) identify methodological strategies for use in research and practice to standardize HSAT procedures and information. The search included studies of participants undergoing sleep testing for obstructive sleep apnea (OSA) using HSAT; MEDLINE (via PubMed), CINAHL, and Embase were searched with the following terms: "polysomnography," "home," "level III," "obstructive sleep apnea," and "out of center testing." Research articles that met inclusion criteria (n = 34) inconsistently reported methods and methodological challenges in terms of: (a) participant sampling; (b) instrumentation issues; (c) clinical variables; (d) data processing; and (e) patient acceptability. Ten methodological strategies were identified for adoption when using HSAT in research and practice. Future studies need to address the methodological challenges summarized in this paper as well as identify and report consistent HSAT procedures and information.
Applications of cost-effectiveness methodologies in behavioral medicine.
Kaplan, Robert M; Groessl, Erik J
2002-06-01
In 1996, the Panel on Cost-Effectiveness in Health and Medicine developed standards for cost-effectiveness analysis. The standards include adopting a societal perspective, evaluating treatments against the best available alternative (rather than against no care at all), and expressing health benefits in standardized units. Guidelines for cost accounting were also offered. Among 24,562 references on cost-effectiveness in Medline between 1995 and 2000, only a handful were relevant to behavioral medicine. Only 19 studies published between 1983 and 2000 met criteria for further evaluation. Among the analyses that were reported, only 2 studies were found consistent with the Panel's criteria for high-quality analyses, although more recent studies were more likely to meet methodological standards. There are substantial opportunities to advance behavioral medicine by performing standardized cost-effectiveness analyses.
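Expressed in standardized units, the Panel's comparative requirement reduces to an incremental cost-effectiveness ratio (ICER). The sketch below uses entirely hypothetical costs and quality-adjusted life years (QALYs); it only illustrates the arithmetic, not any particular study.

```python
def icer(cost_new, effect_new, cost_alt, effect_alt):
    """Incremental cost per additional unit of standardized health benefit
    (e.g., dollars per QALY) of a treatment versus the best available
    alternative, as the Panel's standards require."""
    return (cost_new - cost_alt) / (effect_new - effect_alt)

# Hypothetical behavioral program: $2,400 and 1.12 QALYs, versus usual care
# at $1,200 and 1.05 QALYs.
cost_per_qaly = icer(2400, 1.12, 1200, 1.05)  # about $17,143 per QALY
```

Comparing against the best available alternative (rather than no care) matters here: a cheaper, weaker comparator shrinks the denominator less than the numerator and can make a treatment look misleadingly attractive.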
Sensitivity assessment of sea lice to chemotherapeutants: Current bioassays and best practices.
Marín, S L; Mancilla, J; Hausdorf, M A; Bouchard, D; Tudor, M S; Kane, F
2017-12-18
Traditional bioassays are still necessary to test the sensitivity of sea lice species to chemotherapeutants, but the methodology applied by different researchers has varied over time with respect to that proposed in "Sea lice resistance to chemotherapeutants: A handbook in resistance management" (2006). These divergences motivated the organization of a workshop, "Standardization of traditional bioassay process by sharing best practices," during the Sea Lice 2016 conference. The attendees agreed to update the handbook. The objective of this article is to provide a baseline analysis of the methodology for traditional bioassays and to identify procedures that need to be addressed to standardize the protocol. The methodology was divided into the following steps: bioassay design; material and equipment; sea lice collection, transportation and laboratory reception; preparation of dilutions; parasite exposure; response evaluation; data analysis; and reporting. Information from the workshop presentations, and also from other studies, allowed for the identification of procedures within a given step that need to be standardized, as they were reported to be performed differently by the different working groups. Bioassay design and response evaluation were the steps in which the most procedures need to be analysed and agreed upon. © 2017 John Wiley & Sons Ltd.
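A minimal sketch of the data-analysis step, under assumed data: the concentrations and response proportions below are invented, and the log-linear interpolation stands in for the probit or logistic dose-response fits typically used to estimate an EC50 in these bioassays.

```python
import numpy as np

# Hypothetical bioassay results: proportion of sea lice affected at each
# bath concentration (ppb), pooled across replicates.
conc_ppb = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
prop_affected = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

# EC50 by interpolating on log10(concentration); a stand-in for a full
# probit/logit fit, valid here only because the response is monotonic.
log_ec50 = np.interp(0.5, prop_affected, np.log10(conc_ppb))
ec50 = 10.0 ** log_ec50  # falls between the 100 and 300 ppb doses
```

Divergences in any upstream step (collection, dilution preparation, exposure time, response scoring) shift these proportions, which is why the workshop targeted standardization before results from different groups can be compared.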
Fidalgo, Bruno M R; Crabb, David P; Lawrenson, John G
2015-05-01
To evaluate the methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. A systematic review of English-language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool and evaluated for quality of reporting by applying the STARD checklist. Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate, with a median of nine QUADAS items (out of a maximum of 14) rated 'yes' (IQR 7-10). The studies were often poorly reported; the median number of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in the 10-year periods before and after the publication of the STARD checklist in 2003 found that quality of reporting had not substantially improved. The methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
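The core quantities such studies report (and that STARD asks to be stated with their uncertainty) come from a 2x2 table of index test result against the reference standard. The counts below are invented for illustration.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of index test result
    versus reference standard."""
    sensitivity = tp / (tp + fn)   # true-positive rate among diseased
    specificity = tn / (tn + fp)   # true-negative rate among healthy
    return sensitivity, specificity

# Hypothetical perimetry study: 80 glaucoma eyes and 120 healthy eyes.
sens, spec = diagnostic_accuracy(tp=68, fp=18, fn=12, tn=102)  # 0.85, 0.85
```

Items such as how indeterminate results and the 2x2 counts themselves are reported are exactly what QUADAS and STARD assess; when the table cannot be reconstructed from a paper, its accuracy estimates cannot be verified or pooled.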
50 CFR 648.18 - Standardized bycatch reporting methodology.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES General...: Atlantic Bluefish; Atlantic Herring; Atlantic Salmon; Deep-Sea Red Crab; Mackerel, Squid, and Butterfish...
High-Penetration Photovoltaic Planning Methodologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to provide an overview of select U.S. utility methodologies for performing high-penetration photovoltaic (HPPV) system planning and impact studies. This report covers the Federal Energy Regulatory Commission's orders related to photovoltaic (PV) power system interconnection, particularly the interconnection processes for the Large Generation Interconnection Procedures and Small Generation Interconnection Procedures. In addition, it includes U.S. state interconnection standards and procedures. The procedures used by these regulatory bodies consider the impacts of HPPV power plants on the networks. Technical interconnection requirements for HPPV voltage regulation include aspects of power monitoring, grounding, synchronization, connection to the overall distribution system, back-feeds, disconnecting means, abnormal operating conditions, and power quality. This report provides a summary of mitigation strategies to minimize the impact of HPPV. Recommendations and revisions to the standards may take place as the penetration level of renewables on the grid increases and new technologies develop in future years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Arvind S.
2001-03-05
A new methodology to predict the Upper Shelf Energy (USE) of standard (full-size) Charpy specimens based on subsize specimens has been developed. The prediction methodology uses Finite Element Modeling (FEM) to model the fracture behavior. The inputs to the FEM are the tensile properties of the material and subsize Charpy specimen test data.
Hiligsmann, Mickaël; Cooper, Cyrus; Guillemin, Francis; Hochberg, Marc C; Tugwell, Peter; Arden, Nigel; Berenbaum, Francis; Boers, Maarten; Boonen, Annelies; Branco, Jaime C; Brandi, Maria-Luisa; Bruyère, Olivier; Gasparik, Andrea; Kanis, John A; Kvien, Tore K; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; Pinedo-Villanueva, Rafael; Pinto, Daniel; Reiter-Niesert, Susanne; Rizzoli, René; Rovati, Lucio C; Severens, Johan L; Silverman, Stuart; Reginster, Jean-Yves
2014-12-01
General recommendations for a reference case for economic studies in rheumatic diseases were published in 2002 in an initiative to improve the comparability of cost-effectiveness studies in the field. Since then, economic evaluations in osteoarthritis (OA) continue to show considerable heterogeneity in methodological approach. To develop a reference case specific for economic studies in OA, including the standard optimal care, with which to judge new pharmacologic and non-pharmacologic interventions. Four subgroups of an ESCEO expert working group on economic assessments (13 experts representing diverse aspects of clinical research and/or economic evaluations) were charged with producing lists of recommendations that would potentially improve the comparability of economic analyses in OA: outcome measures, comparators, costs and methodology. These proposals were discussed and refined during a face-to-face meeting in 2013. They are presented here in the format of the recommendations of the recently published Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement, so that an initiative on economic analysis methodology might be consolidated with an initiative on reporting standards. Overall, three distinct reference cases are proposed, one for each hand, knee and hip OA; with diagnostic variations in the first two, giving rise to different treatment options: interphalangeal or thumb-based disease for hand OA and the presence or absence of joint malalignment for knee OA. A set of management strategies is proposed, which should be further evaluated to help establish a consensus on the "standard optimal care" in each proposed reference case. The recommendations on outcome measures, cost itemisation and methodological approaches are also provided. 
The ESCEO group proposes a set of disease-specific recommendations on the conduct and reporting of economic evaluations in OA that could help the standardisation and comparability of studies that evaluate therapeutic strategies of OA in terms of costs and effectiveness. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Texas. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
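As a rough sketch of the Scenario 1 arithmetic (publicly-owned buildings, no borrowing or taxes), life-cycle cost-effectiveness reduces to comparing the present value of the annual energy savings against the added first cost. The savings, cost, discount rate, and study period below are hypothetical, not values from these reports.

```python
def lcc_net_savings(annual_savings, added_first_cost,
                    discount_rate=0.03, years=30):
    """Net life-cycle savings per square foot: present value of a constant
    annual energy cost saving minus the added construction cost."""
    # Present-value factor for a constant annuity over the study period.
    pv_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return annual_savings * pv_factor - added_first_cost

# $0.05/sq ft annual savings against $0.60/sq ft added construction cost:
net = lcc_net_savings(0.05, 0.60)  # positive, so the upgrade pays off
```

Scenario 2 layers borrowing costs and tax impacts onto the same cash flows, so its result differs even with identical energy savings.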
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Minnesota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Minnesota. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Indiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Indiana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Florida. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Maine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Maine. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Vermont. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Michigan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Michigan. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Alabama. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Hampshire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Hampshire. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Mexico. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Colorado
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Colorado. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Washington
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Washington. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Montana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the District of Columbia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the District of Columbia. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Massachusetts. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Oregon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Oregon. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Wisconsin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Wisconsin. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Ohio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Ohio. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of South Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of South Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of North Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of North Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Iowa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Iowa. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Methodological integrative review of the work sampling technique used in nursing workload research.
Blay, Nicole; Duffield, Christine M; Gallagher, Robyn; Roche, Michael
2014-11-01
To critically review the work sampling technique used in nursing workload research. Work sampling is a technique frequently used by researchers and managers to explore and measure nursing activities. However, the work sampling methods used are diverse, making comparisons of results between studies difficult. Methodological integrative review. Four electronic databases were systematically searched for peer-reviewed articles published between 2002 and 2012. Manual scanning of reference lists and Rich Site Summary feeds from contemporary nursing journals were other sources of data. Articles published in the English language between 2002 and 2012 reporting on research that used work sampling to examine nursing workload. Eighteen articles were reviewed. The review identified that the work sampling technique lacks a standardized approach, which may have an impact on the sharing or comparison of results. Specific areas needing a shared understanding included the training of observers and subjects who self-report, standardization of the techniques used to assess observer inter-rater reliability, sampling methods and reporting of outcomes. Work sampling is a technique that can be used to explore the many facets of nursing work. Standardized reporting measures would enable greater comparison between studies and contribute to knowledge more effectively. Author suggestions for the reporting of results may act as guidelines for researchers considering work sampling as a research method. © 2014 John Wiley & Sons Ltd.
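The precision of a work-sampling estimate follows directly from binomial sampling theory, which is one reason a standardized approach matters. A minimal sketch (the activity labels, observation counts, and precision margin below are hypothetical, and the normal approximation is the textbook simplification, not any specific study's method):

```python
import math

def activity_proportion(observations, activity, z=1.96):
    """Point estimate and approximate 95% CI for the share of sampled
    moments spent on one activity (normal approximation to the binomial)."""
    n = len(observations)
    k = observations.count(activity)
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

def required_samples(p_expected, margin, z=1.96):
    """Observations needed so the CI half-width is no wider than `margin`."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# Hypothetical shift: 30 of 100 sampled moments were direct care.
p, lower, upper = activity_proportion(
    ["direct care"] * 30 + ["documentation"] * 70, "direct care")
```

For example, `required_samples(0.5, 0.05)` gives 385 observations, the familiar worst-case sample size for a ±5% margin, which illustrates why under-sampled studies are hard to compare.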
Zhai, Xiao; Wang, Yiran; Mu, Qingchun; Chen, Xiao; Huang, Qin; Wang, Qijin; Li, Ming
2015-07-01
To appraise the current methodological reporting quality of randomized clinical trials (RCTs) in 3 leading diabetes journals. We systematically searched the literature for RCTs in Diabetes Care, Diabetes and Diabetologia from 2011 to 2013. Characteristics were extracted based on the Consolidated Standards of Reporting Trials (CONSORT) statement. Generation of allocation, concealment of allocation, intention-to-treat (ITT) analysis and handling of dropouts were defined as primary outcomes and "low risk of bias." Sample size calculation, type of intervention, country, number of patients, and funding source were also recorded and descriptively reported. Trials were compared among journals, study years, and other characteristics. A total of 305 RCTs were included in this study. One hundred eight (35.4%) trials reported adequate generation of allocation, 87 (28.5%) trials reported adequate concealment of allocation, 53 (23.8%) trials used ITT analysis, and 130 (58.3%) trials were adequate in handling of dropouts. Only 15 (4.9%) were "low risk of bias" trials. Studies at a large scale (n > 100) or from Europe presented more "low risk of bias" trials than those at a small scale (n ≤ 100) or from other regions. No improvements were found in these 3 years. This study shows that methodological reporting quality of RCTs in the major diabetes journals remains suboptimal. It can be further improved to meet and keep up with the standards of the CONSORT statement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai
2008-07-15
This report presents a methodology developed to study the future impact of wind on BPA power system load following and regulation requirements. The methodology uses historical data and stochastic processes to simulate the load balancing processes in the BPA power system, by mimicking actual power system operations. Therefore, the results are close to reality, yet a study based on this methodology is convenient to conduct. Existing methodologies for similar analyses include dispatch model simulation and standard deviation evaluation on load and wind data. Dispatch model simulation is constrained by the design of the dispatch program, and standard deviation evaluation is artificial in separating the load following and regulation requirements; neither usually reflects actual operational practice. The methodology used in this study provides not only capacity requirement information but also an analysis of the ramp rate requirements for system load following and regulation processes. The ramp rate data can be used to evaluate generator response/maneuverability requirements, another capability the generation fleet needs for the smooth integration of wind energy. The study results are presented in an innovative way such that the increased generation capacity or ramp requirements are compared for two different years, across 24 hours a day. Therefore, the impact of different levels of wind energy on generation requirements at different times can be easily visualized.
Hajibandeh, S; Hajibandeh, S; Antoniou, G A; Green, P A; Maden, M; Torella, F
2015-11-01
Randomised controlled trials (RCTs) are subject to bias if they lack methodological quality. Moreover, optimal and transparent reporting of RCT findings aids their critical appraisal and interpretation. The aim of this study was to ascertain whether the methodological and reporting quality of RCTs in vascular and endovascular surgery is improving. The most recent 75 and oldest 75 RCTs published in leading journals over a 10-year period (2003-2012) were identified. The reporting quality and methodological quality data of the old and new RCTs were extracted and compared. The former was analysed using the Consolidated Standards of Reporting Trials (CONSORT) statement, the latter with the Scottish Intercollegiate Guidelines Network (SIGN) checklist. Reporting quality measured by CONSORT was better in the new studies than in the old studies (0.68 [95% CI, 0.66-0.7] vs. 0.60 [95% CI, 0.58-0.62], p < .001); however, both new and old studies had similar methodological quality measured by SIGN (0.9 [IQR 0.1] vs. 0.9 [IQR 0.2], p = .787). Unlike clinical items, the methodological items of the CONSORT statement were not well reported in old and new RCTs. More trials in the new group were endovascular related (33.33% vs. 17.33%, p = .038) and industry sponsored (28% vs. 6.67%, p = .001). Despite some progress, there remains room for improvement in the reporting quality of RCTs in vascular and endovascular surgery. The methodological quality of recent RCTs is similar to that of trials performed >10 years ago. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.
This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow for GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by progression of projects or geological potential for development.
Chan, Leighton; Heinemann, Allen W; Roberts, Jason
2014-01-01
Note from the AJOT Editor-in-Chief: Since 2010, the American Journal of Occupational Therapy (AJOT) has adopted reporting standards based on the Consolidated Standards of Reporting Trials (CONSORT) Statement and American Psychological Association (APA) guidelines in an effort to publish transparent clinical research that can be easily evaluated for methodological and analytical rigor (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Moher, Schulz, & Altman, 2001). AJOT has now joined 28 other major rehabilitation and disability journals in a collaborative initiative to enhance clinical research reporting standards through adoption of the EQUATOR Network reporting guidelines, described below. Authors will now be required to use these guidelines in the preparation of manuscripts that will be submitted to AJOT. Reviewers will also use these guidelines to evaluate the quality and rigor of all AJOT submissions. By adopting these standards we hope to further enhance the quality and clinical applicability of articles to our readers. Copyright © 2014 by the American Occupational Therapy Association, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... Drive, East Falmouth, MA 02536. A list of approved observer service providers shall be distributed to... Bycatch Reporting Methodology Regulations AGENCY: National Marine Fisheries Service (NMFS), National... fisheries. This action also makes changes to the regulations regarding observer service provider approval...
2011-01-01
Background It was still unclear whether the methodological reporting quality of randomized controlled trials (RCTs) in major hepato-gastroenterology journals improved after the Consolidated Standards of Reporting Trials (CONSORT) Statement was revised in 2001. Methods RCTs in five major hepato-gastroenterology journals published in 1998 or 2008 were retrieved from MEDLINE using a high sensitivity search method and their reporting quality of methodological details were evaluated based on the CONSORT Statement and Cochrane Handbook for Systematic Reviews of Interventions. Changes of the methodological reporting quality between 2008 and 1998 were calculated by risk ratios with 95% confidence intervals. Results A total of 107 RCTs published in 2008 and 99 RCTs published in 1998 were found. Compared to those in 1998, the proportion of RCTs that reported sequence generation (RR, 5.70; 95%CI 3.11-10.42), allocation concealment (RR, 4.08; 95%CI 2.25-7.39), sample size calculation (RR, 3.83; 95%CI 2.10-6.98), incomplete outcome data addressed (RR, 1.81; 95%CI, 1.03-3.17), and intention-to-treat analyses (RR, 3.04; 95%CI 1.72-5.39) increased in 2008. Blinding and intention-to-treat analysis were reported better in multi-center trials than in single-center trials. The reporting of allocation concealment and blinding were better in industry-sponsored trials than in public-funded trials. Compared with historical studies, the methodological reporting quality improved with time. Conclusion Although the reporting of several important methodological aspects improved in 2008 compared with 1998, which may indicate that researchers had increased awareness of and compliance with the revised CONSORT statement, some items were still poorly reported. There is much room for future improvement. PMID:21801429
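Risk ratios with confidence intervals of the kind quoted above can be reproduced from 2x2 counts with the standard log-scale (Katz) interval. The sketch below is generic, and the counts in the example are hypothetical, not taken from the study:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of two proportions with an approximate 95% CI.
    The standard error is computed on the log scale (Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical counts: 40/100 trials adequate on an item in the later
# period vs. 20/100 in the earlier period.
rr, lower, upper = risk_ratio_ci(40, 100, 20, 100)
```

An interval whose lower bound exceeds 1 (as in the sequence-generation RR above) indicates a statistically significant improvement between the two periods.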
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Arizona. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Hawaii. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Connecticut. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Toward a new culture in verified quantum operations
NASA Astrophysics Data System (ADS)
Flammia, Steve
Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey the challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and towards a culture that values full disclosure of methodologies and higher standards for data analysis.
A methodology for highway asset valuation in Indiana.
DOT National Transportation Integrated Search
2012-11-01
The Governmental Accounting Standards Board (GASB) requires transportation agencies to report the values of their tangible assets. Numerous valuation methods exist, which use different underlying concepts and data items. These traditional methods have...
Standardized and Repeatable Technology Evaluation for Cybersecurity Acquisition
2017-02-01
methodology for evaluating cybersecurity technologies. In this report, we introduce the Department of Defense (DoD)-centric and Independent Technology Evaluation Capability (DITEC), an experimental decision support service within the U.S. DoD which aims to provide a standardized framework for... 5.3.1 The Technology Matching Tool: A Recommender System for Security Non-Experts
Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature
Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.
2014-01-01
In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145
A normative price for a manufactured product: The SAMICS methodology. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1979-01-01
This executive summary of the Solar Array Manufacturing Industry Costing Standards report contains a discussion of capabilities and limitations, a non-technical overview of the methodology, and a description of the input data that must be collected. It also describes the activities that were and are being taken to ensure validity of the results and contains an up-to-date bibliography of related documents.
Risk Metrics for Android (trademark) Devices
2017-02-01
allows for easy distribution of malware. This report surveys malware distribution methodologies, then describes current work being done to determine the...given a standard weight of wi = 1. Two data sets were used for testing this methodology. Because the authors are Chinese, they chose to download apps...Order Analysis excels at handling non-obfuscated apps, but may not be able to detect malware that employs encryption or dynamically changes its payload
Methodological Issues in Trials of Complementary and Alternative Medicine Interventions
Sikorskii, Alla; Wyatt, Gwen; Victorson, David; Faulkner, Gwen; Rahbar, Mohammad Hossein
2010-01-01
Background Complementary and alternative medicine (CAM) use is widespread among cancer patients. Information on safety and efficacy of CAM therapies is needed for both patients and health care providers. Well-designed randomized clinical trials (RCTs) of CAM therapy interventions can inform both clinical research and practice. Objectives To review important issues that affect the design of RCTs for CAM interventions. Methods Using the methods component of the Consolidated Standards of Reporting Trials (CONSORT) as a guiding framework, and a National Cancer Institute-funded reflexology study as an exemplar, methodological issues related to participants, intervention, objectives, outcomes, sample size, randomization, blinding, and statistical methods were reviewed. Discussion Trials of CAM interventions designed and implemented according to appropriate methodological standards will facilitate the needed scientific rigor in CAM research. Interventions in CAM can be tested using the proposed methodology, and the results of testing will inform nursing practice in providing safe and effective supportive care and improving the well-being of patients. PMID:19918155
ERIC Educational Resources Information Center
German Federal Inst. for Vocational Training Affairs, Berlin (Germany).
Representatives from 13 Central and Eastern European countries, the European Centre for the Development of Vocational Training, and the Organization for Economic Cooperation and Development met for 2 days in Berlin to continue European Training Foundation (ETF) efforts to design a methodology for formulating standards in vocational training (VT)…
The secret lives of experiments: methods reporting in the fMRI literature.
Carp, Joshua
2012-10-15
Replication of research findings is critical to the progress of scientific understanding. Accordingly, most scientific journals require authors to report experimental procedures in sufficient detail for independent researchers to replicate their work. To what extent do research reports in the functional neuroimaging literature live up to this standard? The present study evaluated methods reporting and methodological choices across 241 recent fMRI articles. Many studies did not report critical methodological details with regard to experimental design, data acquisition, and analysis. Further, many studies were underpowered to detect any but the largest statistical effects. Finally, data collection and analysis methods were highly flexible across studies, with nearly as many unique analysis pipelines as there were studies in the sample. Because the rate of false positive results is thought to increase with the flexibility of experimental designs, the field of functional neuroimaging may be particularly vulnerable to false positives. In sum, the present study documented significant gaps in methods reporting among fMRI studies. Improved methodological descriptions in research reports would yield significant benefits for the field. Copyright © 2012 Elsevier Inc. All rights reserved.
Benefits estimates of highway capital improvements with uncertain parameters.
DOT National Transportation Integrated Search
2006-01-01
This report warrants consideration in the development of goals, performance measures, and standard cost-benefit methodology required of transportation agencies by the Virginia 2006 Appropriations Act. The Virginia Department of Transportation has beg...
12 CFR 252.141 - Authority and purpose.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Covered... stress tests. This subpart also establishes definitions of stress test and related terms, methodologies for conducting stress tests, and reporting and disclosure requirements. ...
12 CFR 252.141 - Authority and purpose.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Covered... stress tests. This subpart also establishes definitions of stress test and related terms, methodologies for conducting stress tests, and reporting and disclosure requirements. ...
MITSI project : final local evaluation report
DOT National Transportation Integrated Search
2003-01-01
The mission statement for the MITSI project was facilitating National Standards Compliance migration for NaviGAtor, conducting National Architecture mapping for MARTA and E911, and evaluating CORBA as a methodology for exchanging data. This involved ...
Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio
2016-01-01
Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (3%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.
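The precision measures audited above are linked by a simple relation: a 95% confidence interval for a normally distributed treatment-effect estimate is the estimate ± 1.96 × its standard error. A minimal sketch; the effect size and standard error below are invented for illustration, not drawn from any study in the review:

```python
def confidence_interval(estimate, standard_error, z=1.96):
    """Two-sided confidence interval for a normally distributed estimate.

    z = 1.96 gives the conventional 95% level.
    """
    half_width = z * standard_error
    return estimate - half_width, estimate + half_width

# Hypothetical effect: 0.8 mm difference in bone-to-implant contact, SE = 0.25 mm.
low, high = confidence_interval(0.8, 0.25)
print(f"95% CI: ({low:.2f}, {high:.2f})")  # → 95% CI: (0.31, 1.29)
```

Reporting the interval rather than a bare P value conveys how precisely the effect was estimated, which is exactly the information the review found missing.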
Methods for the guideline-based development of quality indicators--a systematic review
2012-01-01
Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.
This paper reviews a methodology being developed for reporting geothermal resources and project progress. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. This framework will allow the GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress and the public. Standards and reporting codes used in other countries and energy sectors provide guidance to develop the relevant geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by the GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for evaluating and reporting on GTO funding according to resource grade (geological, technical and socio-economic) and project progress. This methodology would allow GTO to target funding, measure impact by monitoring the progression of projects, or assess geological potential of targeted areas for development.
42 CFR 440.340 - Actuarial report for benchmark-equivalent coverage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... individual who is a member of the American Academy of Actuaries (AAA). (2) Using generally accepted actuarial principles and methodologies of the AAA. (3) Using a standardized set of utilization and price factors. (4...
An Overview of Meta-Analyses of Danhong Injection for Unstable Angina.
Zhang, Xiaoxia; Wang, Hui; Chang, Yanxu; Wang, Yuefei; Lei, Xiang; Fu, Shufei; Zhang, Junhua
2015-01-01
Objective. To systematically collect evidence and evaluate the effects of Danhong injection (DHI) for unstable angina (UA). Methods. A comprehensive search was conducted in seven electronic databases up to January 2015. The methodological and reporting quality of included studies was assessed using AMSTAR and PRISMA. Result. Five articles were included. The conclusions suggest that DHI plus conventional medicine treatment was effective for UA, could alleviate angina symptoms, and could improve electrocardiographic findings. Flaws of the original studies and systematic reviews weaken the strength of this evidence. Methodological limitations included incomplete literature searches, missing study characteristics, disregard of clinical heterogeneity, and failure to assess publication and other forms of bias. Reporting flaws included the absence of a structured summary and of a standardized search strategy. For the pooled findings, researchers took statistical heterogeneity into consideration, but clinical and methodological heterogeneity were ignored. Conclusion. DHI plus conventional medicine treatment generally appears to be effective for UA, but the evidence is weakened by methodological flaws in the original clinical trials and systematic reviews. Rigorously designed randomized controlled trials are needed, and the methodological and reporting quality of systematic reviews should be improved.
An Overview of Meta-Analyses of Danhong Injection for Unstable Angina
Zhang, Xiaoxia; Chang, Yanxu; Wang, Yuefei; Lei, Xiang; Fu, Shufei; Zhang, Junhua
2015-01-01
Objective. To systematically collect evidence and evaluate the effects of Danhong injection (DHI) for unstable angina (UA). Methods. A comprehensive search was conducted in seven electronic databases up to January 2015. The methodological and reporting quality of included studies was assessed using AMSTAR and PRISMA. Result. Five articles were included. The conclusions suggest that DHI plus conventional medicine treatment was effective for UA, could alleviate angina symptoms, and could improve electrocardiographic findings. Flaws of the original studies and systematic reviews weaken the strength of this evidence. Methodological limitations included incomplete literature searches, missing study characteristics, disregard of clinical heterogeneity, and failure to assess publication and other forms of bias. Reporting flaws included the absence of a structured summary and of a standardized search strategy. For the pooled findings, researchers took statistical heterogeneity into consideration, but clinical and methodological heterogeneity were ignored. Conclusion. DHI plus conventional medicine treatment generally appears to be effective for UA, but the evidence is weakened by methodological flaws in the original clinical trials and systematic reviews. Rigorously designed randomized controlled trials are needed, and the methodological and reporting quality of systematic reviews should be improved. PMID:26539221
Standardizing economic analysis in prevention will require substantial effort.
Guyll, Max
2014-12-01
It is exceedingly difficult to compare results of economic analyses across studies due to variations in assumptions, methodology, and outcome measures, a fact which surely decreases the impact and usefulness of prevention-related economic research. Therefore, Crowley et al. (Prevention Science, 2013) are precisely correct in their call for increased standardization and have usefully highlighted the issues that must be addressed. However, having made the need clear, the questions become what form the solution should take and how it should be implemented. The present discussion outlines the rudiments of a comprehensive framework for promoting standardized methodology in the estimation of economic outcomes, as encouraged by Crowley et al. In short, a single, standard, reference-case approach should be clearly articulated, all economic research should be encouraged to apply that standard approach, and results from compliant analyses should be reported in a central archive. Properly done, the process would increase the ability of those without specialized training to contribute to the body of economic research pertaining to prevention, and the most difficult tasks of predicting and monetizing distal outcomes would be readily completed through predetermined models. These recommendations might be viewed as somewhat forcible, inasmuch as they advocate prescribing the details of a standard methodology and establishing a means of verifying compliance. However, it is unclear that the best practices proposed by Crowley et al. will be widely adopted in the absence of a strong and determined approach.
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the detected traces (blood, instruments, and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide distribution in how the issue was addressed in the documents. We examined a quantitative approach involving an empirical equation and used multivariate procedures to validate the quantitative methodology proposed for this empirical equation. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Diard, A; Becker, F; Pichot, O
2017-05-01
The quality standards of the French Society of Vascular Medicine for the ultrasound assessment of lower limb arteries in vascular medicine practice are based on the principle that these examinations have to meet two requirements: technical know-how (knowledge of devices and methodologies) and medical know-how (level of examination matching the indication and purpose of the examination, interpretation and critical analysis of results). The aims are: to describe an optimal level of examination adjusted to the indication or clinical hypothesis; to establish harmonious practices, methodologies, terminologies, and results description and reporting; to provide good-practice reference points; and to promote a high-quality process. Covered are: the three levels of examination, with indications and objectives for each level; the reference standard examination (level 2) and its variants according to indications; the minimal content of the exam report and the medical conclusion letter to the corresponding physician (synthesis, conclusion, and management suggestions); a commented glossary (anatomy, hemodynamics, signs and symptoms); technical basis; and device settings. Here, we discuss duplex ultrasound for the surveillance of aortic stent grafts. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Protocol-developing meta-ethnography reporting guidelines (eMERGe).
France, E F; Ring, N; Noyes, J; Maxwell, M; Jepson, R; Duncan, E; Turley, R; Jones, D; Uny, I
2015-11-25
Designing and implementing high-quality health care services and interventions requires robustly synthesised evidence. Syntheses of qualitative research studies can provide evidence of patients' experiences of health conditions; intervention feasibility, appropriateness and acceptability to patients; and advance understanding of health care issues. The unique, interpretive, theory-based meta-ethnography synthesis approach is suited to conveying patients' views and developing theory to inform service design and delivery. However, meta-ethnography reporting is often poor quality, which discourages trust in, and use of, meta-ethnography findings. Users of evidence syntheses require reports that clearly articulate analytical processes and findings. Tailored research reporting guidelines can raise reporting standards but none exists for meta-ethnography. This study aims to create an evidence-based meta-ethnography reporting guideline articulating the methodological standards and depth of reporting required to improve reporting quality. The mixed-methods design of this National Institute of Health Research-funded study (http://www.stir.ac.uk/emerge/) follows good practice in research reporting guideline development comprising: (1) a methodological systematic review (PROSPERO registration: CRD42015024709) to identify recommendations and guidance in conducting/reporting meta-ethnography; (2) a review and audit of published meta-ethnographies to identify good practice principles and develop standards in conduct/reporting; (3) an online workshop and Delphi studies to agree guideline content with 45 international qualitative synthesis experts and 45 other stakeholders including patients; (4) development and wide dissemination of the guideline and its accompanying detailed explanatory document, a report template for National Institute of Health Research commissioned meta-ethnographies, and training materials on guideline use. 
Meta-ethnography, devised in the field of education, is now used widely in other disciplines. Methodological advances relevant to meta-ethnography conduct exist. The extent of discipline-specific adaptations of meta-ethnography and the fit of any adaptions with the underpinning philosophy of meta-ethnography require investigation. Well-reported meta-ethnography findings could inform clinical decision-making. A bespoke meta-ethnography reporting guideline is needed to improve reporting quality, but to be effective potential users must know it exists, trust it and use it. Therefore, a rigorous study has been designed to develop and promote a guideline. By raising reporting quality, the guideline will maximise the likelihood that high-quality meta-ethnographies will contribute robust evidence to improve health care and patient outcomes.
Systematic Review of Childhood Sedentary Behavior Questionnaires: What do We Know and What is Next?
Hidding, Lisan M; Altenburg, Teatske M; Mokkink, Lidwine B; Terwee, Caroline B; Chinapaw, Mai J M
2017-04-01
Accurate measurement of child sedentary behavior is necessary for monitoring trends, examining health effects, and evaluating the effectiveness of interventions. We therefore aimed to summarize studies examining the measurement properties of self-report or proxy-report sedentary behavior questionnaires for children and adolescents under the age of 18 years. Additionally, we provided an overview of the characteristics of the evaluated questionnaires. We performed systematic literature searches in the EMBASE, PubMed, and SPORTDiscus electronic databases. Studies had to report on at least one measurement property of a questionnaire assessing sedentary behavior. Questionnaire data were extracted using a standardized checklist, i.e. the Quality Assessment of Physical Activity Questionnaire (QAPAQ) checklist, and the methodological quality of the included studies was rated using a standardized tool, i.e. the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Forty-six studies on 46 questionnaires met our inclusion criteria, of which 33 examined test-retest reliability, nine examined measurement error, two examined internal consistency, 22 examined construct validity, eight examined content validity, and two examined structural validity. The majority of the included studies were of fair or poor methodological quality. Of the studies with at least a fair methodological quality, six scored positive on test-retest reliability, and two scored positive on construct validity. None of the questionnaires included in this review were considered as both valid and reliable. High-quality studies on the most promising questionnaires are required, with more attention to the content validity of the questionnaires. PROSPERO registration number: CRD42016035963.
Heliport noise model : methodology - draft report
DOT National Transportation Integrated Search
1988-04-30
The Heliport Noise Model (HNM) is the United States standard for predicting civil helicopter noise exposure in the vicinity of heliports and airports. HNM Version 1 is the culmination of several years of work in helicopter noise research, field measu...
Studies to determine the effectiveness of longitudinal channelizing devices in work zones.
DOT National Transportation Integrated Search
2011-01-01
This report describes the methodology and results of analyses performed to determine whether the following longitudinal channelizing device (LCD) applications improve the traffic safety and operations of work zones relative to the use of standard ...
Santaguida, Pasqualina; Oremus, Mark; Walker, Kathryn; Wishart, Laurie R; Siegel, Karen Lohmann; Raina, Parminder
2012-04-01
A "review of reviews" was undertaken to assess methodological issues in studies evaluating nondrug rehabilitation interventions in stroke patients. MEDLINE, CINAHL, PsycINFO, and the Cochrane Database of Systematic Reviews were searched from January 2000 to January 2008 within the stroke rehabilitation setting. Electronic searches were supplemented by reviews of reference lists and citations identified by experts. Eligible studies were systematic reviews; excluded citations were narrative reviews or reviews of reviews. Review characteristics and criteria for assessing methodological quality of primary studies within them were extracted. The search yielded 949 English-language citations. We included a final set of 38 systematic reviews. Cochrane reviews, which have a standardized methodology, were generally of higher methodological quality than non-Cochrane reviews. Most systematic reviews used standardized quality assessment criteria for primary studies, but not all were comprehensive. Reviews showed that primary studies had problems with randomization, allocation concealment, and blinding. Baseline comparability, adverse events, and co-intervention or contamination were not consistently assessed. Blinding of patients and providers was often not feasible and was not evaluated as a source of bias. The eligible systematic reviews identified important methodological flaws in the evaluated primary studies, suggesting the need for improvement of research methods and reporting. Copyright © 2012 Elsevier Inc. All rights reserved.
Rater methodology for stroboscopy: a systematic review.
Bonilha, Heather Shaw; Focht, Kendrea L; Martin-Harris, Bonnie
2015-01-01
Laryngeal endoscopy with stroboscopy (LES) remains the clinical gold standard for assessing vocal fold function. LES is used to evaluate the efficacy of voice treatments in research studies and clinical practice. LES as a voice treatment outcome tool is only as good as the clinician interpreting the recordings; research using LES as a treatment outcome measure should therefore be evaluated based on rater methodology and reliability. The purpose of this literature review was to evaluate the rater-related methodology of studies that use stroboscopic findings as voice treatment outcome measures. Systematic literature review. Computerized journal databases were searched for relevant articles using the terms stroboscopy and treatment. Eligible articles were categorized and evaluated for the use of rater-related methodology, reporting of the number of raters, types of raters, blinding, and rater reliability. Of the 738 articles reviewed, 80 met the inclusion criteria. More than one-third of the included studies did not report the number of raters who participated. Eleven studies reported results of rater reliability analysis, with only two reporting good inter- and intrarater reliability. The comparability and use of results from treatment studies that use LES are limited by a lack of rigor in rater methodology and by variable, mostly poor, inter- and intrarater reliability. To improve our ability to evaluate and use the findings from voice treatment studies that use LES features as outcome measures, greater consistency in reporting rater methodology characteristics across studies and improved rater reliability are needed. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
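Inter-rater reliability of the kind this review found lacking is often quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal stdlib sketch; the ratings below are invented judgements from two hypothetical LES raters, not data from the review:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Expected agreement if both raters assigned labels independently
    # at their observed marginal rates.
    expected = sum(counts1[label] * counts2[label] for label in counts1) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented "normal"/"abnormal" judgements for ten stroboscopy recordings.
rater_a = ["normal", "abnormal", "normal", "normal", "abnormal",
           "normal", "abnormal", "normal", "normal", "abnormal"]
rater_b = ["normal", "abnormal", "normal", "abnormal", "abnormal",
           "normal", "normal", "normal", "normal", "abnormal"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.58
```

Here raw agreement is 80%, but kappa ≈ 0.58 because two raters labelling 60% of recordings "normal" would agree often by chance alone; this is why reporting reliability statistics, not just rater counts, matters.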
Chen, Bi-Cang; Wu, Qiu-Ying; Xiang, Cheng-Bin; Zhou, Yi; Guo, Ling-Xiang; Zhao, Neng-Jiang; Yang, Shu-Yu
2006-01-01
To evaluate the quality of reports published in China in the past 10 years on quantitative analysis of syndrome differentiation for diabetes mellitus (DM), in order to explore the methodological problems in these reports and find possible solutions. The main medical literature databases in China were searched. Thirty-one articles were included and evaluated against the principles of clinical epidemiology. There were many mistakes and deficiencies in these articles, including in clinical trial design, diagnostic criteria for DM, standards of syndrome differentiation of DM, case inclusion and exclusion criteria, sample size estimation, data comparability, and statistical methods. It is necessary and important to improve the quality of reports concerning quantitative analysis of syndrome differentiation of DM in light of the principles of clinical epidemiology.
Wisdom, Jennifer P; Cavaleri, Mary A; Onwuegbuzie, Anthony J; Green, Carla A
2012-04-01
Methodologically sound mixed methods research can improve our understanding of health services by providing a more comprehensive picture of health services than either method can alone. This study describes the frequency of mixed methods in published health services research and compares the presence of methodological components indicative of rigorous approaches across mixed methods, qualitative, and quantitative articles. All empirical articles (n = 1,651) published between 2003 and 2007 from four top-ranked health services journals. All mixed methods articles (n = 47) and random samples of qualitative and quantitative articles were evaluated to identify reporting of key components indicating rigor for each method, based on accepted standards for evaluating the quality of research reports (e.g., use of p-values in quantitative reports, description of context in qualitative reports, and integration in mixed method reports). We used chi-square tests to evaluate differences between article types for each component. Mixed methods articles comprised 2.85 percent (n = 47) of empirical articles, quantitative articles 90.98 percent (n = 1,502), and qualitative articles 6.18 percent (n = 102). There was a statistically significant difference (χ²(1) = 12.20, p = .0005, Cramer's V = 0.09, odds ratio = 1.49 [95% confidence interval = 1.27, 1.74]) in the proportion of quantitative methodological components present in mixed methods compared to quantitative papers (21.94 versus 47.07 percent, respectively) but no statistically significant difference (χ²(1) = 0.02, p = .89, Cramer's V = 0.01) in the proportion of qualitative methodological components in mixed methods compared to qualitative papers (21.34 versus 25.47 percent, respectively). Few published health services research articles use mixed methods. The frequency of key methodological components is variable. 
Suggestions are provided to increase the transparency of mixed methods studies and the presence of key methodological components in published reports. © Health Research and Educational Trust.
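The statistics this abstract reports (chi-square, Cramer's V, odds ratio) all derive from a 2×2 table of article type versus presence of a methodological component. A minimal stdlib sketch of those formulas; the cell counts below are invented to show the mechanics, not the study's actual data:

```python
import math

def two_by_two_stats(a, b, c, d):
    """Chi-square, Cramer's V, and odds ratio for a 2x2 contingency table.

    Table layout:                 component present   component absent
      mixed methods papers               a                   b
      quantitative papers                c                   d
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    cramers_v = math.sqrt(chi2 / n)   # for a 2x2 table, V = sqrt(chi2 / N)
    odds_ratio = (a * d) / (b * c)
    return chi2, cramers_v, odds_ratio

# Hypothetical counts, chosen only for illustration.
chi2, v, oratio = two_by_two_stats(22, 78, 47, 53)
print(f"chi2={chi2:.2f}, V={v:.2f}, OR={oratio:.2f}")  # → chi2=13.83, V=0.26, OR=0.32
```

An odds ratio below 1 here would mean the component is reported less often in mixed methods papers than in quantitative papers, matching the direction of the difference the study describes.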
Wisdom, Jennifer P; Cavaleri, Mary A; Onwuegbuzie, Anthony J; Green, Carla A
2012-01-01
Objectives Methodologically sound mixed methods research can improve our understanding of health services by providing a more comprehensive picture of health services than either method can alone. This study describes the frequency of mixed methods in published health services research and compares the presence of methodological components indicative of rigorous approaches across mixed methods, qualitative, and quantitative articles. Data Sources All empirical articles (n = 1,651) published between 2003 and 2007 from four top-ranked health services journals. Study Design All mixed methods articles (n = 47) and random samples of qualitative and quantitative articles were evaluated to identify reporting of key components indicating rigor for each method, based on accepted standards for evaluating the quality of research reports (e.g., use of p-values in quantitative reports, description of context in qualitative reports, and integration in mixed method reports). We used chi-square tests to evaluate differences between article types for each component. Principal Findings Mixed methods articles comprised 2.85 percent (n = 47) of empirical articles, quantitative articles 90.98 percent (n = 1,502), and qualitative articles 6.18 percent (n = 102). There was a statistically significant difference (χ²(1) = 12.20, p = .0005, Cramer's V = 0.09, odds ratio = 1.49 [95% confidence interval = 1.27, 1.74]) in the proportion of quantitative methodological components present in mixed methods compared to quantitative papers (21.94 versus 47.07 percent, respectively) but no statistically significant difference (χ²(1) = 0.02, p = .89, Cramer's V = 0.01) in the proportion of qualitative methodological components in mixed methods compared to qualitative papers (21.34 versus 25.47 percent, respectively). Conclusion Few published health services research articles use mixed methods. The frequency of key methodological components is variable. 
Suggestions are provided to increase the transparency of mixed methods studies and the presence of key methodological components in published reports. PMID:22092040
Noben, Cindy Yvonne; de Rijk, Angelique; Nijhuis, Frans; Kottner, Jan; Evers, Silvia
2016-06-01
To assess the exchangeability of self-reported and administrative health care resource use measurements for cost estimation. In a systematic review (NHS EED and MEDLINE), reviewers evaluate, in duplicate, the methodological reporting quality of studies comparing the validation evidence of instruments measuring health care resource use. The appraisal tool Methodological Reporting Quality (MeRQ) is developed by merging aspects from the Guidelines for Reporting Reliability and Agreement Studies and the Standards for Reporting Diagnostic Accuracy. Out of 173 studies, 35 full-text articles are assessed for eligibility. Sixteen articles are included in this study. In seven articles, more than 75% of the reporting criteria assessed by MeRQ are considered "good." Most studies score at least "fair" on most of the reporting quality criteria. In the end, six studies score "good" on the minimal criteria for reporting. Varying levels of agreement among the different data sources are found, with correlations ranging from 0.14 up to 0.93 and with occurrences of both random and systematic errors. The validation evidence of the small number of studies with adequate MeRQ cautiously supports the exchangeability of both the self-reported and administrative resource use measurement methods. Copyright © 2016 Elsevier Inc. All rights reserved.
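Agreement between self-reported and administrative resource-use counts, such as the correlations of 0.14 to 0.93 cited here, is commonly summarized with a Pearson coefficient over paired measurements. A minimal stdlib sketch; the visit counts below are invented for illustration:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation between two paired measurement series."""
    mean_x, mean_y = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Invented GP-visit counts for six patients: self-report vs. administrative records.
self_report = [2, 0, 5, 3, 1, 4]
administrative = [3, 0, 4, 3, 2, 5]
print(round(pearson_r(self_report, administrative), 2))  # → 0.9
```

A high correlation alone does not prove exchangeability, since one source may systematically over- or under-count; this is why the review also flags systematic error alongside random error.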
Systematic review adherence to methodological or reporting quality.
Pussegoda, Kusala; Turner, Lucy; Garritty, Chantelle; Mayhew, Alain; Skidmore, Becky; Stevens, Adrienne; Boutron, Isabelle; Sarkis-Onofre, Rafael; Bjerre, Lise M; Hróbjartsson, Asbjørn; Altman, Douglas G; Moher, David
2017-07-19
Guidelines for assessing methodological and reporting quality of systematic reviews (SRs) were developed to contribute to implementing evidence-based health care and the reduction of research waste. As SRs assessing a cohort of SRs are becoming more prevalent in the literature, and with the increased uptake of SR evidence for decision-making, the methodological quality and standard of reporting of SRs are of interest. The objective of this study is to evaluate SR adherence to the Quality of Reporting of Meta-analyses (QUOROM) and PRISMA reporting guidelines and to the A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Overview Quality Assessment Questionnaire (OQAQ) quality assessment tools as evaluated in methodological overviews. The Cochrane Library, MEDLINE®, and EMBASE® databases were searched from January 1990 to October 2014. Title and abstract screening and full-text screening were conducted independently by two reviewers. Reports assessing the quality or reporting of a cohort of SRs of interventions using PRISMA, QUOROM, OQAQ, or AMSTAR were included. All results are reported as frequencies and percentages of reports and SRs, respectively. Of the 20,765 independent records retrieved from electronic searching, 1189 reports were reviewed for eligibility at full text, of which 56 reports (5371 SRs in total) evaluating the PRISMA, QUOROM, AMSTAR, and/or OQAQ tools were included. Notable items include the following: of the SRs using PRISMA, over 85% (1532/1741) provided a rationale for the review and less than 6% (102/1741) provided protocol information. For reports using QUOROM, only 9% (40/449) of SRs provided a trial flow diagram. However, 90% (402/449) described the explicit clinical problem and review rationale in the introduction section. Of reports using AMSTAR, 30% (534/1794) used duplicate study selection and data extraction. Conversely, 80% (1439/1794) of SRs provided study characteristics of included studies.
In terms of OQAQ, 37% (499/1367) of the SRs assessed risk of bias (validity) in the included studies, while 80% (1112/1387) reported the criteria for study selection. Although reporting guidelines and quality assessment tools exist, reporting and methodological quality of SRs are inconsistent. Mechanisms to improve adherence to established reporting guidelines and methodological assessment tools are needed to improve the quality of SRs.
Protocol—the RAMESES II study: developing guidance and reporting standards for realist evaluation
Greenhalgh, Trisha; Wong, Geoff; Jagosh, Justin; Greenhalgh, Joanne; Manzano, Ana; Westhorp, Gill; Pawson, Ray
2015-01-01
Introduction Realist evaluation is an increasingly popular methodology in health services research. For realist evaluations (RE) this project aims to: develop quality and reporting standards and training materials; build capacity for undertaking and critically evaluating them; and produce resources and training materials for lay participants and those seeking to involve them. Methods To achieve our aims, we will: (1) Establish management and governance infrastructure; (2) Recruit an interdisciplinary Delphi panel of 35 participants with diverse relevant experience of RE; (3) Summarise current literature and expert opinion on best practice in RE; (4) Run an online Delphi panel to generate and refine items for quality and reporting standards; (5) Capture ‘real world’ experiences and challenges of RE—for example, by providing ongoing support to realist evaluations, hosting the RAMESES JISCmail list on realist research, and feeding problems and insights from these into the deliberations of the Delphi panel; (6) Produce quality and reporting standards; (7) Collate examples of the learning and training needs of researchers, students, reviewers and lay members in relation to RE; (8) Develop, deliver and evaluate training materials for RE and deliver training workshops; (9) Develop and evaluate information and resources for patients and other lay participants in RE (eg, draft template information sheets and model consent forms); and (10) Disseminate training materials and other resources. Planned outputs: (1) Quality and reporting standards and training materials for RE. (2) Methodological support for RE. (3) Increase in capacity to support and evaluate RE. (4) Accessible, plain-English resources for patients and the public participating in RE. Discussion Realist evaluation is a relatively new approach and its overall place in evaluation is not yet fully established.
As with all primary research approaches, guidance on quality assurance and uniform reporting is an important step towards improving quality and consistency. PMID:26238395
Reiner, Bruce I
2017-10-01
Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit standard of care analysis, which is fundamental to the determination of medical malpractice. In addition to these intrinsic biases, current peer review has other deficiencies, including a lack of standardization, objectivity, and automation, and its retrospective practice. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in the creation of a standardized, referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context- and user-specific decision support.
Guidelines for the Design and Conduct of Clinical Studies in Knee Articular Cartilage Repair
Mithoefer, Kai; Saris, Daniel B.F.; Farr, Jack; Kon, Elizaveta; Zaslav, Kenneth; Cole, Brian J.; Ranstam, Jonas; Yao, Jian; Shive, Matthew; Levine, David; Dalemans, Wilfried; Brittberg, Mats
2011-01-01
Objective: To summarize current clinical research practice and develop methodological standards for objective scientific evaluation of knee cartilage repair procedures and products. Design: A comprehensive literature review was performed of high-level original studies providing information relevant for the design of clinical studies on articular cartilage repair in the knee. Analysis of cartilage repair publications and synopses of ongoing trials were used to identify important criteria for the design, reporting, and interpretation of studies in this field. Results: Current literature reflects the methodological limitations of the scientific evidence available for articular cartilage repair. However, clinical trial databases of ongoing trials document a trend suggesting improved study designs and clinical evaluation methodology. Based on the current scientific information and standards of clinical care, detailed methodological recommendations were developed for the statistical study design, patient recruitment, control group considerations, study endpoint definition, documentation of results, use of validated patient-reported outcome instruments, and inclusion and exclusion criteria for the design and conduct of scientifically sound cartilage repair study protocols. A consensus statement was achieved between the International Cartilage Repair Society (ICRS) and contributing authors experienced in clinical trial design and implementation. Conclusions: High-quality clinical research methodology is critical for the optimal evaluation of current and new cartilage repair technologies. In addition to generally applicable principles for orthopedic study design, specific criteria and considerations apply to cartilage repair studies. Systematic application of these criteria and considerations can facilitate study designs that are scientifically rigorous, ethical, practical, and appropriate for the question(s) being addressed in any given cartilage repair research project.
PMID:26069574
Methodological quality of randomised controlled trials in burns care. A systematic review.
Danilla, Stefan; Wasiak, Jason; Searle, Susana; Arriagada, Cristian; Pedreros, Cesar; Cleland, Heather; Spinks, Anneliese
2009-11-01
To evaluate the methodological quality of published randomised controlled trials (RCTs) in burn care treatment and management. Using a predetermined search strategy, we searched the Ovid MEDLINE database (1950 to January 2008) to identify all English-language RCTs related to burn care. Full-text studies identified were reviewed for key demographic and methodological characteristics. Methodological trial quality was assessed using the Jadad scale. A total of 257 studies involving 14,535 patients met the inclusion criteria. The median Jadad score was 2 (out of a best possible score of 5). Information was given in the introduction and discussion sections of most RCTs, although insufficient detail was provided on randomisation, allocation concealment, and blinding. The number of RCTs increased between 1950 and 2008 (Spearman's rho=0.6129, P<0.001), although the reporting quality did not improve over the same time period (P=0.1896) and was better in RCTs with larger sample sizes (median Jadad score, 4 vs. 2 points, P<0.0001). Methodological quality did not correlate with journal impact factor (P=0.2371). The reporting standards of RCTs are highly variable and less than optimal in most cases. The advent of evidence-based medicine heralds a new approach to burns care and systematic steps are needed to improve the quality of RCTs in this field. Identifying and reviewing the existing number of RCTs not only highlights the need for burn clinicians to conduct more trials, but may also encourage burn health clinicians to consider the importance of conducting trials that follow appropriate, evidence-based standards.
Ono, Shigeshi; Lam, Stella; Nagahara, Makoto; Hoon, Dave S. B.
2015-01-01
An increasing number of studies have focused on circulating microRNAs (cmiRNA) in cancer patients’ blood for their potential as minimally-invasive biomarkers. Studies have reported the utility of assessing specific miRNAs in blood as diagnostic/prognostic biomarkers; however, the methodologies are not validated or standardized across laboratories. Unfortunately, there is often minimal overlap in techniques between the results reported, even in similar studies on the same cancer. This hampers interpretation and reliability of cmiRNA as potential cancer biomarkers. Blood collection and processing, cmiRNA extractions, quality and quantity control of assays, defined patient population assessment, reproducibility, and reference standards all affect the cmiRNA assay results. To date, there is no reported definitive method to assess cmiRNAs. Therefore, appropriate and reliable methodologies are highly necessary in order for cmiRNAs to be used in regulated clinical diagnostic laboratories. In this review, we summarize the developments made over the past decade towards cmiRNA detection and discuss the pros and cons of the assays. PMID:26512704
Erbe, Christine; Ainslie, Michael A; de Jong, Christ A F; Racca, Roberto; Stocker, Michael
2016-01-01
As concern about anthropogenic noise and its impacts on marine fauna is increasing around the globe, data are being compared across populations, species, noise sources, geographic regions, and time. However, much of the raw and processed data are not comparable due to differences in measurement methodology, analysis and reporting, and a lack of metadata. Common protocols and more formal, international standards are needed to ensure the effectiveness of research, conservation, regulation and practice, and unambiguous communication of information and ideas. Developing standards takes time and effort, is largely driven by a few expert volunteers, and would benefit from stakeholders' contribution and support.
DoD Actions Were Not Adequate to Reduce Improper Travel Payments
2016-03-10
this audit in accordance with generally accepted government auditing standards. We considered management comments on a draft of this report when...DoD Travel Pay program were effective. See Appendix A for the scope and methodology and prior audit coverage. Background Public Law 111-204, the...conducted this performance audit from May 2015 through January 2016 in accordance with generally accepted government auditing standards. Those
Ferrante di Ruffano, Lavinia; Dinnes, Jacqueline; Sitch, Alice J; Hyde, Chris; Deeks, Jonathan J
2017-02-24
There is a growing recognition of the need to expand our evidence base for the clinical effectiveness of diagnostic tests. Many international bodies are calling for diagnostic randomized controlled trials to provide the most rigorous evidence of impact to patient health. Although these so-called test-treatment RCTs are very challenging to undertake due to their methodological complexity, they have not been subjected to a systematic appraisal of their methodological quality. The extent to which these trials may be producing biased results therefore remains unknown. We set out to address this issue by conducting a methodological review of published test-treatment trials to determine how often they implement adequate methods to limit bias and safeguard the validity of results. We ascertained all test-treatment RCTs published 2004-2007, indexed in CENTRAL, including RCTs which randomized patients to diagnostic tests and measured patient outcomes after treatment. Tests used for screening, monitoring or prognosis were excluded. We assessed adequacy of sequence generation, allocation concealment and intention-to-treat, appropriateness of primary analyses, blinding and reporting of power calculations, and extracted study characteristics including the primary outcome. One hundred three trials compared 105 control with 119 experimental interventions, and reported 150 primary outcomes. Randomization and allocation concealment were adequate in 57% and 37% of trials, respectively. Blinding was uncommon (patients 5%, clinicians 4%, outcome assessors 21%), as was an adequate intention-to-treat analysis (29%). Overall 101 of 103 trials (98%) were at risk of bias, as judged using standard Cochrane criteria. Test-treatment trials are particularly susceptible to attrition and inadequate primary analyses, lack of blinding and under-powering.
These weaknesses pose much greater methodological and practical challenges to conducting reliable RCT evaluations of test-treatment strategies than standard treatment interventions. We suggest a cautious approach that first examines whether a test-treatment intervention can accommodate the methodological safeguards necessary to minimize bias, and highlight that test-treatment RCTs require different methods to ensure reliability than standard treatment trials. Please see the companion paper to this article: http://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0286-0 .
Landorf, Karl B; Menz, Hylton B; Armstrong, David G; Herbert, Robert D
2015-07-01
Randomized trials must be of high methodological quality to yield credible, actionable findings. The main aim of this project was to evaluate whether there has been an improvement in the methodological quality of randomized trials published in the Journal of the American Podiatric Medical Association (JAPMA). Randomized trials published in JAPMA during a 15-year period (January 1999 to December 2013) were evaluated. The methodological quality of randomized trials was evaluated using the PEDro scale (scores range from 0 to 10, with 0 being lowest quality). Linear regression was used to assess changes in methodological quality over time. A total of 1,143 articles were published in JAPMA between January 1999 and December 2013. Of these, 44 articles were reports of randomized trials. Although the number of randomized trials published each year increased, there was only minimal improvement in their methodological quality (mean rate of improvement = 0.01 points per year). The methodological quality of the trials studied was typically moderate, with a mean ± SD PEDro score of 5.1 ± 1.5. Although there were a few high-quality randomized trials published in the journal, most (84.1%) scored between 3 and 6. Although there has been an increase in the number of randomized trials published in JAPMA, there is substantial opportunity for improvement in the methodological quality of trials published in the journal. Researchers seeking to publish reports of randomized trials should seek to meet current best-practice standards in the conduct and reporting of their trials.
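The trend estimate above (a mean improvement of 0.01 PEDro points per year) is the slope from a linear regression of trial quality on publication year. That slope can be computed with the standard least-squares formula, cov(x, y) / var(x). A stdlib-Python sketch on made-up (year, score) pairs, not the study's data:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical PEDro scores (0-10 scale) for trials in successive years.
years = [1999, 2002, 2005, 2008, 2011, 2013]
scores = [4.0, 5.0, 4.5, 6.0, 5.0, 5.5]
trend = ols_slope(years, scores)  # points of quality change per year
```

A slope near zero, as the study found, means publication year explains almost none of the variation in trial quality even when the number of trials grows.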
[Evaluation of arguments in research reports].
Botes, A
1999-06-01
Some authors on research methodology are of the opinion that research reports are based on the logic of reasoning and that such reports communicate with the reader by presenting logical, coherent arguments (Böhme, 1975:206; Mouton, 1996:69). This view implies that researchers draw specific conclusions and that such conclusions are justified by way of reasoning (Doppelt, 1998:105; Giere, 1984:26; Harre, 1965:11; Leherer & Wagner, 1983; Pitt, 1988:7). The structure of a research report thus consists mainly of conclusions and reasons for such conclusions (Booth, Colomb & Williams, 1995:97). From this it appears that justification by means of reasoning is a standard procedure in research and research reports. Despite the fact that the logic of research is based on reasoning, that the justification of research findings by way of reasoning appears to be standard procedure and that the structure of a research report comprises arguments, the evaluation or assessment of research, as described in most textbooks on research methodology (Burns & Grove, 1993:647; Creswell, 1994:193; LoBiondo-Wood & Haber, 1994:441/481) does not focus on the arguments of research. The evaluation criteria for research reports which are set in these textbooks are related to the way in which the research process is carried out and focus on the measures for internal, external, theoretical, measurement and inferential validity. This means that criteria for the evaluation of research are comprehensive and they should be very specific in respect of each type of research (for example quantitative or qualitative). When the evaluation of research reports is focused on arguments and logic, there could probably be one set of universal standards against which all types of human science research reports can be assessed. Such a universal set of standards could possibly simplify the evaluation of research reports in the human sciences since they can be used to assess all the critical aspects of research reports.
As arguments form the basic structure of research reports and are probably also important in the evaluation of research reports in the human sciences, the following questions, which I want to answer in this paper, are relevant: What are the standards which the reasoning in research reports in the human sciences should meet? How can research reports in the human sciences be assessed or evaluated according to these standards? In answering the first question, the logical demands that are made on reasoning in research are investigated. From these demands, the acceptability of the statements and the relevance and support of the premises to the conclusion are set as standards for reasoning in research. In answering the second question, a research article is used to demonstrate how the macro- and micro-arguments of research reports can be assessed or evaluated according to these standards. This evaluation shows that the aspects of internal, external, theoretical, measurement and inferential validity can all be assessed against these standards.
Mechanistic evaluation of test data from LTPP flexible pavement test sections, Vol. I
DOT National Transportation Integrated Search
1996-01-01
This report summarizes the process and lessons learned from the Standardized Travel Time Surveys and Field Test project. The field tests of travel time data collection were conducted in Boston, Seattle, and Lexington in 1993. The methodologies tested...
12 CFR 252.151 - Authority and purpose.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Banking... consolidated assets of greater than $10 billion to conduct annual stress tests. This subpart also establishes definitions of stress test and related terms, methodologies for conducting stress tests, and reporting and...
12 CFR 252.151 - Authority and purpose.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Banking... consolidated assets of greater than $10 billion to conduct annual stress tests. This subpart also establishes definitions of stress test and related terms, methodologies for conducting stress tests, and reporting and...
Montedori, Alessandro; Bonacini, Maria Isabella; Casazza, Giovanni; Luchetta, Maria Laura; Duca, Piergiorgio; Cozzolino, Francesco; Abraha, Iosief
2011-02-28
Randomized controlled trials (RCTs) that use the modified intention-to-treat (mITT) approach are increasingly being published. Such trials have a preponderance of post-randomization exclusions, industry sponsorship, and favourable findings, and little is known about whether mITT trials differ in these respects from trials that report a standard intention-to-treat analysis. To determine differences in the methodological quality, sponsorship, authors' conflicts of interest, and findings among trials with different "types" of intention-to-treat, we undertook a cross-sectional study of RCTs published in 2006 in three general medical journals (the Journal of the American Medical Association, the New England Journal of Medicine and the Lancet) and three specialty journals (Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology). Trials were categorized based on the "type" of intention-to-treat reporting as follows: ITT, trials reporting the use of standard ITT approach; mITT, trials reporting the use of a "modified intention-to-treat" approach; and "no ITT", trials not reporting the use of any intention-to-treat approach. Two pairs of reviewers independently extracted the data in duplicate. The strength of the associations between the "type" of intention-to-treat reporting and the quality of reporting (sample size calculation, flow-chart, lost to follow-up), the methodological quality of the trials (sequence generation, allocation concealment, and blinding), the funding source, and the findings was determined. Odds ratios (OR) were calculated with 95% confidence intervals (CI). Of the 367 RCTs included, 197 were classified as ITT, 56 as mITT, and 114 as "no ITT" trials.
The quality of reporting and the methodological quality of the mITT trials were similar to those of the ITT trials; however, the mITT trials were more likely to report post-randomization exclusions (adjusted OR 3.43 [95%CI, 1.70 to 6.95]; P < 0.001). We found a strong association between trials classified as mITT and for-profit agency sponsorship (adjusted OR 7.41 [95%CI, 3.14 to 17.48]; P < 0.001) as well as the presence of authors' conflicts of interest (adjusted OR 5.14 [95%CI, 2.12 to 12.48]; P < 0.001). There was no association between mITT reporting and favourable results; in general, however, trials with for-profit agency sponsorship were significantly associated with favourable results (adjusted OR 2.30; [95%CI, 1.28 to 4.16]; P = 0.006). We found that the mITT trials were significantly more likely to perform post-randomization exclusions and were strongly associated with industry funding and authors' conflicts of interest.
SHINE: Strategic Health Informatics Networks for Europe.
Kruit, D; Cooper, P A
1994-10-01
The mission of SHINE is to construct an open systems framework for the development of regional community healthcare telematic services that support and add to the strategic business objectives of European healthcare providers and purchasers. This framework will contain a Methodology that identifies healthcare business processes and develops a supporting IT strategy, and the Open Health Environment. The latter consists of an architecture and information standards that are 'open' and will be available to any organisation wishing to construct SHINE-conformant regional healthcare telematic services. Results are: generic models, e.g., regional healthcare business networks, IT strategies; demonstrables, e.g., pilot demonstrators, application and service prototypes; reports, e.g., SHINE Methodology, pilot specifications & evaluations; proposals, e.g., service/interface specifications, standards conformance.
[50 years of the methodology of weather forecasting for medicine].
Grigor'ev, K I; Povazhnaia, E L
2014-01-01
The materials reported in the present article illustrate the possibility of weather forecasting for medical purposes from a historical perspective. The main characteristics of the relevant organizational and methodological approaches to meteoprophylaxis based on standard medical forecasts are presented. The emphasis is laid on the priority of the domestic medical school in the development of the principles of diagnostics and treatment of meteosensitivity and meteotropic complications in patients presenting with various diseases, with special reference to their age-related characteristics.
Cryogenic insulation standard data and methodologies
NASA Astrophysics Data System (ADS)
Demko, J. A.; Fesmire, J. E.; Johnson, W. L.; Swanger, A. M.
2014-01-01
Although some standards exist for thermal insulation, few address the sub-ambient temperature range and cold-side temperatures below 100 K. Standards for cryogenic insulation systems require cryostat testing and data analysis that will allow the development of the tools needed by design engineers and thermal analysts for the design of practical cryogenic systems. Thus, this critically important information can provide reliable data and methodologies for industrial efficiency and energy conservation. Two task groups have been established in the area of cryogenic insulation systems under ASTM International's Committee C16 on Thermal Insulation. These are WK29609 - New Standard for Thermal Performance Testing of Cryogenic Insulation Systems and WK29608 - Standard Practice for Multilayer Insulation in Cryogenic Service. The Cryogenics Test Laboratory of NASA Kennedy Space Center and the Thermal Energy Laboratory of LeTourneau University are conducting an Inter-Laboratory Study (ILS) of selected insulation materials. Each lab carries out the measurements of thermal properties of these materials using identical flat-plate boil-off calorimeter instruments. Parallel testing will provide the comparisons necessary to validate the measurements and methodologies. Here we discuss test methods, some initial data in relation to the experimental approach, and the manner of reporting the thermal performance data. This initial study of insulation materials for sub-ambient temperature applications is aimed at paving the way for further ILS comparative efforts that will produce standard data sets for several commercial materials. Discrepancies found between measurements will be used to improve the testing and data reduction techniques being developed as part of the future ASTM International standards.
NASA Technical Reports Server (NTRS)
Hall, Edward; Magner, James
2011-01-01
This report is provided as part of ITT s NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: New ATM Requirements-Future Communications, C-Band and L-Band Communications Standard Development and was based on direction provided by FAA project-level agreements for New ATM Requirements-Future Communications. Task 7 included two subtasks. Subtask 7-1 addressed C-band (5091- to 5150-MHz) airport surface data communications standards development, systems engineering, test bed and prototype development, and tests and demonstrations to establish operational capability for the Aeronautical Mobile Airport Communications System (AeroMACS). Subtask 7-2 focused on systems engineering and development support of the L-band digital aeronautical communications system (L-DACS). Subtask 7-1 consisted of two phases. Phase I included development of AeroMACS concepts of use, requirements, architecture, and initial high-level safety risk assessment. Phase II builds on Phase I results and is presented in two volumes. Volume I is devoted to concepts of use, system requirements, and architecture, including AeroMACS design considerations. Volume II (this document) describes an AeroMACS prototype evaluation and presents final AeroMACS recommendations. This report also describes airport categorization and channelization methodologies. The purposes of the airport categorization task were (1) to facilitate initial AeroMACS architecture designs and enable budgetary projections by creating a set of airport categories based on common airport characteristics and design objectives, and (2) to offer high-level guidance to potential AeroMACS technology and policy development sponsors and service providers. A channelization plan methodology was developed because a common global methodology is needed to assure seamless interoperability among diverse AeroMACS services potentially supplied by multiple service providers.
NASA Technical Reports Server (NTRS)
Hall, Edward; Isaacs, James; Henriksen, Steve; Zelkin, Natalie
2011-01-01
This report is provided as part of ITT s NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: New ATM Requirements-Future Communications, C-Band and L-Band Communications Standard Development and was based on direction provided by FAA project-level agreements for New ATM Requirements-Future Communications. Task 7 included two subtasks. Subtask 7-1 addressed C-band (5091- to 5150-MHz) airport surface data communications standards development, systems engineering, test bed and prototype development, and tests and demonstrations to establish operational capability for the Aeronautical Mobile Airport Communications System (AeroMACS). Subtask 7-2 focused on systems engineering and development support of the L-band digital aeronautical communications system (L-DACS). Subtask 7-1 consisted of two phases. Phase I included development of AeroMACS concepts of use, requirements, architecture, and initial high-level safety risk assessment. Phase II builds on Phase I results and is presented in two volumes. Volume I (this document) is devoted to concepts of use, system requirements, and architecture, including AeroMACS design considerations. Volume II describes an AeroMACS prototype evaluation and presents final AeroMACS recommendations. This report also describes airport categorization and channelization methodologies. The purposes of the airport categorization task were (1) to facilitate initial AeroMACS architecture designs and enable budgetary projections by creating a set of airport categories based on common airport characteristics and design objectives, and (2) to offer high-level guidance to potential AeroMACS technology and policy development sponsors and service providers. A channelization plan methodology was developed because a common global methodology is needed to assure seamless interoperability among diverse AeroMACS services potentially supplied by multiple service providers.
Muysoms, F E; Deerenberg, E B; Peeters, E; Agresta, F; Berrevoet, F; Campanelli, G; Ceelen, W; Champault, G G; Corcione, F; Cuccurullo, D; DeBeaux, A C; Dietz, U A; Fitzgibbons, R J; Gillion, J F; Hilgers, R-D; Jeekel, J; Kyle-Leinhase, I; Köckerling, F; Mandala, V; Montgomery, A; Morales-Conde, S; Simmermacher, R K J; Schumpelick, V; Smietański, M; Walgenbach, M; Miserez, M
2013-08-01
The literature dealing with abdominal wall surgery is often flawed due to lack of adherence to accepted reporting standards and statistical methodology. The EuraHS Working Group (European Registry of Abdominal Wall Hernias) organised a consensus meeting of surgical experts and researchers with an interest in abdominal wall surgery, including a statistician, the editors of the journal Hernia and scientists experienced in meta-analysis. Detailed discussions took place to identify the basic ground rules necessary to improve the quality of research reports related to abdominal wall reconstruction. A list of recommendations was formulated, including more general issues of scientific methodology and statistical approach. Standards and statements are available, each depending on the type of study that is being reported: the CONSORT statement for randomised controlled trials, the TREND statement for non-randomised interventional studies, the STROBE statement for observational studies, the STARLITE statement for literature searches, the MOOSE statement for meta-analyses of observational studies and the PRISMA statement for systematic reviews and meta-analyses. A number of recommendations were made, including the use of previously published standard definitions and classifications relating to hernia variables and treatment; the use of the validated Clavien-Dindo classification to report complications in hernia surgery; and the use of "time-to-event analysis" to report data on "freedom of recurrence" rather than recurrence rates, because it is more sensitive and, unlike other reporting methods, accounts for patients lost to follow-up. A set of recommendations for reporting outcome results of abdominal wall surgery was formulated as guidance for researchers. It is anticipated that the use of these recommendations will increase the quality and meaning of abdominal wall surgery research.
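The recommendation to report "freedom of recurrence" with time-to-event analysis rather than a crude recurrence rate can be illustrated with a minimal Kaplan-Meier sketch. This is a standard estimator, not code from the paper, and the six-patient cohort below is entirely hypothetical:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up duration (e.g. months); events: True if recurrence
    was observed, False if the patient was censored (lost to follow-up).
    Returns (time, S(t)) pairs at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # count recurrences and total patients leaving the risk set at t
        d = sum(1 for tt, e in data if tt == t and e)
        removed = sum(1 for tt, e in data if tt == t)
        if d:
            survival *= (n_at_risk - d) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: 6 patients, 2 recurrences, others censored.
times = [3, 5, 8, 12, 12, 24]
events = [False, True, False, True, False, False]
print(kaplan_meier(times, events))  # ≈ [(5, 0.80), (12, 0.53)]
```

A crude recurrence rate here would be 2/6 ≈ 33%, while the Kaplan-Meier estimate of freedom of recurrence at 12 months is ≈53% (i.e. ≈47% recurrence), because the patients censored early no longer inflate the denominator — the sensitivity the recommendation refers to.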
Poor methodological quality and reporting standards of systematic reviews in burn care management.
Wasiak, Jason; Tyack, Zephanie; Ware, Robert; Goodwin, Nicholas; Faggion, Clovis M
2017-10-01
The methodological and reporting quality of burn-specific systematic reviews has not been established. The aim of this study was to evaluate the methodological quality of systematic reviews in burn care management. Computerised searches were performed in Ovid MEDLINE, Ovid EMBASE and The Cochrane Library through to February 2016 for systematic reviews relevant to burn care using medical subject and free-text terms such as 'burn', 'systematic review' or 'meta-analysis'. Additional studies were identified by hand-searching five discipline-specific journals. Two authors independently screened papers, extracted and evaluated methodological quality using the 11-item A Measurement Tool to Assess Systematic Reviews (AMSTAR) tool and reporting quality using the 27-item Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Characteristics of systematic reviews associated with methodological and reporting quality were identified. Descriptive statistics and linear regression identified features associated with improved methodological quality. A total of 60 systematic reviews met the inclusion criteria. Six of the 11 AMSTAR items reporting on 'a priori' design, duplicate study selection, grey literature, included/excluded studies, publication bias and conflict of interest were reported in less than 50% of the systematic reviews. Of the 27 items listed for PRISMA, 13 items reporting on introduction, methods, results and the discussion were addressed in less than 50% of systematic reviews. 
Multivariable analyses showed that systematic reviews with higher methodological or reporting quality incorporated a meta-analysis (AMSTAR regression coefficient 2.1; 95% CI: 1.1, 3.1; PRISMA regression coefficient 6.3; 95% CI: 3.8, 8.7), were published in the Cochrane Library (AMSTAR regression coefficient 2.9; 95% CI: 1.6, 4.2; PRISMA regression coefficient 6.1; 95% CI: 3.1, 9.2) and included a randomised controlled trial (AMSTAR regression coefficient 1.4; 95% CI: 0.4, 2.4; PRISMA regression coefficient 3.4; 95% CI: 0.9, 5.8). The methodological and reporting quality of systematic reviews in burn care requires further improvement, with stricter adherence by authors to the PRISMA checklist and AMSTAR tool. © 2016 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
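Results of the form "regression coefficient 2.1; 95% CI: 1.1, 3.1" pair a slope estimate with an interval around it. A minimal sketch of how such a pair arises, using ordinary least squares with a normal-approximation interval (real analyses would use the t-distribution and adjust for covariates) and hypothetical AMSTAR scores for reviews with/without a meta-analysis:

```python
def ols_slope_ci(x, y, z=1.96):
    """Simple-linear-regression slope with an approximate 95% CI.
    z = 1.96 is the normal-approximation critical value (an assumption;
    small samples call for the t-distribution instead)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    # residual variance with n - 2 degrees of freedom
    resid_var = sum((yi - (intercept + slope * xi)) ** 2
                    for xi, yi in zip(x, y)) / (n - 2)
    se = (resid_var / sxx) ** 0.5
    return slope, (slope - z * se, slope + z * se)

# Hypothetical data: x = 1 if the review included a meta-analysis,
# y = its AMSTAR score (0-11). Not taken from the study.
x = [0, 0, 0, 0, 1, 1, 1, 1]
y = [4, 5, 5, 6, 7, 8, 8, 9]
slope, ci = ols_slope_ci(x, y)
print(f"coefficient {slope:.1f}; 95% CI: {ci[0]:.1f}, {ci[1]:.1f}")
```

Because x is a 0/1 indicator, the slope is simply the difference in mean score between the two groups, which is how the coefficients above can be read: reviews with a meta-analysis scored about 2.1 AMSTAR points higher, all else equal.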
Claassens, Lily; van Meerbeeck, Jan; Coens, Corneel; Quinten, Chantal; Ghislain, Irina; Sloan, Elizabeth K.; Wang, Xin Shelly; Velikova, Galina; Bottomley, Andrew
2011-01-01
Purpose This study is an update of a systematic review of health-related quality-of-life (HRQOL) methodology reporting in non–small-cell lung cancer (NSCLC) randomized controlled trials (RCTs). The objective was to evaluate HRQOL methodology reporting over the last decade and its benefit for clinical decision making. Methods A MEDLINE systematic literature review was performed. Eligible RCTs implemented patient-reported HRQOL assessments and regular oncology treatments for newly diagnosed adult patients with NSCLC. Included studies were published in English from August 2002 to July 2010. Two independent reviewers evaluated all included RCTs. Results Fifty-three RCTs were assessed. Of the 53 RCTs, 81% reported that there was no significant difference in overall survival (OS). However, 50% of RCTs that were unable to find OS differences reported a significant difference in HRQOL scores. The quality of HRQOL reporting has improved; both reporting of clinically significant differences and statistical testing of HRQOL have improved. A European Organisation for Research and Treatment of Cancer HRQOL questionnaire was used in 57% of the studies. However, HRQOL hypotheses and rationales for choosing HRQOL instruments were reported significantly less often than before 2002 (P < .05). Conclusion The number of NSCLC RCTs incorporating HRQOL assessments has considerably increased. HRQOL continues to demonstrate its importance in RCTs, especially in those studies in which no OS difference is found. Despite the improved quality of HRQOL methodology reporting, certain aspects remain underrepresented. Our findings suggest the need for an international standardization of HRQOL reporting similar to the CONSORT guidelines for clinical findings. PMID:21464420
Do systematic reviews on pediatric topics need special methodological considerations?
Farid-Kapadia, Mufiza; Askie, Lisa; Hartling, Lisa; Contopoulos-Ioannidis, Despina; Bhutta, Zulfiqar A; Soll, Roger; Moher, David; Offringa, Martin
2017-03-06
Systematic reviews are key tools to enable decision making by healthcare providers and policymakers. Despite the availability of the evidence-based Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA-2009 and PRISMA-P 2015) statements that were developed to improve the transparency and quality of reporting of systematic reviews, uncertainty on how to deal with pediatric-specific methodological challenges of systematic reviews impairs decision-making in child health. In this paper, we identify methodological challenges specific to the design, conduct and reporting of pediatric systematic reviews, and propose a process to address these challenges. One fundamental decision at the outset of a systematic review is whether to focus on a pediatric population only, or to include both adult and pediatric populations. From both the policy and the patient care point of view, the appropriateness of interventions and comparators administered to pre-defined pediatric age subgroups is critical. Decisions need to be based on the biological plausibility of differences in treatment effects across the developmental trajectory in children. Synthesis of evidence from different trials is often impaired by the use of outcomes and measurement instruments that differ between trials and are neither relevant nor validated in the pediatric population. Other issues specific to pediatric systematic reviews include the lack of pediatric-sensitive search strategies and inconsistent choices of pediatric age subgroups in meta-analyses. In addition to these methodological issues generic to all pediatric systematic reviews, special considerations are required for reviews of health care interventions' safety and efficacy in neonatology, global health, comparative effectiveness interventions and individual participant data meta-analyses. To date, there is no standard approach available to overcome these problems.
We propose to develop a consensus-based checklist of essential items which researchers should consider when they are planning (PRISMA-PC-Protocol for Children) or reporting (PRISMA-C-reporting for Children) a pediatric systematic review. Available guidelines including PRISMA do not cover the complexity associated with the conduct and reporting of systematic reviews in the pediatric population; they require additional and modified standards for reporting items. Such guidance will facilitate the translation of knowledge from the literature to bedside care and policy, thereby enhancing delivery of care and improving child health outcomes.
Hernan, Amanda E; Schevon, Catherine A; Worrell, Gregory A; Galanopoulou, Aristea S; Kahane, Philippe; de Curtis, Marco; Ikeda, Akio; Quilichini, Pascale; Williamson, Adam; Garcia-Cairasco, Norberto; Scott, Rod C; Timofeev, Igor
2017-11-01
This paper is a result of work of the AES/ILAE Translational Task Force of the International League Against Epilepsy. The aim is to provide acceptable standards and interpretation of results of electrophysiological depth recordings in vivo in control rodents. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Specification of Energy Assessment Methodologies to Satisfy ISO 50001 Energy Management Standard
NASA Astrophysics Data System (ADS)
Kanneganti, Harish
Energy management has become crucial for the industrial sector as a way to lower production costs and reduce its carbon footprint. Environmental regulations also push the industrial sector to increase the efficiency of its energy usage. Hence, the industrial sector has come to rely on energy management consultancies for improvements in energy efficiency. With the development of the ISO 50001 standard, energy management took on a new dimension, involving top-level management and securing its commitment to energy efficiency. One of the key requirements of ISO 50001 is to demonstrate continual improvement in a facility's energy efficiency. The major aim of this work is to develop an energy assessment methodology and reporting format tailored to the needs of ISO 50001. The developed methodology integrates the energy-reduction aspect of an energy assessment with the requirements of sections 4.4.3 (Energy Review) to 4.4.6 (Objectives, Targets and Action Plans) of ISO 50001, thereby helping facilities implement ISO 50001 more easily.
Hill, Anne E; Davidson, Bronwyn J; Theodoros, Deborah G
2010-06-01
The use of standardized patients has been reported as a viable addition to traditional models of professional practice education in medicine, nursing and allied health programs. Educational programs rely on the inclusion of work-integrated learning components in order to graduate competent practitioners. Allied health programs world-wide have reported increasing difficulty in attaining sufficient traditional placements for students within the workplace. In response to this, allied health professionals are challenged to be innovative and problem-solving in the development and maintenance of clinical education placements and to consider potential alternative learning opportunities for students. Whilst there is a bank of literature describing the use of standardized patients in medicine and nursing, reports of its use in speech-language pathology clinical education are limited. Therefore, this paper aims to (1) provide a review of literature reporting on the use of standardized patients within medical and allied health professions with particular reference to use in speech-language pathology, (2) discuss methodological and practical issues involved in establishing and maintaining a standardized patient program and (3) identify future directions for research and clinical programs using standardized patients to build foundation clinical skills such as communication, interpersonal interaction and interviewing.
Motraghi, Terri E; Seim, Richard W; Meyer, Eric C; Morissette, Sandra B
2014-03-01
Virtual reality exposure therapy (VRET) is an extension of traditional exposure therapy and has been used to treat a variety of anxiety disorders. VRET utilizes a computer-generated virtual environment to present fear-relevant stimuli. Recent studies have evaluated the use of VRET for treatment of PTSD; however, a systematic evaluation of the methodological quality of these studies has yet to be conducted. This review aims to (a) identify treatment outcome studies examining the use of VRET for the treatment of PTSD and (b) appraise the methodological quality of each study using the 2010 Consolidated Standards of Reporting Trials (CONSORT) Statement and its 2008 extension for nonpharmacologic interventions. Two independent assessors conducted a database search (PsycINFO, Medline, CINAHL, Google Scholar) of studies published between January 1990 and June 2013 that reported outcome data comparing VRET with another type of treatment or a control condition. Next, a CONSORT quality appraisal of each study was completed. The search yielded nine unique studies. The CONSORT appraisal revealed that the methodological quality of studies examining VRET as a treatment for PTSD was variable. Although preliminary findings suggest some positive results for VRET as a form of exposure treatment for PTSD, additional research using well-specified randomization procedures, assessor blinding, and monitoring of treatment adherence is warranted. Movement toward greater standardization of treatment manuals, virtual environments, and equipment would further facilitate interpretation and consolidation of this literature. © 2013 Wiley Periodicals, Inc.
RAMESES publication standards: meta-narrative reviews
2013-01-01
Background Meta-narrative review is one of an emerging menu of new approaches to qualitative and mixed-method systematic review. A meta-narrative review seeks to illuminate a heterogeneous topic area by highlighting the contrasting and complementary ways in which researchers have studied the same or a similar topic. No previous publication standards exist for the reporting of meta-narrative reviews. This publication standard was developed as part of the RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) project. The project's aim is to produce preliminary publication standards for meta-narrative reviews. Methods We (a) collated and summarized existing literature on the principles of good practice in meta-narrative reviews; (b) considered the extent to which these principles had been followed by published reviews, thereby identifying how rigor may be lost and how existing methods could be improved; (c) used a three-round online Delphi method with an interdisciplinary panel of national and international experts in evidence synthesis, meta-narrative reviews, policy and/or publishing to produce and iteratively refine a draft set of methodological steps and publication standards; (d) provided real-time support to ongoing meta-narrative reviews and the open-access RAMESES online discussion list so as to capture problems and questions as they arose; and (e) synthesized expert input, evidence review and real-time problem analysis into a definitive set of standards. Results We identified nine published meta-narrative reviews, provided real-time support to four ongoing reviews and captured questions raised in the RAMESES discussion list. Through analysis and discussion within the project team, we summarized the published literature, and common questions and challenges into briefing materials for the Delphi panel, comprising 33 members. 
Within three rounds this panel had reached consensus on 20 key publication standards, with an overall response rate of 90%. Conclusion This project used multiple sources to draw together evidence and expertise in meta-narrative reviews. For each item we have included an explanation for why it is important and guidance on how it might be reported. Meta-narrative review is a relatively new method for evidence synthesis and as experience and methodological developments occur, we anticipate that these standards will evolve to reflect further theoretical and methodological developments. We hope that these standards will act as a resource that will contribute to improving the reporting of meta-narrative reviews. To encourage dissemination of the RAMESES publication standards, this article is co-published in the Journal of Advanced Nursing and is freely accessible on Wiley Online Library (http://www.wileyonlinelibrary.com/journal/jan). Please see related articles http://www.biomedcentral.com/1741-7015/11/21 and http://www.biomedcentral.com/1741-7015/11/22 PMID:23360661
Annual Research Progress Report.
1979-09-30
will be trained in SLRL test procedures and the methodology will be developed for the incorporation of test materials into the standard rearing diet ... requirements exist for system software maintenance and development of software to report dosing data, to calculate diet preparation data, to manage collected ... influence of diet and exercise on myoglobin and metmyoglobin reductase were evaluated in the rat. The activity of metmyoglobin reductase was
Reporting standards for studies of diagnostic test accuracy in dementia
Noel-Storr, Anna H.; McCleery, Jenny M.; Richard, Edo; Ritchie, Craig W.; Flicker, Leon; Cullum, Sarah J.; Davis, Daniel; Quinn, Terence J.; Hyde, Chris; Rutjes, Anne W.S.; Smailagic, Nadja; Marcus, Sue; Black, Sandra; Blennow, Kaj; Brayne, Carol; Fiorivanti, Mario; Johnson, Julene K.; Köpke, Sascha; Schneider, Lon S.; Simmons, Andrew; Mattsson, Niklas; Zetterberg, Henrik; Bossuyt, Patrick M.M.; Wilcock, Gordon
2014-01-01
Objective: To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. Methods: An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. Results: More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. Conclusion: STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers. PMID:24944261
Isolating Science from the Humanities: The Third Dogma of Educational Research
ERIC Educational Resources Information Center
Howe, Kenneth R.
2009-01-01
The demand for scientifically-based educational research has fostered a new methodological orthodoxy exemplified by documents such as the National Research Council's "Scientific Research in Education" and "Advancing Scientific Research in Education" and American Educational Research Association's "Standards for Reporting on Empirical Social…
Infantile colic: a systematic review of medical and conventional therapies.
Hall, Belinda; Chesters, Janice; Robinson, Anske
2012-02-01
Infantile colic is a prevalent and distressing condition for which there is no proven standard therapy. The aim of this paper is to review medical and conventional treatments for infantile colic. A systematic literature review was undertaken of studies on medical and conventional interventions for infantile colic from 1980 to March 2009. The results and methodological rigour of included studies were analysed using the CONSORT (Consolidated Standards Of Reporting Trials) 2001 statement checklist and Centre for Evidence Based Medicine critical appraisal tools. Nineteen studies and two literature reviews were included for review. Pharmacological studies of simethicone gave conflicting results; with dicyclomine hydrochloride and cimetropium bromide, results were favourable, but side effects were noted along with issues in study methodology. Some nutritional studies reported favourable results for the use of hydrolysed formulas in bottle-fed infants or low-allergen maternal diets in breastfed infants, but not for the use of additional fibre or lactase. There were several issues in regard to methodological rigour. Behavioural studies on the use of increased stimulation gave unfavourable results, whereas results from the use of decreased stimulation and contingent music were favourable. These studies demonstrated poor methodological rigour. There is some scientific evidence to support the use of a casein hydrolysate formula in formula-fed infants or a low-allergen maternal diet in breastfed infants with infantile colic. However, there is little scientific evidence to support the use of simethicone, dicyclomine hydrochloride, cimetropium bromide, lactase, additional fibre or behavioural interventions. Further research of good methodological quality on low-allergenic formulas and maternal diets is indicated. © 2011 The Authors. Journal of Paediatrics and Child Health © 2011 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
Critical Thinking, Army Design Methodology, And The Climate Change Policy Debate
2016-05-26
Staff College, Fort Leavenworth, Kansas, 2016.
Sauer, Vernon B.
2002-01-01
Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U.S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system easily can monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods are interactive processes between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs.
Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.
Quintana, D S; Alvares, G A; Heathers, J A J
2016-01-01
The number of publications investigating heart rate variability (HRV) in psychiatry and the behavioral sciences has increased markedly in the last decade. In addition to the significant debates surrounding ideal methods to collect and interpret measures of HRV, standardized reporting of methodology in this field is lacking. Commonly cited recommendations were designed well before recent calls to improve research communication and reproducibility across disciplines. In an effort to standardize reporting, we propose the Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH), a checklist with four domains: participant selection, interbeat interval collection, data preparation and HRV calculation. This paper provides an overview of these four domains and why their standardized reporting is necessary to suitably evaluate HRV research in psychiatry and related disciplines. Adherence to these communication guidelines will help expedite the translation of HRV research into a potential psychiatric biomarker by improving interpretation, reproducibility and future meta-analyses. PMID:27163204
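Of the four GRAPH domains named above, "HRV calculation" is the most mechanical. As a purely illustrative sketch (the specific measure and data are not from the paper), one of the most commonly reported time-domain HRV measures, RMSSD, can be computed from a series of interbeat intervals like this:

```python
def rmssd(ibis_ms):
    """Root mean square of successive differences between interbeat
    intervals (in milliseconds) - a common time-domain HRV measure.
    Assumes the intervals are already artifact-free; data preparation
    (one of the GRAPH domains) must happen before this step."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical interbeat intervals in ms:
print(rmssd([800, 810, 790, 805, 795]))  # ≈ 14.36 ms
```

Standardized reporting matters precisely because choices upstream of this calculation (recording length, artifact correction, whether intervals were resampled) change the resulting value and are often left unreported.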
International classification of reliability for implanted cochlear implant receiver stimulators.
Battmer, Rolf-Dieter; Backous, Douglas D; Balkany, Thomas J; Briggs, Robert J S; Gantz, Bruce J; van Hasselt, Andrew; Kim, Chong Sun; Kubo, Takeshi; Lenarz, Thomas; Pillsbury, Harold C; O'Donoghue, Gerard M
2010-10-01
To design an international standard to be used when reporting reliability of the implanted components of cochlear implant systems to appropriate governmental authorities and cochlear implant (CI) centers, and for journal editors in evaluating manuscripts involving cochlear implant reliability. The International Consensus Group for Cochlear Implant Reliability Reporting was assembled to unify ongoing efforts in the United States, Europe, Asia, and Australia to create a consistent and comprehensive classification system for the implanted components of CI systems across manufacturers. All members of the consensus group are from tertiary referral cochlear implant centers. None. A clinically relevant classification scheme adapted from principles of ISO standard 5841-2:2000, originally designed for reporting reliability of cardiac pacemakers, pulse generators, or leads. Standard definitions for device failure, survival time, clinical benefit, reduced clinical benefit, and specification were generated. Time intervals for reporting back to implant centers on devices tested to be "out of specification," categorization of explanted devices, the method of cumulative survival reporting, and the content of reliability reports to be issued by manufacturers were agreed upon by all members. The methodology for calculating cumulative survival was adapted from ISO standard 5841-2:2000. The International Consensus Group on Cochlear Implant Device Reliability Reporting recommends compliance with this new standard in reporting reliability of implanted CI components by all manufacturers of CIs, and the adoption of this standard as a minimal reporting guideline for editors of journals publishing cochlear implant research results.
Pan, Xin; Lopez-Olivo, Maria A; Song, Juhee; Pratt, Gregory; Suarez-Almazor, Maria E
2017-01-01
Objectives We appraised the methodological and reporting quality of randomised controlled clinical trials (RCTs) evaluating the efficacy and safety of Chinese herbal medicine (CHM) in patients with rheumatoid arthritis (RA). Design For this systematic review, electronic databases were searched from inception until June 2015. The search was limited to humans and non-case report studies, but was not limited by language, year of publication or type of publication. Two independent reviewers selected RCTs, evaluating CHM in RA (herbals and decoctions). Descriptive statistics were used to report on risk of bias and their adherence to reporting standards. Multivariable logistic regression analysis was performed to determine study characteristics associated with high or unclear risk of bias. Results Out of 2342 unique citations, we selected 119 RCTs including 18 919 patients: 10 108 patients received CHM alone and 6550 received one of 11 treatment combinations. A high risk of bias was observed across all domains: 21% had a high risk for selection bias (11% from sequence generation and 30% from allocation concealment), 85% for performance bias, 89% for detection bias, 4% for attrition bias and 40% for reporting bias. In multivariable analysis, fewer authors were associated with selection bias (allocation concealment), performance bias and attrition bias, and earlier year of publication and funding source not reported or disclosed were associated with selection bias (sequence generation). Studies published in non-English language were associated with reporting bias. Poor adherence to recommended reporting standards (<60% of the studies not providing sufficient information) was observed in 11 of the 23 sections evaluated. Limitations Study quality and data extraction were performed by one reviewer and cross-checked by a second reviewer. Translation to English was performed by one reviewer in 85% of the included studies. 
Conclusions Studies evaluating CHM often fail to meet expected methodological criteria, and high-quality evidence is lacking. PMID:28249848
Zhang, J; Chen, X; Zhu, Q; Cui, J; Cao, L; Su, J
2016-11-01
In recent years, the number of randomized controlled trials (RCTs) in the field of orthopaedics is increasing in Mainland China. However, randomized controlled trials (RCTs) are inclined to bias if they lack methodological quality. Therefore, we performed a survey of RCT to assess: (1) What about the quality of RCTs in the field of orthopedics in Mainland China? (2) Whether there is difference between the core journals of the Chinese department of orthopedics and Orthopaedics Traumatology Surgery & Research (OTSR). This research aimed to evaluate the methodological reporting quality according to the CONSORT statement of randomized controlled trials (RCTs) in seven key orthopaedic journals published in Mainland China over 5 years from 2010 to 2014. All of the articles were hand researched on Chongqing VIP database between 2010 and 2014. Studies were considered eligible if the words "random", "randomly", "randomization", "randomized" were employed to describe the allocation way. Trials including animals, cadavers, trials published as abstracts and case report, trials dealing with subgroups analysis, or trials without the outcomes were excluded. In addition, eight articles selected from Orthopaedics Traumatology Surgery & Research (OTSR) between 2010 and 2014 were included in this study for comparison. The identified RCTs are analyzed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT), including the sample size calculation, allocation sequence generation, allocation concealment, blinding and handling of dropouts. A total of 222 RCTs were identified in seven core orthopaedic journals. No trials reported adequate sample size calculation, 74 (33.4%) reported adequate allocation generation, 8 (3.7%) trials reported adequate allocation concealment, 18 (8.1%) trials reported adequate blinding and 16 (7.2%) trials reported handling of dropouts. 
In OTSR, 1 trial (12.5%) reported an adequate sample size calculation, 4 (50.0%) reported adequate allocation generation, 1 (12.5%) reported adequate allocation concealment, 2 (25.0%) reported adequate blinding and 5 (62.5%) reported handling of dropouts. There were statistically significant differences in sample size calculation and handling of dropouts between papers from Mainland China and OTSR (P<0.05). The findings of this study show that the methodological reporting quality of RCTs in the seven core orthopaedic journals from Mainland China is far from satisfactory and needs further improvement to meet the standards of the CONSORT statement. Level III, case-control study. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
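The between-journal comparison above (P<0.05 for sample size calculation and handling of dropouts) can be reproduced with a small-sample test. Given only 8 OTSR trials, Fisher's exact test is a natural choice; note the abstract does not state which test the authors actually used, so this is an illustrative sketch using only the standard library, applied to the dropout counts:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables (with the same margins) that are
    no more likely than the observed one.
    """
    n = a + b + c + d
    row1 = a + b          # e.g. trials from Mainland China journals
    col1 = a + c          # e.g. trials reporting handling of dropouts
    denom = comb(n, col1)

    def prob(x):          # hypergeometric probability of cell (1,1) == x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Handling of dropouts: 16 of 222 Mainland trials vs 5 of 8 OTSR trials
p = fisher_exact_two_sided(16, 222 - 16, 5, 8 - 5)
print(f"p = {p:.4g}")  # well below 0.05, consistent with the reported difference
```

The same function applied to the sample-size-calculation counts (0/222 vs 1/8) gives the other reported contrast.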
Magalhães Junior, Hipólito V; Pernambuco, Leandro de Araújo; Lima, Kenio C; Ferreira, Maria Angela F
2018-04-03
Oropharyngeal dysphagia is a swallowing disorder whose signs and symptoms may be present in older adults but are rarely noticed as a health concern by older people. The earliest possible identification of this clinical condition requires self-reported, population-based screening questionnaires that are valid and reliable, in order to prevent risks to nutritional status and increased morbidity and mortality. The aim of this systematic review was to identify self-reported screening questionnaires for oropharyngeal dysphagia in older adults and to evaluate their methodological quality for population-based studies. An extensive search of electronic databases (PubMed (MEDLINE), Ovid MEDLINE(R), Scopus, Cochrane Library, CINAHL, Web of Science (WOS), PsycINFO (APA), Lilacs and SciELO) was conducted from April to May 2017, using search strategies previously established by the two evaluators. The methodological quality and the psychometric properties of the included studies were evaluated with the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) checklist and the quality criteria of Terwee and colleagues, respectively. The analysed information was extracted from three articles reporting studies on the prevalence of oropharyngeal dysphagia using self-reported screening questionnaires; these showed poor methodological quality and flaws in the methodological descriptions supporting their psychometric properties. This study did not find any self-reported screening questionnaire for oropharyngeal dysphagia with suitable methodological quality and appropriate evidence for its psychometric properties in older adults. Self-reported questionnaires intended for diagnostic purposes therefore require greater detail in their development in order to provide valid and reliable evidence. © 2018 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
Protocol - realist and meta-narrative evidence synthesis: Evolving Standards (RAMESES)
2011-01-01
Background There is growing interest in theory-driven, qualitative and mixed-method approaches to systematic review as an alternative to (or to extend and supplement) conventional Cochrane-style reviews. These approaches offer the potential to expand the knowledge base in policy-relevant areas, for example by explaining the success, failure or mixed fortunes of complex interventions. However, the quality of such reviews can be difficult to assess. This study aims to produce methodological guidance, publication standards and training resources for those seeking to use the realist and/or meta-narrative approach to systematic review. Methods/design We will: [a] collate and summarise existing literature on the principles of good practice in realist and meta-narrative systematic review; [b] consider the extent to which these principles have been followed by published and in-progress reviews, thereby identifying how rigour may be lost and how existing methods could be improved; [c] using an online Delphi method with an interdisciplinary panel of experts from academia and policy, produce a draft set of methodological steps and publication standards; [d] produce training materials with learning outcomes linked to these steps; [e] pilot these standards and training materials prospectively on real reviews-in-progress, capturing methodological and other challenges as they arise; [f] synthesise expert input, evidence review and real-time problem analysis into more definitive guidance and standards; [g] disseminate outputs to audiences in academia and policy. The outputs of the study will be threefold: 1. Quality standards and methodological guidance for realist and meta-narrative reviews for use by researchers, research sponsors, students and supervisors 2. A 'RAMESES' (Realist And Meta-narrative Evidence Syntheses: Evolving Standards) statement (comparable to CONSORT or PRISMA) of publication standards for such reviews, published in an open-access academic journal. 3.
A training module for researchers, including learning outcomes, outline course materials and assessment criteria. Discussion Realist and meta-narrative reviews are relatively new approaches to systematic review whose overall place in the secondary research toolkit is not yet fully established. As with all secondary research methods, guidance on quality assurance and uniform reporting is an important step towards improving the quality and consistency of studies. PMID:21843376
Ndounga Diakou, Lee Aymar; Ntoumi, Francine; Ravaud, Philippe; Boutron, Isabelle
2017-07-05
Randomized controlled trials (RCTs) are needed to improve health care in Sub-Saharan Africa (SSA). However, inadequate methods and incomplete reporting of interventions can prevent the translation of research into practice, which leads to research waste. The aim of this systematic review was to assess the avoidable waste in research related to inadequate methods and incomplete reporting of interventions in RCTs performed in SSA. We performed a methodological systematic review of RCTs performed in SSA and published between 1 January 2014 and 31 March 2015. We searched PubMed, the Cochrane Library and the African Index Medicus to identify reports. We assessed the risk of bias using the Cochrane Risk of Bias tool and, for each risk-of-bias item, determined whether easy adjustments with no or minor cost could change the domain to low risk of bias. The reporting of interventions was assessed using standardized checklists based on the Consolidated Standards of Reporting Trials and the core items of the Template for Intervention Description and Replication. Corresponding authors of reports with incomplete reporting of interventions were contacted to obtain additional information. Data were analyzed descriptively. Among the 121 RCTs selected, 74 (61%) evaluated pharmacological treatments (PTs), including drugs and nutritional supplements, and 47 (39%) nonpharmacological treatments (NPTs) (40 participative interventions, 1 surgical procedure, 3 medical devices and 3 therapeutic strategies). Overall, the randomization sequence was adequately generated in 76 reports (62%) and the intervention allocation concealed in 48 (39%). The primary outcome was described as blinded in 46 reports (38%), and incomplete outcome data were adequately addressed in 78 (64%). Applying easy methodological adjustments with no or minor additional cost to trials with at least one domain at high risk of bias could have reduced the number of domains at high risk for 24 RCTs (19%).
Interventions were completely reported for 73/121 (60%) RCTs: 51/74 (68%) of PTs and 22/47 (46%) of NPTs. Additional information was obtained from corresponding authors for 11/48 reports (22%). Inadequate methods and incomplete reporting of published SSA RCTs could be improved by easy and inexpensive methodological adjustments and adherence to reporting guidelines.
Sargeant, Jan M; Saint-Onge, Jacqueline; Valcour, James; Thompson, Adam; Elgie, Robyn; Snedeker, Kate; Marcynuk, Pasha
2009-10-01
Randomized controlled trials (RCTs) are the gold standard for evaluating treatment efficacy. Therefore, it is important that RCTs are conducted with methodological rigor to prevent biased results and report results in a manner that allows the reader to evaluate internal and external validity. Most human health journals now require manuscripts to meet the Consolidated Standards of Reporting Trials (CONSORT) criteria for reporting of RCTs. Our objective was to evaluate preharvest food safety trials using a modification of the CONSORT criteria to assess methodological quality and completeness of reporting, and to investigate associations between reporting and treatment effects. One hundred randomly selected trials were evaluated using a modified CONSORT statement. The majority of the selected trials (84%) used a deliberate disease challenge, with the remainder representing natural pathogen exposure. There were widespread deficiencies in the reporting of many trial features. Randomization, double blinding, and the number of subjects lost to follow-up were reported in only 46%, 0%, and 43% of trials, respectively. The inclusion criteria for study subjects were only described in 16% of trials, and the number of animals housed together was only stated in 52% of the trials. Although 91 trials had more than one outcome, no trials specified the primary outcome of interest. There were significant bivariable associations between the proportion of positive treatment effects and failure to report the number of subjects lost to follow-up, the number of animals housed together in a group, the level of treatment allocation, and possible study limitations. The results suggest that there are substantive deficiencies in reporting of preharvest food safety trials, and that these deficiencies may be associated with biased treatment effects. 
The creation and adoption of standards for reporting in preharvest food safety trials will help to ensure the inclusion of important trial details in all publications.
Cryogenic Insulation Standard Data and Methodologies Project
NASA Technical Reports Server (NTRS)
Summerfield, Burton; Thompson, Karen; Zeitlin, Nancy; Mullenix, Pamela; Fesmire, James; Swanger, Adam
2015-01-01
Extending recent developments in the area of technical consensus standards for cryogenic thermal insulation systems, a preliminary inter-laboratory study of foam insulation materials was performed by NASA Kennedy Space Center and LeTourneau University (LETU). The initial focus was ambient-pressure cryogenic boiloff testing using the Cryostat-400 flat-plate instrument. Completion of a test facility at LETU has enabled direct, comparative testing, using identical cryostat instruments and methods, and the production of standard thermal data sets for a number of materials under sub-ambient conditions. The two sets of measurements were analyzed and indicate reasonable agreement between the two laboratories. Based on cryogenic boiloff calorimetry, new equipment and methods for testing thermal insulation systems have been successfully developed. These boiloff instruments (or cryostats) include both flat-plate and cylindrical models and are applicable to a wide range of different materials under a wide range of test conditions. Test measurements are generally made at a large temperature difference (boundary temperatures of 293 K and 78 K are typical) and include the full vacuum pressure range. Results are generally reported as effective thermal conductivity (ke) and mean heat flux (q) through the insulation system. The new cryostat instruments provide an effective and reliable way to characterize the thermal performance of materials under sub-ambient conditions. Proven through thousands of tests of hundreds of material systems, they have supported a wide range of aerospace, industry, and research projects. Boiloff testing technology is not just for cryogenic testing; it is a cost-effective, field-representative methodology to test any material or system for applications at sub-ambient temperatures.
This technology, when adequately coupled with a technical standards basis, can provide a cost-effective, field-representative methodology to test any material or system for applications at sub-ambient to cryogenic temperatures. A growing need for energy efficiency and cryogenic applications is creating a worldwide demand for improved thermal insulation systems for low temperatures. The need for thermal characterization of these systems and materials raises a corresponding need for insulation test standards and thermal data targeted for cryogenic-vacuum applications. Such standards have a strong correlation to energy, transportation, and environment and the advancement of new materials technologies in these areas. In conjunction with this project, two new standards on cryogenic insulation were recently published by ASTM International: C1774 and C740. Following the requirements of NPR 7120.10, Technical Standards for NASA Programs and Projects, the appropriate information in this report can be provided to the NASA Chief Engineer as input for NASA's annual report to NIST, as required by OMB Circular No. A-119, describing NASA's use of voluntary consensus standards and participation in the development of voluntary consensus standards and bodies.
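The reported quantities ke and q follow from straightforward boiloff-calorimetry arithmetic: all measured boiloff is attributed to heat leak through the specimen, and Fourier's law for a flat plate converts the resulting flux into an effective conductivity. A minimal sketch of that arithmetic, assuming steady state and using illustrative numbers (not actual Cryostat-400 data); the LN2 latent heat is an approximate handbook value:

```python
# Flat-plate boiloff calorimetry arithmetic (illustrative sketch only).
H_FG_LN2 = 1.99e5   # latent heat of vaporization of LN2, J/kg (approximate)

def effective_thermal_conductivity(mdot_kg_s, area_m2, thickness_m,
                                   t_warm_k=293.0, t_cold_k=78.0):
    """Heat flux q and effective conductivity ke from a steady boiloff rate.

    q  = mdot * h_fg / A            (all boiloff attributed to heat leak)
    ke = q * thickness / (Tw - Tc)  (Fourier's law for a flat plate)
    """
    q = mdot_kg_s * H_FG_LN2 / area_m2
    ke = q * thickness_m / (t_warm_k - t_cold_k)
    return q, ke

# Hypothetical foam specimen: 0.1 g/s boiloff, 0.1 m^2 plate, 1-inch thickness
q, ke = effective_thermal_conductivity(mdot_kg_s=1.0e-4, area_m2=0.1,
                                       thickness_m=0.0254)
print(f"q = {q:.0f} W/m^2, ke = {ke*1000:.1f} mW/m-K")
```

With the default 293 K / 78 K boundaries this yields a ke in the tens of mW/m-K, the range typical of foams at ambient pressure.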
Ezenwa, Miriam O; Suarez, Marie L; Carrasco, Jesus D; Hipp, Theresa; Gill, Anayza; Miller, Jacob; Shea, Robert; Shuey, David; Zhao, Zhongsheng; Angulo, Veronica; McCurry, Timothy; Martin, Joanna; Yao, Yingwei; Molokie, Robert E; Wang, Zaijie Jim; Wilkie, Diana J
2017-07-01
The purpose of this article is to describe how we adhere to the Patient-Centered Outcomes Research Institute's (PCORI) methodology standards relevant to the design and implementation of our PCORI-funded study, the PAIN RelieveIt Trial. We present details of the PAIN RelieveIt Trial organized by the PCORI methodology standards and components that are relevant to our study. The PAIN RelieveIt Trial adheres to four PCORI standards and 21 subsumed components. The four standards include standards for formulating research questions, standards associated with patient-centeredness, standards for data integrity and rigorous analyses, and standards for preventing and handling missing data. In the past 24 months, we screened 2,837 cancer patients and their caregivers; 874 dyads were eligible; 223.5 dyads consented and provided baseline data. Only 55 patients were lost to follow-up, a 25% attrition rate. The design and implementation of the PAIN RelieveIt Trial adhered to PCORI's methodology standards for research rigor.
42 CFR 495.110 - Preclusion on administrative and judicial review.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., hospital charges, charity charges, and Medicare share; and (ii) The period used to determine such estimate... EP is hospital-based; and (6) The specification of the EHR reporting period, as well as whether... eligible hospitals— (1) The methodology and standards for determining the incentive payment amounts made to...
The Role of Politics and Governance in Educational Accountability Systems
ERIC Educational Resources Information Center
Brewer, Dominic J.; Killeen, Kieran M.; Welsh, Richard O.
2013-01-01
This brief utilizes case study methodology to illustrate the role of governance in educational accountability systems. Most research on the effectiveness of such systems has focused on technical components, such as standards-setting, assessments, rewards and sanctions, and data collection and reporting. This brief seeks to demonstrate that this…
Methodological Approaches to Online Scoring of Essays.
ERIC Educational Resources Information Center
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr.
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many of the standardized testing programs. Issues of validity and reliability must be addressed to deploy automated approaches to scoring fully. Two approaches that have been used to classify documents, surface- and word-based…
42 CFR 457.431 - Actuarial report for benchmark-equivalent coverage.
Code of Federal Regulations, 2010 CFR
2010-10-01
...— (1) By an individual who is a member of the American Academy of Actuaries; (2) Using generally accepted actuarial principles and methodologies of the American Academy of Actuaries; (3) Using a... coverage. (c) The actuary who prepares the opinion must select and specify the standardized set and...
Report to the President of the United States on Sexual Assault Prevention and Response
2014-11-01
established research history based on laboratory-tested principles of memory retrieval, knowledge representation, and communication. AFOSI has been using CI...analysis methods, including scientific research, data analysis, focus groups, and on-site assessments to evaluate the Department's SAPR program...DMDC's focus group methodology employs a standard qualitative research approach to
Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation
2016-05-01
identifying and mapping flaw size distributions on glass surfaces for predicting mechanical response. International Journal of Applied Glass ... ARL-TN-0756 ● MAY 2016 ● US Army Research Laboratory ... by Clayton M Weiss, Oak Ridge Institute for Science and Education
Djulbegovic, Benjamin; Cantor, Alan; Clarke, Mike
2003-01-01
Previous research has identified methodological problems in the design and conduct of randomized trials that could, if left unaddressed, lead to biased results. In this report we discuss one such problem, inadequate control intervention, and argue that it can be by far the most important design characteristic of randomized trials in overestimating the effect of new treatments. Current guidelines for the design and reporting of randomized trials, such as the Consolidated Standards of Reporting Trials (CONSORT) statement, do not address the choice of the comparator intervention. We argue that an adequate control intervention can be selected if people designing a trial explicitly take into consideration the ethical principle of equipoise, also known as "the uncertainty principle."
Traumatic brain injury: methodological approaches to estimate health and economic outcomes.
Lu, Juan; Roe, Cecilie; Aas, Eline; Lapane, Kate L; Niemeier, Janet; Arango-Lasprilla, Juan Carlos; Andelic, Nada
2013-12-01
The effort to standardize the methodology of, and adherence to recommended principles for, all economic evaluations has been emphasized in the medical literature. The objective of this review is to examine whether economic evaluations in traumatic brain injury (TBI) research have been compliant with existing guidelines. A Medline search was performed for the period between January 1, 1995 and August 11, 2012. All original TBI-related full economic evaluations were included in the study. Two authors independently rated each study's methodology and data presentation to determine compliance with the 10 methodological principles recommended by Blackmore et al. Descriptive analysis was used to summarize the data. Inter-rater reliability was assessed with the kappa statistic. A total of 28 studies met the inclusion criteria. Eighteen of these studies described cost-effectiveness, seven cost-benefit, and three cost-utility analyses. The results showed a rapid growth in the number of published articles on the economic impact of TBI since 2000 and an improvement in their methodological quality. However, overall compliance with recommended methodological principles of TBI-related economic evaluation has been deficient. On average, about six of the 10 criteria were followed in these publications, and only two articles met all 10 criteria. These findings call for an increased awareness of the methodological standards that should be followed by investigators, both in the performance of economic evaluations and in reviews of evaluation reports prior to publication. The results also suggest that all economic evaluations should be performed by following the guidelines within a conceptual framework, in order to facilitate evidence-based practice in the field of TBI.
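Inter-rater reliability of the kind assessed above is conventionally quantified with Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. A minimal sketch with hypothetical compliance ratings (the review's actual rating data are not reproduced here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items:
    kappa = (po - pe) / (1 - pe), where po is observed agreement and pe is
    chance agreement from the raters' marginal frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical example: two reviewers rating 10 studies as
# compliant (1) or non-compliant (0) with a given principle
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 3))  # 0.524: moderate agreement
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.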
Reef Fish Survey Techniques: Assessing the Potential for Standardizing Methodologies.
Caldwell, Zachary R; Zgliczynski, Brian J; Williams, Gareth J; Sandin, Stuart A
2016-01-01
Dramatic changes in populations of fishes living on coral reefs have been documented globally and, in response, the research community has initiated efforts to assess and monitor reef fish assemblages. A variety of visual census techniques are employed; however, results are often incomparable due to differential methodological performance. Although comparability of data may promote improved assessment of fish populations, and thus management of often critically important nearshore fisheries, to date no standardized and agreed-upon survey method has emerged. This study describes the use of methods across the research community and identifies potential drivers of method selection. An online survey was distributed to researchers from academic, governmental, and non-governmental organizations internationally. Although many methods were identified, 89% of survey-based projects employed one of three methods: the belt transect, the stationary point count, or some variation of the timed swim method. The selection of survey method was independent of the research design (i.e., assessment goal) and region of study, but was related to the researcher's home institution. While some researchers expressed willingness to modify their current survey protocols to more standardized protocols (76%), their willingness decreased when methodologies were tied to long-term datasets spanning five or more years. Willingness to modify current methodologies was also less common among academic researchers than resource managers. By understanding both the current application of methods and the reported motivations for method selection, we hope to focus discussions towards increasing the comparability of quantitative reef fish survey data.
Methodology of splicing large air filling factor suspended core photonic crystal fibres
NASA Astrophysics Data System (ADS)
Jaroszewicz, L. R.; Murawski, M.; Nasilowski, T.; Stasiewicz, K.; Marć, P.; Szymański, M.; Mergo, P.
2011-06-01
We report a methodology for effective low-loss fusion splicing of a photonic crystal fibre (PCF) to itself as well as to a standard single-mode fibre (SMF). Distinct from other papers in this area, we report results for splicing a suspended-core (SC) PCF, which has a tiny core and a non-Gaussian guided-beam shape. We show that the studied splices exhibit transmission losses that are strongly dispersive and non-reciprocal with respect to the light propagation direction. The achieved splicing losses, defined as the larger decrease in transmitted optical power of the two propagation directions, are 2.71 ±0.25 dB and 1.55 ±0.25 dB at 1550 nm for the SC PCF spliced to itself and to the SMF, respectively.
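The loss figures above follow the usual decibel convention, with the reported value taken as the larger of the two direction-dependent losses. A short sketch of that arithmetic, with hypothetical power readings (not measurements from the paper):

```python
from math import log10

def splice_loss_db(p_in_mw, p_out_mw):
    """Insertion loss in dB from optical power before and after the splice."""
    return -10 * log10(p_out_mw / p_in_mw)

def reported_splice_loss(loss_forward_db, loss_backward_db):
    """Following the paper's convention: report the larger of the two
    direction-dependent losses."""
    return max(loss_forward_db, loss_backward_db)

# Hypothetical readings: 1.00 mW launched; 0.536 mW transmitted in the
# lossier direction, 0.62 mW in the other
fwd = splice_loss_db(1.00, 0.536)
bwd = splice_loss_db(1.00, 0.62)
print(f"{reported_splice_loss(fwd, bwd):.2f} dB")  # 2.71 dB for these numbers
```

The difference between the two directional losses is the non-reciprocity the authors describe.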
Lexchin, J; Holbrook, A
1994-07-01
To evaluate the methodologic quality and relevance of references in pharmaceutical advertisements in the Canadian Medical Association Journal (CMAJ). Analytic study. All 114 references cited in the first 22 distinct pharmaceutical advertisements in volume 146 of CMAJ. Mean methodologic quality score (modified from the 6-point scale used to assess articles in the American College of Physicians' Journal Club) and mean relevance score (based on a new 5-point scale) for all references in each advertisement. Twenty of the 22 companies responded, sending 78 (90%) of the 87 references requested. The mean methodologic quality score was 58% (95% confidence limits [CL] 51% and 65%) and the mean relevance score 76% (95% CL 72% and 80%). The two mean scores were statistically lower than the acceptable score of 80% (p < 0.05), and the methodologic quality score was outside the preset clinically significant difference of 15%. The poor rating for methodologic quality was primarily because of the citation of references to low-quality review articles and "other" sources (i.e., other than reports of clinical trials). Half of the advertisements had a methodologic quality score of less than 65%, but only five had a relevance score of less than 65%. Although the relevance of most of the references was within minimal acceptable limits, the methodologic quality was often unacceptable. Because advertisements are an important part of pharmaceutical marketing and education, we suggest that companies develop written standards for their advertisements and monitor their advertisements for adherence to these standards. We also suggest that the Pharmaceutical Advertising Advisory Board develop more stringent guidelines for advertising and that it enforce these guidelines in a consistent, rigorous fashion.
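The comparison of each mean score against the 80% acceptability threshold can be reconstructed from the reported 95% confidence limits, since the limits imply the standard error. A sketch of that arithmetic, assuming a normal approximation (which the abstract does not spell out):

```python
def z_vs_threshold(mean, ci_low, ci_high, threshold, z_crit=1.96):
    """Back out the standard error from a reported 95% CI
    (SE = half-width / 1.96) and test the mean against a fixed threshold."""
    se = (ci_high - ci_low) / (2 * z_crit)
    return (threshold - mean) / se

# Methodologic quality: mean 58%, 95% CL 51% and 65%, acceptable score 80%
z = z_vs_threshold(58, 51, 65, 80)
print(f"z = {z:.2f}")  # far beyond 1.96, consistent with p < 0.05
```

The same calculation for the relevance score (mean 76%, CL 72% and 80%) puts the threshold right at the upper limit, matching the paper's more borderline finding for relevance.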
The inquiry continuum: Science teaching practices and student performance on standardized tests
NASA Astrophysics Data System (ADS)
Jernnigan, Laura Jane
Few research studies have been conducted related to inquiry-based scientific teaching methodologies and NCLB-required state testing. The purpose of this study was to examine the relationship between the strategies used by seventh-grade science teachers in Illinois and student scores on the Illinois Standards Achievement Test (ISAT) to aid in determining best practices/strategies for teaching middle school science. The literature review defines scientific inquiry by placing teaching strategies on a continuum of scientific inquiry methodologies from No Inquiry (Direct Instruction) through Authentic Inquiry. Five major divisions of scientific inquiry: structured inquiry, guided inquiry, learning cycle inquiry, open inquiry, and authentic inquiry, have been identified and described. These five divisions contain eight sub-categories: demonstrations; simple or hands-on activities; discovery learning; variations of learning cycles; problem-based, event-based, and project-based; and student inquiry, science partnerships, and Schwab's enquiry. Quantitative data were collected from pre- and posttests and surveys given to the participants: five seventh grade science teachers in four Academic Excellence Award and Spotlight Award schools and their 531 students. Findings revealed that teachers reported higher inquiry scores for themselves than for their students; the two greatest reported factors limiting teachers' use of inquiry were not enough time and concern about discipline and large class size. Although the correlation between total inquiry and mean difference of pre- and posttest scores was not statistically significant, the survey instrument indicated how often teachers used inquiry in their classes, not the type of inquiry used. 
Implications arising from the findings intensify the methodology debate between direct instruction and inquiry-based teaching strategies; teachers are very knowledgeable about the Illinois state standards, and various inquiry-based methods need to be stressed in undergraduate methods classes. While this study focused on the various types of scientific inquiry by creating a continuum of scientific inquiry methodologies, research using the continuum is needed to determine the various teaching styles of successful teachers.
Cluster Randomised Trials in Cochrane Reviews: Evaluation of Methodological and Reporting Practice.
Richardson, Marty; Garner, Paul; Donegan, Sarah
2016-01-01
Systematic reviews can include cluster-randomised controlled trials (C-RCTs), which require different analysis compared with standard individual-randomised controlled trials. However, it is not known whether review authors follow the methodological and reporting guidance when including these trials. The aim of this study was to assess the methodological and reporting practice of Cochrane reviews that included C-RCTs against criteria developed from existing guidance. Criteria were developed, based on methodological literature and personal experience supervising review production and quality. Criteria were grouped into four themes: identifying, reporting, assessing risk of bias, and analysing C-RCTs. The Cochrane Database of Systematic Reviews was searched (2nd December 2013), and the 50 most recent reviews that included C-RCTs were retrieved. Each review was then assessed using the criteria. The 50 reviews we identified were published by 26 Cochrane Review Groups between June 2013 and November 2013. For identifying C-RCTs, only 56% identified that C-RCTs were eligible for inclusion in the review in the eligibility criteria. For reporting C-RCTs, only eight (24%) of the 33 reviews reported the method of cluster adjustment for their included C-RCTs. For assessing risk of bias, only one review assessed all five C-RCT-specific risk-of-bias criteria. For analysing C-RCTs, of the 27 reviews that presented unadjusted data, only nine (33%) provided a warning that confidence intervals may be artificially narrow. Of the 34 reviews that reported data from unadjusted C-RCTs, only 13 (38%) excluded the unadjusted results from the meta-analyses. The methodological and reporting practices in Cochrane reviews incorporating C-RCTs could be greatly improved, particularly with regard to analyses. Criteria developed as part of the current study could be used by review authors or editors to identify errors and improve the quality of published systematic reviews incorporating C-RCTs.
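The cluster adjustment that most reviews failed to report typically rests on the design effect, DE = 1 + (m - 1) * ICC, which deflates the effective sample size and inflates the standard error of an unadjusted C-RCT result; unadjusted results yield the artificially narrow confidence intervals the authors warn about. A minimal sketch with illustrative values (not drawn from any of the 50 reviews):

```python
from math import sqrt

def design_effect(avg_cluster_size, icc):
    """Variance inflation under cluster randomisation: DE = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def adjust_for_clustering(n, se, avg_cluster_size, icc):
    """Shrink the effective sample size and widen the standard error of an
    unadjusted C-RCT result."""
    de = design_effect(avg_cluster_size, icc)
    return n / de, se * sqrt(de)

# Illustrative trial: 200 participants in clusters of 10, ICC = 0.05
eff_n, adj_se = adjust_for_clustering(n=200, se=0.10,
                                      avg_cluster_size=10, icc=0.05)
print(f"effective n = {eff_n:.1f}, adjusted SE = {adj_se:.3f}")
```

Even a modest ICC of 0.05 with clusters of 10 inflates the variance by 45%, which is why ignoring the adjustment in a meta-analysis gives a C-RCT too much weight.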
Early Market Site Identification Data
Levi Kilcher
2016-04-01
These data were compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts will generate a set of results, based on the Excel data files, some of which were described in the reports. The scripts depend on the 'score_site' package, and the score_site package depends on a number of standard Python libraries (see the score_site install instructions).
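The internals of the 'score_site' package are not described here, but site-ranking methodologies of this kind are commonly a weighted sum of normalized criteria. The sketch below is purely illustrative: the criterion names, weights, and values are hypothetical and not taken from the NREL reports, and the criteria are oriented so that larger values are more favourable.

```python
def score_sites(sites, weights):
    """Rank candidate sites by a weighted sum of min-max normalized criteria.

    `sites` maps a site name to raw criterion values; `weights` maps each
    criterion to its relative importance. All criteria are assumed oriented
    so that larger is better.
    """
    crits = list(weights)
    lo = {c: min(s[c] for s in sites.values()) for c in crits}
    hi = {c: max(s[c] for s in sites.values()) for c in crits}

    def norm(c, v):  # min-max normalization to [0, 1]
        return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

    scores = {name: sum(weights[c] * norm(c, vals[c]) for c in crits)
              for name, vals in sites.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

sites = {  # made-up criterion values, higher = better
    "Site A": {"resource": 30.0, "grid_proximity": 5.0, "market_size": 60.0},
    "Site B": {"resource": 22.0, "grid_proximity": 2.0, "market_size": 40.0},
}
weights = {"resource": 0.5, "grid_proximity": 0.3, "market_size": 0.2}
print(score_sites(sites, weights))
```

Any real application would also need to decide how to orient and weight each criterion, which is exactly the kind of choice the ranking-methodology reports document.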
Development of a methodology for structured reporting of information in echocardiography.
Homorodean, Călin; Olinic, Maria; Olinic, Dan
2012-03-01
In order to conduct research relying on ultrasound images, it is necessary to access a large number of relevant cases represented by images and their interpretation. The DICOM standard defines the structured reporting information object. Templates are tree-like structures that offer structural guidance in report construction. Our aim was to lay the foundations of a structured reporting methodology in echocardiography through the generation of a consistent set of DICOM templates. We developed an information system able to manage echocardiographic images and structured reports. In order to perform a complete description of the cardiac structures, we used 1900 coded concepts organized into 344 contexts by their semantic meaning in a variety of cardiac diseases. We developed 30 templates, with up to 10 nesting levels. The list of templates has a pyramid-like architecture. Two templates are used for reporting every measurement and description: "EchoMeasurement" and "EchoDescription". Intermediate-level templates specify how to report the features of echo-Doppler findings: "Spectral Curve", "Color Jet", "Intracardiac Mass". Templates for every cardiovascular structure include the previous ones. "Echocardiography Procedure Report" includes all other templates. The templates were tested by reporting the echo features of 100 patients through analysis of 500 DICOM images. The benefits of these templates were demonstrated during the testing process through the quality of the echocardiography reports, the ability to link every diagnostic feature to a defining image, and the opportunities opened up for education and research. In the future, our template-based reporting methodology might be extended to other imaging modalities.
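The tree-like containment of templates described above can be sketched as nested nodes. The template names come from the abstract, but the containment shown (including the "Mitral Valve" structure node) is an invented toy hierarchy, not the actual DICOM SR template definitions.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """Illustrative stand-in for a DICOM SR template: a named node that
    may contain other templates (tree-like containment)."""
    name: str
    children: list = field(default_factory=list)

    def depth(self) -> int:
        """Nesting depth of this template's subtree."""
        return 1 + max((c.depth() for c in self.children), default=0)

# Pyramid-like architecture (greatly simplified; structure hypothetical):
measurement = Template("EchoMeasurement")
description = Template("EchoDescription")
spectral = Template("Spectral Curve", [measurement, description])
mitral = Template("Mitral Valve", [spectral])   # hypothetical structure node
report = Template("Echocardiography Procedure Report", [mitral])

print(report.depth())  # 4 nesting levels in this toy tree
```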
Fontela, Patricia Scolari; Pant Pai, Nitika; Schiller, Ian; Dendukuri, Nandini; Ramsay, Andrew; Pai, Madhukar
2009-11-13
Poor methodological quality and reporting are known concerns with diagnostic accuracy studies. In 2003, the QUADAS tool and the STARD standards were published for evaluating the quality and improving the reporting of diagnostic studies, respectively. However, it is unclear whether these tools have been applied to diagnostic studies of infectious diseases. We performed a systematic review on the methodological and reporting quality of diagnostic studies in TB, malaria and HIV. We identified diagnostic accuracy studies of commercial tests for TB, malaria and HIV through a systematic search of the literature using PubMed and EMBASE (2004-2006). Original studies that reported sensitivity and specificity data were included. Two reviewers independently extracted data on study characteristics and diagnostic accuracy, and used QUADAS and STARD to evaluate the quality of methods and reporting, respectively. Ninety (38%) of 238 articles met inclusion criteria. All studies had design deficiencies. Study quality indicators that were met in less than 25% of the studies included adequate description of withdrawals (6%) and reference test execution (10%), absence of index test review bias (19%) and reference test review bias (24%), and report of uninterpretable results (22%). In terms of quality of reporting, 9 STARD indicators were reported in less than 25% of the studies: methods for calculation and estimates of reproducibility (0%), adverse effects of the diagnostic tests (1%), estimates of diagnostic accuracy between subgroups (10%), distribution of severity of disease/other diagnoses (11%), number of eligible patients who did not participate in the study (14%), blinding of the test readers (16%), and description of the team executing the test and management of indeterminate/outlier results (both 17%). The use of STARD was not explicitly mentioned in any study. Only 22% of 46 journals that published the studies included in this review required authors to use STARD. 
Recently published diagnostic accuracy studies on commercial tests for TB, malaria and HIV have moderate to low quality and are poorly reported. The more frequent use of tools such as QUADAS and STARD may be necessary to improve the methodological and reporting quality of future diagnostic accuracy studies in infectious diseases.
PMID:19915664
Weighing in on international growth standards: testing the case in Australian preschool children.
Pattinson, C L; Staton, S L; Smith, S S; Trost, S G; Sawyer, E F; Thorpe, K J
2017-10-01
Overweight and obesity in preschool-aged children are major health concerns. Accurate and reliable estimates of prevalence are necessary to direct public health and clinical interventions. There are currently three international growth standards used to determine the prevalence of overweight and obesity, each using different methodologies: Centers for Disease Control and Prevention (CDC), World Health Organization (WHO) and International Obesity Task Force (IOTF). Adoption and use of each method were examined through a systematic review of Australian population studies (2006-2017). For this period, systematically identified population studies (N = 20) reported prevalence of overweight and obesity ranging between 15 and 38%, with most (n = 16) applying the IOTF standards. To demonstrate the differences in prevalence estimates yielded by the IOTF in comparison to the WHO and CDC standards, the three methods were applied to a sample of N = 1,926 Australian children aged 3-5 years. As expected, the three standards yielded significantly different estimates when applied to this single population: prevalence of overweight/obesity was 9.3% under WHO, 21.7% under IOTF and 33.1% under CDC. Judicious selection of growth standards, taking account of their underpinning methodologies, and provision of access to study data sets to allow prevalence comparisons are recommended. © 2017 World Obesity Federation.
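The core of the discrepancy can be illustrated by classifying the same children against differently placed cut-offs. The sketch below is illustrative only: the z-score distribution is simulated, and the thresholds are stand-ins (roughly the 85th percentile, z ≈ 1.04, as used in CDC-style definitions, versus +2 SD as in WHO definitions for young children), not the exact CDC/WHO/IOTF procedures.

```python
import random

random.seed(1)
# Simulated BMI-for-age z-scores for a preschool sample (illustrative only)
z_scores = [random.gauss(0.3, 1.1) for _ in range(2000)]

def prevalence(zs, cutoff):
    """Share of children at or above an overweight cut-off on the z-score scale."""
    return sum(z >= cutoff for z in zs) / len(zs)

# The same sample, different cut-offs, very different "prevalence":
print(round(prevalence(z_scores, 1.04), 3))  # ~85th-percentile-style cut-off
print(round(prevalence(z_scores, 2.0), 3))   # +2 SD-style cut-off
```

A lower cut-off necessarily yields a higher prevalence in the same population, which is why comparisons across studies require knowing which standard was applied.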
Psychometric evaluation of commonly used game-specific skills tests in rugby: A systematic review
Oorschot, Sander; Chiwaridzo, Matthew; CM Smits-Engelsman, Bouwien
2017-01-01
Objectives To (1) give an overview of commonly used game-specific skills tests in rugby and (2) evaluate the available psychometric information on these tests. Methods The databases PubMed, MEDLINE, CINAHL and Africa-Wide Information were systematically searched for articles published between January 1995 and March 2017. First, commonly used game-specific skills tests were identified. Second, the available psychometrics of these tests were evaluated and the methodological quality of the studies assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Studies included in the first step had to report detailed information on the construct and testing procedure of at least one game-specific skill; studies included in the second step had additionally to report at least one psychometric property evaluating reliability, validity or responsiveness. Results 287 articles were identified in the first step, of which 30 met the inclusion criteria; 64 articles were identified in the second step, of which 10 were included. Reactive agility, tackling and simulated rugby games were the most commonly used tests. All 10 studies reporting psychometrics reported reliability outcomes, revealing mainly strong evidence; however, all scored poor or fair on methodological quality. Four studies reported validity outcomes, indicating mainly moderate evidence, but all had fair methodological quality. Conclusion Game-specific skills tests showed mainly strong reliability and moderate validity evidence, but the underlying studies lacked methodological quality. Reactive agility seems to be a promising domain, but the specific tests need further development. Future studies of high methodological quality are required in order to develop valid and reliable test batteries for rugby talent identification. Trial registration number PROSPERO CRD42015029747. PMID:29259812
A probabilistic assessment of health risks associated with short-term exposure to tropospheric ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitfield, R.G; Biller, W.F.; Jusko, M.J.
1996-06-01
The work described in this report is part of a larger risk assessment sponsored by the U.S. Environmental Protection Agency. Earlier efforts developed exposure-response relationships for acute health effects among populations engaged in heavy exertion. Those efforts also developed a probabilistic national ambient air quality standards exposure model and a general methodology for integrating probabilistic exposure-response relationships and exposure estimates to calculate overall risk results. Recently published data make it possible to model additional health endpoints (for exposure at moderate exertion), including hospital admissions. New air quality and exposure estimates for alternative national ambient air quality standards for ozone are combined with exposure-response models to produce the risk results for hospital admissions and acute health effects. Sample results explain the methodology and introduce risk output formats.
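The integration step the abstract describes, combining a probabilistic exposure distribution with an exposure-response relationship to obtain an overall risk estimate, can be sketched as a sum over exposure bins. All numbers below are invented for illustration; they are not the report's exposure or response estimates.

```python
# Hypothetical sketch of the integration methodology: expected cases =
# sum over exposure bins of (people exposed) x (response probability).
exposure_bins_ppm    = [0.06, 0.08, 0.10, 0.12]       # 1-h ozone levels (assumed)
people_exposed       = [4.0e6, 1.5e6, 4.0e5, 5.0e4]   # persons per bin (assumed)
response_probability = [0.001, 0.004, 0.010, 0.025]   # exposure-response (assumed)

expected_cases = sum(n * p for n, p in zip(people_exposed, response_probability))
print(int(expected_cases))  # expected number of health-effect cases
```

In the probabilistic version, both the exposure counts and the response probabilities carry uncertainty distributions, and the sum is evaluated over samples from those distributions rather than point values.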
NASA Astrophysics Data System (ADS)
Shultheis, C. F.
1985-02-01
This technical report describes an analysis of the performance allocations for a satellite link, focusing specifically on a single-hop 7 to 8 GHz link of the Defense Satellite Communications System (DSCS). The analysis is performed for three primary reasons: (1) to reevaluate link power margin requirements for DSCS links based on digital signalling; (2) to analyze the implications of satellite availability and error rate allocations contained in proposed MIL-STD-188-323, system design and engineering standards for long haul digital transmission system performance; and (3) to standardize a methodology for determination of rain-related propagation constraints. The aforementioned methodology is then used to calculate the link margin requirements of typical DSCS binary/quaternary phase shift keying (BPSK/QPSK) links at 7 to 8 GHz for several different Earth terminal locations.
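The link-margin reevaluation for digital signalling rests on the standard AWGN bit-error relation for BPSK/QPSK, Pb = Q(sqrt(2·Eb/N0)). A minimal sketch follows; the available and required Eb/N0 values are assumed for illustration and are not the DSCS allocations from the report.

```python
import math

def bpsk_ber(ebno_db: float) -> float:
    """BPSK/QPSK bit error rate in AWGN:
    Pb = Q(sqrt(2 * Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

# Link margin = Eb/N0 available on the link minus Eb/N0 required for the
# target BER. Numbers below are assumed, not actual DSCS link budgets.
available_ebno_db = 14.0
required_ebno_db = 9.6   # roughly the BPSK requirement for BER near 1e-5

print(round(available_ebno_db - required_ebno_db, 1))  # margin in dB
print(bpsk_ber(required_ebno_db))                      # on the order of 1e-5
```

Rain attenuation enters such a budget as an additional dB loss subtracted from the available Eb/N0, which is why a standardized rain-propagation methodology changes the computed margin requirement.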
Methodological reporting of randomized clinical trials in respiratory research in 2010.
Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce
2013-09-01
Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias due to a lack of adequately reported randomization, and the reporting should therefore be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research published in high-ranking clinical journals in 2010. We assessed methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top-ranking clinical respiratory journals and 5 top-ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.
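Associations of the kind reported above (for instance, between impact factor ≥ 5 and adequate allocation concealment) are summarised by odds ratios. A sketch of the crude (unadjusted) calculation follows; the 2x2 counts are invented for illustration and are not the paper's data, and the multivariable model in the study would additionally adjust for the other trial features.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Crude odds ratio for a 2x2 table:
                 adequate  inadequate
    IF >= 5         a          b
    IF <  5         c          d
    """
    return (a * d) / (b * c)

# Hypothetical counts for illustration only:
or_if = odds_ratio(40, 20, 26, 90)
print(round(or_if, 2))  # crude OR, ~6.92 with these invented counts

# Half-width of the 95% CI on the log-OR scale (Woolf method):
half_width = 1.96 * math.sqrt(1/40 + 1/20 + 1/26 + 1/90)
print(round(half_width, 2))
```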
76 FR 77999 - Standardizing Program Reporting Requirements for Broadcast Licensees
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
... submit comments, identified by MB Docket No. 11-189, by any of the following methods: Federal... of all relevant programs throughout the year. A constructed or composite week is a sampling method in... methodological validity for academic research and would provide a snapshot of programming for the public. We seek...
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...
This technical report provides a description of the field project design, quality control, the sampling protocols and analysis methodology used, and standard operating procedures for the South Fork Broad River Watershed (SFBR) Total Maximum Daily Load (TMDL) project. This watersh...
Open Education Resources and Higher Education Academic Practice
ERIC Educational Resources Information Center
Bradshaw, Peter; Younie, Sarah; Jones, Sarah
2013-01-01
Purpose: This paper aims to report on an externally-funded project and forms part of its dissemination. Design/methodology/approach: The objectives are achieved through a theoretical framing of the project and an alignment of these with the contexts for the project--namely the Professional Standards Framework of the HEA, its use in postgraduate…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-22
... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, Rm. 1061, Rockville, MD 20852... with elements to assure safe use (ETASU) before the Drug Safety and Risk Management Advisory Committee... incorporate the latest methodologies in the evolving science of risk management. In its February 2013 report...
Hydration mechanisms of two polymorphs of synthetic ye'elimite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuesta, A.; Álvarez-Pinazo, G.; Sanfélix, S.G.
2014-09-15
Ye'elimite is the main phase in calcium sulfoaluminate cements and also a key phase in sulfobelite cements. However, its hydration mechanism is not well understood. Here we report new data on the hydration behavior of ye'elimite using synchrotron and laboratory powder diffraction coupled with the Rietveld methodology. Both internal and external standard methodologies have been used to determine the overall amorphous contents. We have addressed the standard variables: water-to-ye'elimite ratio and additional sulfate sources of different solubilities. Moreover, we report a detailed study of the role of the polymorphism of pure ye'elimites. The hydration behavior of orthorhombic stoichiometric and pseudo-cubic solid-solution ye'elimites is discussed. In the absence of additional sulfate sources, stoichiometric ye'elimite reacts more slowly than solid-solution ye'elimite, and AFm-type phases are the main hydrated crystalline phases, as expected. Moreover, solid-solution ye'elimite produces higher amounts of ettringite than stoichiometric ye'elimite. However, in the presence of additional sulfates, stoichiometric ye'elimite reacts faster than solid-solution ye'elimite.
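The internal standard methodology mentioned above derives the overall amorphous content from the mismatch between the weighed fraction of a crystalline standard and the fraction Rietveld refinement reports for it (Rietveld results are normalised over crystalline phases only). A minimal sketch of that standard relation, with illustrative numbers:

```python
def amorphous_content(ws_pct: float, rs_pct: float) -> float:
    """Overall amorphous content (wt%) by the internal standard method.
    ws_pct: weighed fraction of the added standard in the mixture (wt%).
    rs_pct: standard fraction returned by Rietveld refinement (wt%),
            normalised over crystalline phases only.
    A = (1 - Ws/Rs) * 1e4 / (100 - Ws)
    If the refined fraction exceeds the weighed one, the surplus is
    attributed to amorphous (non-diffracting) material in the sample."""
    return (1.0 - ws_pct / rs_pct) * 1.0e4 / (100.0 - ws_pct)

# Example: 20 wt% standard added; Rietveld reports it as 25 wt%.
print(amorphous_content(20.0, 25.0))  # -> 25.0 wt% amorphous in the sample
```

Sanity check: with 25 wt% amorphous material in the sample, a mixture of 20% standard plus 80% sample contains 60% crystalline sample, so the standard is 20/80 = 25% of the crystalline total, reproducing the refined value.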
ERIC Educational Resources Information Center
Kannan, Priya
2016-01-01
Federal accountability requirements after the No Child Left Behind (NCLB) Act of 2001 and the need to report progress for various disaggregated subgroups of students meant that the methods used to set and articulate performance standards across the grades must be revisited. Several solutions that involve either "a priori" deliberations…
Maher, Dermot; Waswa, Laban; Karabarinde, Alex; Baisley, Kathy
2011-08-17
Although concurrent sexual partnerships may play an important role in HIV transmission in Africa, the lack of an agreed definition of concurrency and of standard methodological approaches has hindered studies. In a long-standing general population cohort in rural Uganda we assessed the prevalence of concurrency and investigated its association with sociodemographic and behavioural factors and with HIV prevalence, using the new recommended standard definition and methodological approaches. As part of the 2010 annual cohort HIV serosurvey among adults, we used a structured questionnaire to collect information on sociodemographic and behavioural factors and to measure standard indicators of concurrency using the recommended method of obtaining sexual-partner histories. We used logistic regression to build a multivariable model of factors independently associated with concurrency. Among those eligible, 3,291 (66%) males and 4,052 (72%) females participated in the survey. Among currently married participants, 11% of men and 25% of women reported being in a polygynous union. Among those with a sexual partner in the past year, the proportion reporting at least one concurrent partnership was 17% in males and 0.5% in females. Polygyny accounted for a third of concurrency in men and was not associated with increased HIV risk. Among men there was no evidence of an association between concurrency and HIV prevalence (but too few women reported concurrency to assess this after adjusting for confounding). Regarding sociodemographic factors associated with concurrency, females were significantly more likely to be younger, unmarried, and of lower socioeconomic status than males. Behavioural factors associated with concurrency were young age at first sex, increasing lifetime partners, and a casual partner in the past year (among men and women) and problem drinking (only men). 
Our findings based on the new standard definition and methodological approaches provide a baseline for measuring changes in concurrency and HIV incidence in future surveys, and a benchmark for other studies. As campaigns are now widely conducted against concurrency, such surveys and studies are important in evaluating their effectiveness in decreasing HIV transmission.
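Under the recommended partnership-history method, concurrency is detected when the reported date ranges of two partnerships overlap. The interval-overlap test at the core of that definition can be sketched as follows; the partnership dates are hypothetical.

```python
from datetime import date

# Hypothetical partnership histories: (first sex, last sex) per partner
partnerships = [
    (date(2009, 1, 1), date(2009, 8, 1)),
    (date(2009, 6, 1), date(2010, 3, 1)),  # overlaps the first -> concurrency
    (date(2010, 5, 1), date(2010, 9, 1)),
]

def any_concurrency(histories) -> bool:
    """True if any two reported partnerships overlap in time."""
    for i, (start1, end1) in enumerate(histories):
        for start2, end2 in histories[i + 1:]:
            if start1 <= end2 and start2 <= end1:  # interval-overlap test
                return True
    return False

print(any_concurrency(partnerships))  # True: partners 1 and 2 overlap
```

Survey-based measures refine this by asking about partnership status at a fixed reference date (e.g. six months before interview), but the underlying comparison is the same overlap test.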
DeLoid, Glen M.; Cohen, Joel M.; Pyrgiotakis, Georgios; Demokritou, Philip
2018-01-01
Evidence continues to grow of the importance of in vitro and in vivo dosimetry in the hazard assessment and ranking of engineered nanomaterials (ENMs). Accurate dose metrics are particularly important for in vitro cellular screening to assess the potential health risks or bioactivity of ENMs. In order to ensure meaningful and reproducible quantification of in vitro dose, with consistent measurement and reporting between laboratories, it is necessary to adopt standardized and integrated methodologies for 1) generation of stable ENM suspensions in cell culture media, 2) colloidal characterization of suspended ENMs, particularly properties that determine particle kinetics in an in vitro system (size distribution and formed agglomerate effective density), and 3) robust numerical fate and transport modeling for accurate determination of ENM dose delivered to cells over the course of the in vitro exposure. Here we present such an integrated comprehensive protocol based on such a methodology for in vitro dosimetry, including detailed standardized procedures for each of these three critical steps. The entire protocol requires approximately 6-12 hours to complete. PMID:28102836
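One ingredient of the fate-and-transport step is gravitational settling of suspended agglomerates, which is where the measured size distribution and effective density enter. The Stokes-law sketch below shows only that single ingredient, not the full published transport model, and the diameter, effective density, and viscosity values are assumed for illustration.

```python
# Simplified sketch of one particokinetics ingredient: Stokes settling
# velocity of an ENM agglomerate in culture medium (NOT the full
# fate-and-transport model the protocol describes).
g = 9.81          # gravitational acceleration, m/s^2
d = 250e-9        # agglomerate hydrodynamic diameter, m (assumed)
rho_eff = 1400.0  # agglomerate effective density, kg/m^3 (assumed)
rho_media = 1000.0  # culture medium density, kg/m^3 (approx.)
eta = 0.00081     # medium dynamic viscosity, Pa*s (approx., 37 C)

# Stokes law: v = g * d^2 * (rho_eff - rho_media) / (18 * eta)
v = g * d**2 * (rho_eff - rho_media) / (18 * eta)
print(v)  # settling velocity in m/s; tiny, so delivered dose builds slowly
```

Because the effective density of an agglomerate is far below the raw material density, ignoring it (as simple administered-dose metrics do) badly overestimates how fast particles reach the cells, which is the protocol's central point.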
Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems
NASA Astrophysics Data System (ADS)
Morgado, Alonso; del Río, Rocío; de la Rosa, José M.
2007-05-01
This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both architecture- and circuit- level in order to adapt its performance to the different standards specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidth, respectively.
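The effective-resolution figures above map to peak SNDR through the standard conversion ENOB = (SNDR - 1.76)/6.02. The sketch below inverts that relation to show the dynamic-range targets the reconfigurable modulator must meet in each standard's bandwidth; the conversion itself is textbook, only its application here is illustrative.

```python
def sndr_for_enob(enob_bits: float) -> float:
    """Peak SNDR (dB) needed for a given effective number of bits:
    SNDR = 6.02 * ENOB + 1.76."""
    return 6.02 * enob_bits + 1.76

# Effective resolutions reported for the three standards (bandwidths differ:
# 200 kHz for GSM, 1 MHz for Bluetooth, 4 MHz for UMTS).
for standard, enob in [("GSM", 13.0), ("Bluetooth", 11.3), ("UMTS", 9.0)]:
    print(standard, round(sndr_for_enob(enob), 1), "dB")
```

The widening bandwidth at lower resolution is characteristic of multi-standard ΣΔ design: the oversampling ratio shrinks as the signal band grows, so the achievable ENOB drops, and architecture/circuit reconfiguration trades the two against power.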
INACSL Standards of Best Practice for Simulation: Past, Present, and Future.
Sittner, Barbara J; Aebersold, Michelle L; Paige, Jane B; Graham, Leslie L M; Schram, Andrea Parsons; Decker, Sharon I; Lioce, Lori
2015-01-01
To describe the historical evolution of the International Nursing Association for Clinical Simulation and Learning's (INACSL) Standards of Best Practice: Simulation. The establishment of simulation standards began as a concerted effort by the INACSL Board of Directors in 2010 to provide best practices to design, conduct, and evaluate simulation activities in order to advance the science of simulation as a teaching methodology. A comprehensive review of the evolution of INACSL Standards of Best Practice: Simulation was conducted using journal publications, the INACSL website, INACSL member survey, and reports from members of the INACSL Standards Committee. The initial seven standards, published in 2011, were reviewed and revised in 2013. Two new standards were published in 2015. The standards will continue to evolve as the science of simulation advances. As the use of simulation-based experiences increases, the INACSL Standards of Best Practice: Simulation are foundational to standardizing language, behaviors, and curricular design for facilitators and learners.
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
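The contrast between a fixed safety factor and a probabilistic design assessment can be sketched with the simplest of the methods listed, Monte Carlo estimation of failure probability. The load and strength distributions below are assumed for illustration only.

```python
import random

random.seed(42)

def monte_carlo_pf(n_samples: int = 100_000) -> float:
    """Estimate the probability of failure P(load > strength) by sampling
    assumed normal load and strength distributions."""
    failures = 0
    for _ in range(n_samples):
        load = random.gauss(100.0, 15.0)      # applied stress (assumed units)
        strength = random.gauss(160.0, 20.0)  # material strength (assumed)
        if load > strength:
            failures += 1
    return failures / n_samples

pf = monte_carlo_pf()
print(pf)  # near 0.008: margin 60 over sigma sqrt(15^2 + 20^2) = 25
```

A deterministic safety factor of 1.6 would declare this design safe outright; the probabilistic view instead quantifies a small but nonzero failure probability, which is the information FPI methods and codes like NESSUS compute far more efficiently than brute-force sampling.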
Standard of reporting animal-based experimental research in Indian Journal of Pharmacology.
Aiman, Umme; Rahman, Syed Ziaur
2015-01-01
The objective of the present study was to survey and determine the reporting standards of animal studies published during the three years from 2012 to 2014 in the Indian Journal of Pharmacology (IJP). All issues of IJP published in 2012, 2013 and 2014 were reviewed to identify animal studies. Each animal study was searched for 15 parameters specifically designed to review standards of animal experimentation and research methodology. All published studies had clearly defined aims and objectives, while a statement on ethical clearance of the study protocol was provided in 97% of papers. Information about animal strain and sex was given in 91.8% and 90% of papers, respectively. The age of experimental animals was mentioned in 44.4% of papers, while the source of animals was given in 50.8%. Randomization was reported by 37.4% of studies, while 9.9% reported blinding. Only 3.5% of studies mentioned any limitations of their work. The present study demonstrates relatively good reporting standards in animal studies published in IJP. The items that need to be improved are randomization, blinding, sample-size calculation, stating the limitations of the study, sources of support and conflicts of interest. The knowledge shared in the present paper could be used for better reporting of animal-based experiments.
Chahla, Jorge; Piuzzi, Nicolas S; Mitchell, Justin J; Dean, Chase S; Pascual-Garrido, Cecilia; LaPrade, Robert F; Muschler, George F
2016-09-21
Intra-articular cellular therapy injections constitute an appealing strategy that may modify the intra-articular milieu or regenerate cartilage in the settings of osteoarthritis and focal cartilage defects. However, little consensus exists regarding the indications for cellular therapies, optimal cell sources, methods of preparation and delivery, or means by which outcomes should be reported. We present a systematic review of the current literature regarding the safety and efficacy of cellular therapy delivered by intra-articular injection in the knee that provided a Level of Evidence of III or higher. A total of 420 papers were screened. Methodological quality was assessed using a modified Coleman methodology score. Only 6 studies (4 Level II and 2 Level III) met the criteria to be included in this review; 3 studies were on treatment of osteoarthritis and 3 were on treatment of focal cartilage defects. These included 4 randomized controlled studies without blinding, 1 prospective cohort study, and 1 retrospective therapeutic case-control study. The studies varied widely with respect to cell sources, cell characterization, adjuvant therapies, and assessment of outcomes. Outcome was reported in a total of 300 knees (124 in the osteoarthritis studies and 176 in the cartilage defect studies). Mean follow-up was 21.0 months (range, 12 to 36 months). All studies reported improved outcomes with intra-articular cellular therapy and no major adverse events. The mean modified Coleman methodology score was 59.1 ± 16 (range, 32 to 82). The studies of intra-articular cellular therapy injections for osteoarthritis and focal cartilage defects in the human knee suggested positive results with respect to clinical improvement and safety. However, the improvement was modest and a placebo effect cannot be disregarded. The overall quality of the literature was poor, and the methodological quality was fair, even among Level-II and III studies. 
Effective clinical assessment and optimization of injection therapies will demand greater attention to study methodology, including blinding; standardized quantitative methods for cell harvesting, processing, characterization, and delivery; and standardized reporting of clinical and structural outcomes. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
Traditional Chinese medicine injection for angina pectoris: an overview of systematic reviews.
Luo, Jing; Shang, Qinghua; Han, Mei; Chen, Keji; Xu, Hao
2014-01-01
Traditional Chinese medicine (TCM) injection is widely used to treat angina pectoris in China. This overview aims to systematically summarize the general characteristics of systematic reviews (SRs) on TCM injection in treating angina, and assess the methodological and reporting quality of these reviews. We searched PubMed, Embase, the Cochrane Library and four Chinese databases from inception until March 2013. Data were extracted according to a preset form. The AMSTAR and PRISMA checklists were used to explore the methodological quality and reporting characteristics of included reviews, respectively. All data analyses were descriptive. 46 SRs involving over 57,463 participants with angina reviewing 23 kinds of TCM injections were included. The main outcomes evaluated in the reviews were symptoms (43/46, 93.5%), surrogate outcomes (42/46, 91.3%) and adverse events (41/46, 87.0%). Few reviews evaluated endpoints (7/46, 15.2%) and quality of life (1/46, 2.2%). One third of the reviews (16/46, 34.8%) drew definitely positive conclusions while the others (30/46, 65.2%) suggested potential benefits mainly in symptoms, electrocardiogram and adverse events. With many serious flaws such as lack of a protocol and inappropriate data synthesis, the overall methodological and reporting quality of the reviews was limited. While many SRs of TCM injection on the treatment of angina suggested potential benefits or definitely positive effects, stakeholders should not accept the findings of these reviews uncritically due to the limited methodological and reporting quality. Future SRs should be appropriately conducted and reported according to international standards such as AMSTAR and PRISMA, rather than published in large numbers.
Statistical reporting inconsistencies in experimental philosophy
Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B.; Sprenger, Jan
2018-01-01
Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science. PMID:29649220
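The consistency check that statcheck automates can be sketched for the simplest case, a reported z statistic (statcheck itself is an R package covering t, F, r, χ² and z results parsed from APA-formatted text). The idea is to recompute the p-value from the reported statistic and flag reports where the two disagree beyond rounding.

```python
import math

def p_from_z(z: float) -> float:
    """Two-sided p-value for a reported z statistic:
    p = erfc(|z| / sqrt(2))."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def is_inconsistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a statcheck-style inconsistency: the recomputed p differs from
    the reported p by more than a rounding tolerance (tol is illustrative)."""
    return abs(p_from_z(z) - reported_p) > tol

print(round(p_from_z(1.96), 3))     # ~0.05
print(is_inconsistent(1.96, 0.05))  # False: consistent report
print(is_inconsistent(1.96, 0.03))  # True: would be flagged
```

The tolerance handling is simplified here; the real tool distinguishes mere inconsistencies from "gross" ones that flip the significance decision at α = 0.05.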
ERIC Educational Resources Information Center
International Atomic Energy Agency, Vienna (Austria).
Radiolabelled pesticides are used: in studies involving improved formulations of pesticides, to assist in developing standard residue analytical methodology, and in obtaining metabolism data to support registration of pesticides. This manual is designed to give the scientist involved in pesticide research the basic terms and principles for…
ERIC Educational Resources Information Center
Fairholm, G.W.; And Others
This study was conducted to develop quantitative and qualitative productivity standards, work measures, and activity reports to facilitate effective budgeting for library staff in the State University of New York (SUNY) library system. The research methodology used by the study team involved a survey of 11 libraries of the 22 institutions in the…
ERIC Educational Resources Information Center
Dunn, Michelle E.; Shelnut, Jill; Ryan, Joseph B.; Katsiyannis, Antonis
2017-01-01
The purpose of this review is to report on the effectiveness of peer-mediated interventions on academic outcomes for students with emotional and behavioral disorders (EBD). CEC standards for evidence-based practices were used for determination of methodologically sound studies. Twenty-four studies involving 288 participants met inclusionary…
ERIC Educational Resources Information Center
Mississippi State Legislature, Jackson. Performance Evaluation and Expenditure Review Committee.
This report to the Mississippi Legislature presents the findings of a review of the cash management policies, procedures, and practices of the State Board of Trustees of Institutions of Higher Learning (IHL). The methodology involved review of: applicable Mississippi statutes; standards promulgated by the National Association of College and…
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
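The clustering allowance the review checked for is, in its simplest form, the familiar design effect from parallel cluster trials. The toy sketch below illustrates that inflation only; proper SW-CRT sample size calculations also need time effects and repeated-measures terms (e.g. the Hussey and Hughes model), which this sketch deliberately omits.

```python
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation factor for randomising clusters of size m with
    intra-cluster correlation coefficient (ICC) rho: DEFF = 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate an individually randomised sample size for clustering.
    Rounding before ceil guards against floating-point noise."""
    return math.ceil(round(n_individual * design_effect(cluster_size, icc), 6))

# 400 participants under individual randomisation, clusters of 20,
# ICC = 0.05: design effect 1.95, so 780 participants are needed.
print(clustered_sample_size(400, 20, 0.05))  # 780
```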
Methodology of Clinical Trials Aimed at Assessing Interventions for Cutaneous Leishmaniasis
Olliaro, Piero; Vaillant, Michel; Arana, Byron; Grogl, Max; Modabber, Farrokh; Magill, Alan; Lapujade, Olivier; Buffet, Pierre; Alvar, Jorge
2013-01-01
The current evidence-base for recommendations on the treatment of cutaneous leishmaniasis (CL) is generally weak. Systematic reviews have pointed to a general lack of standardization of methods for the conduct and analysis of clinical trials of CL, compounded with poor overall quality of several trials. For CL, there is a specific need for methodologies which can be applied generally, while allowing the flexibility needed to cover the diverse forms of the disease. This paper intends to provide clinical investigators with guidance for the design, conduct, analysis and report of clinical trials of treatments for CL, including the definition of measurable, reproducible and clinically-meaningful outcomes. Having unified criteria will help strengthen evidence, optimize investments, and enhance the capacity for high-quality trials. The limited resources available for CL have to be concentrated in clinical studies of excellence that meet international quality standards. PMID:23556016
Methodological aspects of multicenter studies with quantitative PET.
Boellaard, Ronald
2011-01-01
Quantification of whole-body FDG PET studies is affected by many physiological and physical factors. Much of the variability in reported standardized uptake value (SUV) data seen in the literature results from the variability in methodology applied among these studies, i.e., due to the use of different scanners, acquisition and reconstruction settings, region of interest strategies, SUV normalization, and/or corrections methods. To date, the variability in applied methodology prohibits a proper comparison and exchange of quantitative FDG PET data. Consequently, the promising role of quantitative PET has been demonstrated in several monocentric studies, but these published results cannot be used directly as a guideline for clinical (multicenter) trials performed elsewhere. In this chapter, the main causes affecting whole-body FDG PET quantification and strategies to minimize its inter-institute variability are addressed.
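As a concrete illustration of one normalization choice discussed here, the body-weight-normalised SUV can be sketched as follows. This is a minimal sketch: it assumes the tissue concentration is already decay-corrected to injection time and ignores the lean-body-mass and body-surface-area variants.

```python
def suv_bw(tissue_kbq_per_ml: float, injected_dose_mbq: float, weight_kg: float) -> float:
    """Body-weight-normalised SUV.

    With an assumed tissue density of 1 g/mL, SUV is unitless:
        SUV = C [kBq/mL] / (dose [kBq] / weight [g])
    """
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# 5 kBq/mL in tissue, 370 MBq injected, 74 kg patient -> SUV = 1.0
print(suv_bw(5.0, 370.0, 74.0))  # 1.0
```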
Standardisation of costs: the Dutch Manual for Costing in economic evaluations.
Oostenbrink, Jan B; Koopmanschap, Marc A; Rutten, Frans F H
2002-01-01
The lack of a uniform costing methodology is often considered a weakness of economic evaluations that hinders the interpretation and comparison of studies. Standardisation is therefore an important topic within the methodology of economic evaluations and in national guidelines that formulate the formal requirements for studies to be considered when deciding on the reimbursement of new medical therapies. Recently, the Dutch Manual for Costing: Methods and Standard Costs for Economic Evaluations in Health Care (further referred to as "the manual") has been published, in addition to the Dutch guidelines for pharmacoeconomic research. The objectives of this article are to describe the main content of the manual and to discuss some key issues of the manual in relation to the standardisation of costs. The manual introduces a six-step procedure for costing. These steps concern: the scope of the study; the choice of cost categories; the identification of units; the measurement of resource use; the monetary valuation of units; and the calculation of unit costs. Each step consists of a number of choices, and these together define the approach taken. In addition to a description of the costing process, five key issues regarding the standardisation of costs are distinguished. These are the use of basic principles, methods for measurement and valuation, standard costs (average prices of healthcare services), standard values (values that can be used within unit cost calculations), and the reporting of outcomes. The use of the basic principles, standard values and minimal requirements for reporting outcomes, as defined in the manual, is obligatory in studies that support submissions to acquire reimbursement for new pharmaceuticals. Whether to use standard costs, and the choice of a particular method to measure or value costs, is left mainly to the investigator, depending on the specific study setting.
In conclusion, several instruments are available to increase standardisation in costing methodology among studies. These instruments have to be used in such a way that a balance is found between standardisation and the specific setting in which a study is performed. The way in which the Dutch manual tries to reach this balance can serve as an illustration for other countries.
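The final steps of the six-step procedure (measuring resource use, valuing units, calculating costs) amount to multiplying measured volumes by standard unit costs. A minimal sketch, with entirely hypothetical volumes and prices rather than the manual's actual standard costs:

```python
# Hypothetical per-patient resource-use volumes (step 4: measurement)
resource_use = {"GP visit": 3, "hospital day": 2, "physiotherapy session": 5}

# Hypothetical standard unit costs in euros (steps 5-6: valuation)
standard_costs = {"GP visit": 28.0, "hospital day": 350.0, "physiotherapy session": 35.0}

# Total cost per patient = sum over items of (volume x unit cost)
cost_per_patient = sum(
    volume * standard_costs[item] for item, volume in resource_use.items()
)
print(f"Total cost per patient: EUR {cost_per_patient:.2f}")  # EUR 959.00
```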
Pan, Xin; Lopez-Olivo, Maria A; Song, Juhee; Pratt, Gregory; Suarez-Almazor, Maria E
2017-03-01
We appraised the methodological and reporting quality of randomised controlled clinical trials (RCTs) evaluating the efficacy and safety of Chinese herbal medicine (CHM) in patients with rheumatoid arthritis (RA). For this systematic review, electronic databases were searched from inception until June 2015. The search was limited to humans and non-case report studies, but was not limited by language, year of publication or type of publication. Two independent reviewers selected RCTs, evaluating CHM in RA (herbals and decoctions). Descriptive statistics were used to report on risk of bias and their adherence to reporting standards. Multivariable logistic regression analysis was performed to determine study characteristics associated with high or unclear risk of bias. Out of 2342 unique citations, we selected 119 RCTs including 18 919 patients: 10 108 patients received CHM alone and 6550 received one of 11 treatment combinations. A high risk of bias was observed across all domains: 21% had a high risk for selection bias (11% from sequence generation and 30% from allocation concealment), 85% for performance bias, 89% for detection bias, 4% for attrition bias and 40% for reporting bias. In multivariable analysis, fewer authors were associated with selection bias (allocation concealment), performance bias and attrition bias, and earlier year of publication and funding source not reported or disclosed were associated with selection bias (sequence generation). Studies published in non-English language were associated with reporting bias. Poor adherence to recommended reporting standards (<60% of the studies not providing sufficient information) was observed in 11 of the 23 sections evaluated. Study quality and data extraction were performed by one reviewer and cross-checked by a second reviewer. Translation to English was performed by one reviewer in 85% of the included studies. 
Studies evaluating CHM often fail to meet expected methodological criteria, and high-quality evidence is lacking.
Zhao, Jun; Liao, Xing; Zhao, Hui; Li, Zhi-Geng; Wang, Nan-Yue; Wang, Li-Min
2016-11-01
To evaluate the methodological quality of randomized controlled trials (RCTs) of traditional Chinese medicines for the treatment of sub-health, in order to provide a scientific basis for the improvement of clinical trials and systematic reviews. Databases including CNKI, CBM, VIP, Wanfang, EMbase, Medline, Clinical Trials, Web of Science and the Cochrane Library were searched for RCTs of traditional Chinese medicines for the treatment of sub-health from inception to February 29, 2016. The Cochrane Handbook 5.1 was used to screen literatures and extract data, and the CONSORT statement and the CONSORT for traditional Chinese medicine statement were adopted as the basis for quality evaluation. Among the 72 RCTs included in this study, 67 (93.05%) described the comparability of inter-group baseline data, 39 (54.17%) described unified diagnostic criteria, 28 (38.89%) described unified standards of efficacy, 4 (5.55%) mentioned a multi-center study, 19 (26.38%) disclosed the random allocation method, 6 (8.33%) used random allocation concealment, 15 (20.83%) adopted blinding, 3 (4.17%) reported the sample size estimation in detail, 5 (6.94%) had a sample size of more than two hundred, 19 (26.38%) reported the number of withdrawals, drop-outs and participants lost to follow-up, but only 2 adopted ITT analysis, 10 (13.89%) reported follow-up results, none reported trial registration or a trial protocol, 48 (66.7%) reported all expected outcome indicators, 26 (36.11%) reported adverse reactions and adverse events, and 4 (5.56%) reported patient compliance. The overall quality of these randomized controlled trials of traditional Chinese medicines for the treatment of sub-health is low, with methodological defects of varying degrees.
Therefore, it is still necessary to emphasize the correct application of principles such as blinding, randomization and control in RCTs, while requiring reporting in accordance with international standards.
Architectural Methodology Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kinds of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special-purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.
Jandoc, Racquel; Burden, Andrea M; Mamdani, Muhammad; Lévesque, Linda E; Cadarette, Suzanne M
2015-08-01
To describe the use and reporting of interrupted time series methods in drug utilization research. We completed a systematic search of MEDLINE, Web of Science, and reference lists to identify English language articles through to December 2013 that used interrupted time series methods in drug utilization research. We tabulated the number of studies by publication year and summarized methodological detail. We identified 220 eligible empirical applications since 1984. Only 17 (8%) were published before 2000, and 90 (41%) were published since 2010. Segmented regression was the most commonly applied interrupted time series method (67%). Most studies assessed drug policy changes (51%, n = 112); 22% (n = 48) examined the impact of new evidence, 18% (n = 39) examined safety advisories, and 16% (n = 35) examined quality improvement interventions. Autocorrelation was considered in 66% of studies, 31% reported adjusting for seasonality, and 15% accounted for nonstationarity. Use of interrupted time series methods in drug utilization research has increased, particularly in recent years. Despite methodological recommendations, there is large variation in reporting of analytic methods. Developing methodological and reporting standards for interrupted time series analysis is important to improve its application in drug utilization research, and we provide recommendations for consideration.
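Segmented regression, the most common method in the review, estimates changes in level and trend at the intervention point. The toy sketch below fits separate ordinary-least-squares lines before and after a hypothetical policy change and takes the level change at the change point; a real analysis would fit a single segmented model and, as the review notes, account for autocorrelation, seasonality and nonstationarity.

```python
def ols_line(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical monthly prescribing rate; a policy change takes effect at t = 6
t = list(range(12))
y = [10, 11, 12, 13, 14, 15, 22, 23, 24, 25, 26, 27]
pre_t, pre_y = t[:6], y[:6]
post_t, post_y = t[6:], y[6:]

a_pre, b_pre = ols_line(pre_t, pre_y)
a_post, b_post = ols_line(post_t, post_y)

# Level change: fitted post level at t = 6 minus the pre-trend counterfactual
level_change = (a_post + b_post * 6) - (a_pre + b_pre * 6)
print(round(level_change, 2))  # 6.0
```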
Asthma Outcomes: Healthcare Utilization and Costs
Akinbami, Lara J.; Sullivan, Sean D.; Campbell, Jonathan D.; Grundmeier, Robert W.; Hartert, Tina V.; Lee, Todd A.; Smith, Robert A.
2014-01-01
Background Measures of healthcare utilization and indirect impact of asthma morbidity are used to assess clinical interventions and estimate cost. Objective National Institutes of Health (NIH) institutes and other federal agencies convened an expert group to propose standardized measurement, collection, analysis, and reporting of healthcare utilization and cost outcomes in future asthma studies. Methods We used comprehensive literature reviews and expert opinion to compile a list of asthma healthcare utilization outcomes that we classified as core (required in future studies), supplemental (used according to study aims and standardized) and emerging (requiring validation and standardization). We also have identified methodology to assign cost to these outcomes. This work was discussed at an NIH-organized workshop in March 2010 and finalized in September 2011. Results We identified 3 ways to promote comparability across clinical trials for measures of healthcare utilization, resource use, and cost: (1) specify the study perspective (patient, clinician, payer, society), (2) standardize the measurement period (ideally, 12 months), and (3) use standard units to measure healthcare utilization and other asthma-related events. Conclusions Large clinical trials and observational studies should collect and report detailed information on healthcare utilization, intervention resources, and indirect impact of asthma, so that costs can be calculated and cost-effectiveness analyses can be conducted across several studies. Additional research is needed to develop standard, validated survey instruments for collection of provider-reported and participant-reported data regarding asthma-related health care. PMID:22386509
Query Health: standards-based, cross-platform population health surveillance
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussion This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371
Nikendei, C; Ganschow, P; Groener, J B; Huwendiek, S; Köchel, A; Köhl-Hackert, N; Pjontek, R; Rodrian, J; Scheibe, F; Stadler, A-K; Steiner, T; Stiepak, J; Tabatabai, J; Utz, A; Kadmon, M
2016-01-01
The competent physical examination of patients and the safe and professional implementation of clinical procedures constitute essential components of medical practice in nearly all areas of medicine. The central objective of the projects "Heidelberg standard examination" and "Heidelberg standard procedures", which were initiated by students, was to establish uniform interdisciplinary standards for physical examination and clinical procedures, and to distribute them in coordination with all clinical disciplines at the Heidelberg University Hospital. The presented project report illuminates the background of the initiative and its methodological implementation. Moreover, it describes the multimedia documentation in the form of pocketbooks and a multimedia internet-based platform, as well as the integration into the curriculum. The project presentation aims to provide orientation and action guidelines to facilitate similar processes in other faculties.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best-model criteria, as they all affect the accuracy and efficiency of the produced predictive models, thereby raising model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option of integration into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, so that it can be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields.
Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
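RRegrs itself is an R package, but the repeated k-fold cross-validation scheme at its core is easy to sketch. The Python toy below (not part of RRegrs) deals shuffled indices into folds and scores a deliberately simple baseline model, the training-set mean, by RMSE; RRegrs performs the same bookkeeping around its ten regression methods.

```python
import random

def kfold_indices(n, k, seed):
    """Shuffle indices 0..n-1 and deal them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_rmse(ys, k=10, repeats=3):
    """Repeated k-fold cross-validation of a baseline mean predictor,
    scored by RMSE; each repeat reshuffles the fold assignment."""
    scores = []
    for rep in range(repeats):
        for fold in kfold_indices(len(ys), k, seed=rep):
            train = [ys[i] for i in range(len(ys)) if i not in fold]
            held_out = [ys[i] for i in fold]
            pred = sum(train) / len(train)  # the "model": predict the mean
            mse = sum((y - pred) ** 2 for y in held_out) / len(held_out)
            scores.append(mse ** 0.5)
    return sum(scores) / len(scores)

ys = [float(v % 7) for v in range(50)]  # toy response values
print(round(repeated_cv_rmse(ys, k=5, repeats=2), 3))
```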
PCB congener analysis with Hall electrolytic conductivity detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edstrom, R.D.
1989-01-01
This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples during the fall of 1985 collected from the lower James River and lower Chesapeake Bay.
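The multiple retention marker system described here boils down to interpolating an observed retention time between the two bracketing marker compounds to obtain a relative retention index. A minimal Kovats-style sketch; the marker times and index values below are hypothetical, not those used in the study:

```python
import bisect

def retention_index(rt, marker_rts, marker_indices):
    """Linearly interpolate a relative retention index between the two
    marker compounds that bracket the observed retention time rt.
    marker_rts must be sorted ascending."""
    j = bisect.bisect_right(marker_rts, rt)
    if j == 0 or j == len(marker_rts):
        raise ValueError("retention time outside marker range")
    t0, t1 = marker_rts[j - 1], marker_rts[j]
    i0, i1 = marker_indices[j - 1], marker_indices[j]
    return i0 + (i1 - i0) * (rt - t0) / (t1 - t0)

# Hypothetical markers eluting at 10, 20 and 30 min, assigned indices
# 100, 200 and 300; a peak at 15 min gets index 150.
print(retention_index(15.0, [10.0, 20.0, 30.0], [100, 200, 300]))  # 150.0
```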
Alayli-Goebbels, Adrienne F G; Evers, Silvia M A A; Alexeeva, Daria; Ament, André J H A; de Vries, Nanne K; Tilly, Jan C; Severens, Johan L
2014-06-01
The objective of this study was to review methodological quality of economic evaluations of lifestyle behavior change interventions (LBCIs) and to examine how they address methodological challenges for public health economic evaluation identified in the literature. Pubmed and the NHS economic evaluation database were searched for published studies in six key areas for behavior change: smoking, physical activity, dietary behavior, (illegal) drug use, alcohol use and sexual behavior. From included studies (n = 142), we extracted data on general study characteristics, characteristics of the LBCIs, methodological quality and handling of methodological challenges. Economic evaluation evidence for LBCIs showed a number of weaknesses: methods, study design and characteristics of evaluated interventions were not well reported; methodological quality showed several shortcomings and progress with addressing methodological challenges remained limited. Based on the findings of this review we propose an agenda for improving future evidence to support decision-making. Recommendations for practice include improving reporting of essential study details and increasing adherence with good practice standards. Recommendations for research methods focus on mapping out complex causal pathways for modeling, developing measures to capture broader domains of wellbeing and community outcomes, testing methods for considering equity, identifying relevant non-health sector costs and advancing methods for evidence synthesis.
Holtfreter, Birte; Albandar, Jasim M; Dietrich, Thomas; Dye, Bruce A; Eaton, Kenneth A; Eke, Paul I; Papapanou, Panos N; Kocher, Thomas
2015-05-01
Periodontal diseases are common and their prevalence varies in different populations. However, prevalence estimates are influenced by the methodology used, including measurement techniques, case definitions, and periodontal examination protocols, as well as differences in oral health status. As a consequence, comparisons between populations are severely hampered and inferences regarding the global variation in prevalence can hardly be drawn. To overcome these limitations, the authors suggest standardized principles for the reporting of the prevalence and severity of periodontal diseases in future epidemiological studies. These principles include the comprehensive reporting of the study design, the recording protocol, and specific subject-related and oral data. Further, a range of periodontal data should be reported in the total population and within specific age groups. Periodontal data include the prevalence and extent of clinical attachment loss (CAL) and probing depth (PD) on site and tooth level according to specific thresholds, mean CAL/PD, the CDC/AAP case definition, and bleeding on probing. Consistent implementation of these standards in future studies will ensure improved reporting quality, permit meaningful comparisons of the prevalence of periodontal diseases across populations, and provide better insights into the determinants of such variation.
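The suggested reporting of prevalence and extent against CAL/PD thresholds can be sketched directly. In the toy example below (hypothetical site-level CAL data and a 4 mm threshold chosen for illustration), prevalence is the share of subjects with at least one affected site, and extent is the mean percentage of affected sites per subject:

```python
def prevalence_and_extent(subjects_cal, threshold_mm):
    """subjects_cal: list of per-subject lists of site-level CAL values (mm).
    Returns (prevalence, mean extent): prevalence is the share of subjects
    with >= 1 site at or above the threshold; extent is the mean percentage
    of affected sites per subject."""
    affected_subjects = 0
    extents = []
    for sites in subjects_cal:
        hits = sum(1 for cal in sites if cal >= threshold_mm)
        if hits:
            affected_subjects += 1
        extents.append(100.0 * hits / len(sites))
    prevalence = affected_subjects / len(subjects_cal)
    return prevalence, sum(extents) / len(extents)

# Three hypothetical subjects, four sites each, CAL in mm:
cohort = [[1, 2, 5, 3], [0, 1, 1, 2], [4, 4, 6, 5]]
prev, extent = prevalence_and_extent(cohort, threshold_mm=4)
print(prev, extent)
```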
Wong, Carlos K H; Guo, Vivian Y W; Chen, Jing; Lam, Cindy L K
2016-11-01
Health-related quality of life is an important outcome measure in patients with colorectal cancer. Comparison with normative data has been increasingly undertaken to assess the additional impact of colorectal cancer on health-related quality of life. This review aimed to critically appraise the methodological details and reporting characteristics of comparative studies evaluating differences in health-related quality of life between patients and controls. A systematic search of English-language literature published between January 1985 and May 2014 was conducted through a database search of PubMed, Web of Science, Embase, and Medline. Comparative studies reporting health-related quality-of-life outcomes among patients who have colorectal cancer and controls were selected. Methodological and reporting quality per comparison study was evaluated based on an 11-item methodological checklist proposed by Efficace in 2003 and a set of criteria predetermined by reviewers. Thirty-one comparative studies involving >10,000 patients and >10,000 controls were included. Twenty-three studies (74.2%) originated from European countries, with the largest number from the Netherlands (n = 6). Twenty-eight studies (90.3%) compared the health-related quality of life of patients with normative data published elsewhere, whereas the remaining studies recruited a group of patients who had colorectal cancer and a group of control patients within the same studies. The European Organisation for Research and Treatment of Cancer Quality-of-Life Questionnaire Core 30 was the most extensively used instrument (n = 16; 51.6%). Eight studies (25.8%) were classified as "probably robust" for clinical decision making according to the Efficace standard methodological checklist.
Our further quality assessment revealed failures to report score differences (61.3%), to use contemporary comparisons (36.7%), to test statistical significance (38.7%), and to match the control group (58.1%), possibly leading to inappropriate control groups for fair comparisons. Meta-analysis of differences between the 2 groups was not available. In general, one-fourth of comparative studies that evaluated the health-related quality of life of patients who had colorectal cancer achieved high quality in reporting characteristics and methodological details. Future studies measuring health-related quality of life in comparison with controls are encouraged to adhere to a methodological checklist.
A Field-Based Aquatic Life Benchmark for Conductivity in ...
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
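The field-based derivation sketched above can be illustrated in a few lines: for each genus, an extirpation concentration (XC95) is taken as the conductivity below which 95% of its observations fall, and the benchmark (HC05) is the 5th percentile of those XC95 values across genera, i.e. the conductivity expected to extirpate 5% of genera. The survey data, genus names, and the unweighted-percentile shortcut below are hypothetical simplifications of the EPA field method, not its actual implementation.

```python
import numpy as np

def xc95(conductivity, presence, q=0.95):
    """Extirpation concentration: the conductivity (uS/cm) below which
    95% of the sites where a genus occurs fall (simplified, unweighted)."""
    obs = np.sort(conductivity[presence])
    return np.quantile(obs, q)

def hc05(xc95_values):
    """Benchmark: the conductivity expected to extirpate 5% of genera."""
    return np.quantile(np.sort(xc95_values), 0.05)

# Hypothetical survey: conductivity at sampled stream sites and
# presence/absence of three (illustrative) genera at each site.
rng = np.random.default_rng(0)
cond = rng.uniform(50, 2000, size=500)
genera = {name: cond < thresh for name, thresh in
          [("Ephemerella", 400), ("Drunella", 700), ("Heptagenia", 1200)]}

xc95s = [xc95(cond, pres) for pres in genera.values()]
benchmark = hc05(np.array(xc95s))
```

With real data the method additionally weights observations and restricts the region of applicability, so this sketch shows only the shape of the calculation.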
The EuroPrevall outpatient clinic study on food allergy: background and methodology.
Fernández-Rivas, M; Barreales, L; Mackie, A R; Fritsche, P; Vázquez-Cortés, S; Jedrzejczak-Czechowicz, M; Kowalski, M L; Clausen, M; Gislason, D; Sinaniotis, A; Kompoti, E; Le, T-M; Knulst, A C; Purohit, A; de Blay, F; Kralimarkova, T; Popov, T; Asero, R; Belohlavkova, S; Seneviratne, S L; Dubakiene, R; Lidholm, J; Hoffmann-Sommergruber, K; Burney, P; Crevel, R; Brill, M; Fernández-Pérez, C; Vieths, S; Clare Mills, E N; van Ree, R; Ballmer-Weber, B K
2015-05-01
The EuroPrevall project aimed to develop effective management strategies in food allergy through a suite of interconnected studies and a multidisciplinary integrated approach. To address some of the gaps in food allergy diagnosis, allergen risk management and socio-economic impact and to complement the EuroPrevall population-based surveys, a cross-sectional study in 12 outpatient clinics across Europe was conducted. We describe the study protocol. Patients referred for immediate food adverse reactions underwent a consistent and standardized allergy work-up that comprised collection of medical history; assessment of sensitization to 24 foods, 14 inhalant allergens and 55 allergenic molecules; and confirmation of clinical reactivity and food thresholds by standardized double-blind placebo-controlled food challenges (DBPCFCs) to milk, egg, fish, shrimp, peanut, hazelnut, celeriac, apple and peach. A standardized methodology for a comprehensive evaluation of food allergy was developed and implemented in 12 outpatient clinics across Europe. A total of 2121 patients (22.6% <14 years) reporting 8257 reactions to foods were studied, and 516 DBPCFCs were performed. This is the largest multicentre European case series in food allergy, in which subjects underwent a comprehensive, uniform and standardized evaluation including DBPCFC, by a methodology which is made available for further studies in food allergy. The analysis of this population will provide information on the different phenotypes of food allergy across Europe and will allow the validation of novel in vitro diagnostic tests, the establishment of threshold values for major allergenic foods and the analysis of the socio-economic impact of food allergy. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Credit risk migration rates modeling as open systems: A micro-simulation approach
NASA Astrophysics Data System (ADS)
Landini, S.; Uberti, M.; Casellina, S.
2018-05-01
The last financial crisis of 2008 stimulated the development of new Regulatory Criteria (commonly known as Basel III) that pushed banking activity to become more prudential in both the short and the long run. As is well known, in 2014 the International Accounting Standards Board (IASB) promulgated the new International Financial Reporting Standard 9 (IFRS 9) for financial instruments, which will become effective in January 2018. Since the delayed recognition of credit losses on loans was identified as a weakness in existing accounting standards, the IASB has introduced an Expected Loss model that requires more timely recognition of credit losses. Specifically, the new standards require entities to account for expected losses both from when impairments are first recognized and over the full loan lifetime; moreover, a clear preference for forward-looking models is expressed. In this new framework, a re-thinking of the widespread standard theoretical approach on which the well-known prudential model is founded is necessary. The aim of this paper is then to define an original methodological approach to migration rates modeling for credit risk that is innovative with respect to the standard method, from the point of view of a bank as well as in a regulatory perspective. Accordingly, the proposed non-standard approach considers a portfolio as an open sample, allowing for entries and migrations of stayers as well as exits. While consistent with empirical observations, this open-sample approach contrasts with the standard closed-sample method. In particular, this paper offers a methodology to integrate the outcomes of the standard closed-sample method within the open-sample perspective while removing some of the assumptions of the standard method.
Three main conclusions can be drawn in terms of economic capital provision: (a) based on the Markovian hypothesis with an a-priori absorbing state at default, the standard closed-sample method should be abandoned so as not to predict lenders' bankruptcy by construction; (b) to obtain more reliable estimates in line with the new regulatory standards, the sample used to estimate migration rate matrices for credit risk should include both entries and exits; (c) the static eigen-decomposition standard procedure to forecast migration rates should be replaced with a stochastic process dynamics methodology, conditioning forecasts on macroeconomic scenarios.
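Conclusion (a) above, that a closed-sample Markov model with an absorbing default state drives the whole portfolio to default by construction, can be seen directly by iterating an estimated migration matrix. The rating classes and counts below are invented for illustration; the paper's stochastic-dynamics alternative of conclusion (c) is not implemented here.

```python
import numpy as np

# Hypothetical one-year migration counts among ratings A, B and Default.
counts = np.array([[900,  90, 10],
                   [ 60, 880, 60],
                   [  0,   0,  1]])   # default is absorbing by construction

# Row-stochastic migration matrix estimated from the counts.
P = counts / counts.sum(axis=1, keepdims=True)

# Closed-sample, time-homogeneous forecast: iterate the matrix.
# With an absorbing default state, the default column tends to 1 as the
# horizon grows, i.e. the whole portfolio eventually defaults.
P50 = np.linalg.matrix_power(P, 50)
```

An open-sample model would instead update rating-class stocks with entry and exit flows in addition to migrations of stayers, so the surviving mass need not vanish.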
Raimondo, Joseph V; Heinemann, Uwe; de Curtis, Marco; Goodkin, Howard P; Dulla, Chris G; Janigro, Damir; Ikeda, Akio; Lin, Chou-Ching K; Jiruska, Premysl; Galanopoulou, Aristea S; Bernard, Christophe
2017-11-01
In vitro preparations are a powerful tool to explore the mechanisms and processes underlying epileptogenesis and ictogenesis. In this review, we critically review the numerous in vitro methodologies utilized in epilepsy research. We provide support for the inclusion of detailed descriptions of techniques, including often ignored parameters with unpredictable yet significant effects on study reproducibility and outcomes. In addition, we explore how recent developments in brain slice preparation relate to their use as models of epileptic activity. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Quantifying Ballistic Armor Performance: A Minimally Invasive Approach
NASA Astrophysics Data System (ADS)
Holmes, Gale; Kim, Jaehyun; Blair, William; McDonough, Walter; Snyder, Chad
2006-03-01
Theoretical and non-dimensional analyses suggest a critical link between the performance of ballistic resistant armor and the fundamental mechanical properties of the polymeric materials that comprise them. Therefore, a test methodology that quantifies these properties without compromising an armored vest that is exposed to the industry standard V-50 ballistic performance test is needed. Currently, there is considerable speculation about the impact that competing degradation mechanisms (e.g., mechanical, humidity, ultraviolet) may have on ballistic resistant armor. We report on the use of a new test methodology that quantifies the mechanical properties of ballistic fibers and how each proposed degradation mechanism may impact a vest's ballistic performance.
Allocation of nursing care hours in a combined ophthalmic nursing unit.
Navarro, V B; Stout, W A; Tolley, F M
1995-04-01
Traditional service configuration with separate nursing units for outpatient and inpatient care is becoming ineffective for new patient care delivery models. With the new configuration of a combined nursing unit, it was necessary to rethink traditional reporting methodologies and calculation of hours of care. This project management plan is an initial attempt to develop a standard costing/productivity model for a combined unit. The methodology developed from this plan measures nursing care hours for each patient population to determine the number of full time equivalents (FTEs) for a combined unit and allocates FTEs based on inpatient (IP), outpatient (OP), and emergency room (ER) volumes.
76 FR 70680 - Small Business Size Standards: Real Estate and Rental and Leasing
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-15
... industries and one sub-industry in North American Industry Classification System (NAICS) Sector 53, Real... industries grouped by NAICS Sector. SBA issued a White Paper entitled ``Size Standards Methodology'' and published it in the October 21, 2009 issue of the Federal Register. That ``Size Standards Methodology'' is...
Workshop on LCA: Methodology, Current Development, and Application in Standards - LCA Methodology
As ASTM standards are being developed including Life Cycle Assessment within the Standards it is imperative that practitioners in the field learn more about what LCA is, and how to conduct it. This presentation will include an overview of the LCA process and will concentrate on ...
1987-11-16
technological engineering, and design in the subordinate units and takes steps to provide them with necessary technical-material resources; b) It... methodologies and the uniform standards and norms for the branch, subbranches, and other activities, and oversees their manner of application; it...dual subordination, in the field of preparing and fulfilling the annual plans for research, design, and microproduction; f) It participates in the
ERIC Educational Resources Information Center
Vitale, Michael R.; Kaniuka, Theodore S.
2012-01-01
Present national methodological standards for evaluating the credibility of the design of individual research studies have resulted in findings on the pre-post effectiveness of Direct Instruction programs being eliminated from consideration by educational leaders involved in making curricular decisions intended to advance local school…
ERIC Educational Resources Information Center
Monahan, Patrick O.; McHorney, Colleen A.; Stump, Timothy E.; Perkins, Anthony J.
2007-01-01
Previous methodological and applied studies that used binary logistic regression (LR) for detection of differential item functioning (DIF) in dichotomously scored items either did not report an effect size or did not employ several useful measures of DIF magnitude derived from the LR model. Equations are provided for these effect size indices.…
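The LR DIF procedure this abstract discusses can be sketched as a pair of nested logistic models: the dichotomous item response regressed on an ability proxy (e.g. total score), then additionally on group membership, with the likelihood-ratio statistic testing uniform DIF and the change in Nagelkerke pseudo-R² serving as one possible effect-size measure. The simulated data and the small Newton-Raphson fitter below are illustrative assumptions, not the authors' equations or code.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])             # observed information
        b += np.linalg.solve(H, X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ b))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return b, ll

# Hypothetical item responses: ability proxy (total score) and group flag.
rng = np.random.default_rng(1)
n = 2000
score = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)
# Simulate uniform DIF: the item is easier for group 1 at equal ability.
y = (rng.random(n) < 1 / (1 + np.exp(-(score + 0.6 * group)))).astype(float)

_, ll1 = fit_logistic(score[:, None], y)                   # ability only
_, ll2 = fit_logistic(np.column_stack([score, group]), y)  # + group term
lr_chi2 = 2 * (ll2 - ll1)   # LR test of uniform DIF, 1 df

# One effect-size index: change in Nagelkerke pseudo-R^2 between models.
ll0 = fit_logistic(np.empty((n, 0)), y)[1]  # intercept-only model
r2 = lambda ll: (1 - np.exp(2 * (ll0 - ll) / n)) / (1 - np.exp(2 * ll0 / n))
delta_r2 = r2(ll2) - r2(ll1)
```

A non-uniform DIF test would add a group-by-score interaction term as a third nested model.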
Psychoactive constituents of cannabis and their clinical implications: a systematic review.
Casajuana Köguel, Cristina; López-Pelayo, Hugo; Balcells-Olivero, Mª Mercedes; Colom, Joan; Gual, Antoni
2018-04-15
Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling a-priori established selection criteria, have been included. Results In 40 original papers, three naturally present cannabinoids (∆9-Tetrahydrocannabinol, ∆8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of ∆9-THC in plant material is higher (up to 40%) than that of other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that ∆9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on the final effects of cannabis. The current lack of standard methodology hinders homogenized research on cannabis health effects. Working towards a standard cannabis unit based on ∆9-THC is recommended.
Intimate Partner Violence, 1993-2010
... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...
The economic burden of patient safety targets in acute care: a systematic review
Mittmann, Nicole; Koo, Marika; Daneman, Nick; McDonald, Andrew; Baker, Michael; Matlow, Anne; Krahn, Murray; Shojania, Kaveh G; Etchells, Edward
2012-01-01
Background Our objective was to determine the quality of literature in costing of the economic burden of patient safety. Methods We selected 15 types of patient safety targets for our systematic review. We searched the literature published between 2000 and 2010 using the following terms: “costs and cost analysis,” “cost-effectiveness,” “cost,” and “financial management, hospital.” We appraised the methodologic quality of potentially relevant studies using standard economic methods. We recorded results in the original currency, adjusted for inflation, and then converted to 2010 US dollars for comparative purposes (2010 US$1.00 = 2010 €0.76). The quality of each costing study per patient safety target was also evaluated. Results We screened 1948 abstracts, and identified 158 potentially eligible studies, of which only 61 (39%) reported any costing methodology. In these 61 studies, we found wide estimates of the attributable costs of patient safety events ranging from $2830 to $10,074. In general hospital populations, the cost per case of hospital-acquired infection ranged from $2132 to $15,018. Nosocomial bloodstream infection was associated with costs ranging from $2604 to $22,414. Conclusion There are wide variations in the estimates of economic burden due to differences in study methods and methodologic quality. Greater attention to methodologic standards for economic evaluations in patient safety is needed. PMID:23097615
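The cost-standardization step described above (record in the original currency, adjust for inflation, then convert to 2010 US dollars at the stated rate of 2010 US$1.00 = €0.76) amounts to a two-line calculation. The price-index values below are hypothetical placeholders, not the indices the review used.

```python
# Hypothetical price-index levels by year (illustrative values only).
CPI = {2004: 86.0, 2007: 93.5, 2010: 100.0}
EUR_PER_USD_2010 = 0.76   # stated in the review: 2010 US$1.00 = 2010 EUR 0.76

def to_2010_usd(amount, year, currency):
    """Inflate a reported cost to 2010 price levels, then convert to USD."""
    inflated = amount * CPI[2010] / CPI[year]
    if currency == "USD":
        return inflated
    if currency == "EUR":
        return inflated / EUR_PER_USD_2010
    raise ValueError(f"no 2010 exchange rate on file for {currency}")

# Example: a cost of EUR 1,900 reported in 2004 prices.
cost_usd_2010 = to_2010_usd(1900, 2004, "EUR")
```

In practice inflation is adjusted with the index of the cost's own country before currency conversion; this sketch shows only the order of operations.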
Atalağ, Koray; Bilgen, Semih; Gür, Gürden; Boyacioğlu, Sedat
2007-09-01
There are very few evaluation studies for the Minimal Standard Terminology for Digestive Endoscopy. This study aims to evaluate the usage of the Turkish translation of Minimal Standard Terminology by developing an endoscopic information system. After elicitation of requirements, database modeling and software development were performed. Minimal Standard Terminology driven forms were designed for rapid data entry. The endoscopic report was rapidly created by applying basic Turkish syntax and grammar rules. Entering free text and also editing of the final report were possible. After three years of live usage, data analysis was performed and results were evaluated. The system has been used for reporting of all endoscopic examinations. 15,638 valid records were analyzed, including 11,381 esophagogastroduodenoscopies, 2,616 colonoscopies, 1,079 rectoscopies and 562 endoscopic retrograde cholangiopancreatographies. In accordance with other previous validation studies, the overall usage of Minimal Standard Terminology terms was very high: 85% for examination characteristics, 94% for endoscopic findings and 94% for endoscopic diagnoses. Some new terms, attributes and allowed values were also added for better clinical coverage. Minimal Standard Terminology has been shown to cover a high proportion of routine endoscopy reports. Good user acceptance proves that both the terms and structure of Minimal Standard Terminology were consistent with usual clinical thinking. However, future work on Minimal Standard Terminology is mandatory for better coverage of endoscopic retrograde cholangiopancreatography examinations. Technically, new software development methodologies have to be sought to lower the cost of development and of the maintenance phase. They should also address integration and interoperability of disparate information systems.
Jünger, Saskia; Payne, Sheila A; Brine, Jenny; Radbruch, Lukas; Brearley, Sarah G
2017-09-01
The Delphi technique is widely used for the development of guidance in palliative care, influencing decisions relevant to patient care. To systematically examine the application of the Delphi technique for the development of best practice guidelines in palliative care. A methodological systematic review was undertaken using the databases PubMed, CINAHL, Web of Science, Academic Search Complete and EMBASE. Original articles (English language) were included when reporting on empirical studies that had used the Delphi technique to develop guidance for good clinical practice in palliative care. Data extraction included a quality appraisal of the rigour in conduct of the studies and the quality of reporting. A total of 30 empirical studies (1997-2015) were considered for full-text analysis. Considerable differences were identified regarding the rigour of the design and the reporting of essential process and outcome parameters. Furthermore, discrepancies regarding the use of terms for describing the method were observed, for example, concerning the understanding of a 'round' or a 'modified Delphi study'. Substantial variation was found concerning the quality of the study conduct and the transparency of reporting of Delphi studies used for the development of best practice guidance in palliative care. Since the credibility of the resulting recommendations depends on the rigorous use of the Delphi technique, there is a need for consistency and quality both in the conduct and reporting of studies. To allow a critical appraisal of the methodology and the resulting guidance, a reporting standard for Conducting and REporting of DElphi Studies (CREDES) is proposed.
Luo, Jing; Xu, Hao; Yang, Guoyan; Qiu, Yu; Liu, Jianping; Chen, Keji
2014-08-01
Oral Chinese proprietary medicine (CPM) is commonly used to treat angina pectoris, and many relevant systematic reviews/meta-analyses are available. However, these reviews have not been systematically summarized and evaluated. We conducted an overview of these reviews, and explored their methodological and reporting quality to inform both practice and further research. We included systematic reviews/meta-analyses on oral CPM in treating angina until March 2013 by searching PubMed, Embase, the Cochrane Library and four Chinese databases. We extracted data according to a pre-designed form, and assessed the methodological and reporting characteristics of the reviews in terms of AMSTAR and PRISMA respectively. Most of the data analyses were descriptive. 36 systematic reviews/meta-analyses involving over 82,105 participants with angina reviewing 13 kinds of oral CPM were included. The main outcomes assessed in the reviews were surrogate outcomes (34/36, 94.4%), adverse events (31/36, 86.1%), and symptoms (30/36, 83.3%). Six reviews (6/36, 16.7%) drew definitely positive conclusions, while the others suggested potential benefits in the symptoms, electrocardiogram, and adverse events. The overall methodological and reporting quality of the reviews was limited, with many serious flaws such as the lack of a review protocol and non-comprehensive literature searches. Though many systematic reviews/meta-analyses on oral CPM for angina suggested potential benefits or definitely positive effects, stakeholders should interpret the findings of these reviews with caution, considering the overall limited methodological and reporting quality. We recommend that further studies be appropriately conducted and that systematic reviews be reported according to the PRISMA standard. Copyright © 2014 Elsevier Ltd. All rights reserved.
Funding source and the quality of reports of chronic wounds trials: 2004 to 2011
2014-01-01
Background Critical commentaries suggest that wound care randomised controlled trials (RCTs) are often poorly reported with many methodological flaws. Furthermore, interventions in chronic wounds, rather than being drugs, are often medical devices for which there are no requirements for RCTs to bring products to market. RCTs in wounds trials therefore potentially represent a form of marketing. This study presents a methodological overview of chronic wound trials published between 2004 and 2011 and investigates the influence of industry funding on methodological quality. Methods A systematic search for RCTs for the treatment of chronic wounds published in the English language between 2004 and 2011 (inclusive) in the Cochrane Wounds Group Specialised Register of Trials was carried out. Data were extracted on aspects of trial design, conduct and quality including sample size, duration of follow-up, specification of a primary outcome, use of surrogate outcomes, and risks of bias. In addition, the prevalence of industry funding was assessed and its influence on the above aspects of trial design, conduct and quality was assessed. Results A total of 167 RCTs met our inclusion criteria. We found chronic wound trials often have short durations of follow-up (median 12 weeks), small sample sizes (median 63), fail to define a primary outcome in 41% of cases, and those that do define a primary outcome, use surrogate measures of healing in 40% of cases. Only 40% of trials used appropriate methods of randomisation, 25% concealed allocation and 34% blinded outcome assessors. Of the included trials, 41% were wholly or partially funded by industry, 33% declared non-commercial funding and 26% did not report a funding source. Industry funding was not statistically significantly associated with any measure of methodological quality, though this analysis was probably underpowered. 
Conclusions This overview confirms concerns raised about the methodological quality of RCTs in wound care and illustrates that greater efforts must be made to follow international standards for conducting and reporting RCTs. There is currently minimal evidence of an influence of industry funding on methodological quality although analyses had limited power and funding source was not reported for a quarter of studies. PMID:24422753
Funding source and the quality of reports of chronic wounds trials: 2004 to 2011.
Hodgson, Robert; Allen, Richard; Broderick, Ellen; Bland, J Martin; Dumville, Jo C; Ashby, Rebecca; Bell-Syer, Sally; Foxlee, Ruth; Hall, Jill; Lamb, Karen; Madden, Mary; O'Meara, Susan; Stubbs, Nikki; Cullum, Nicky
2014-01-14
Nikendei, C.; Ganschow, P.; Groener, J. B.; Huwendiek, S.; Köchel, A.; Köhl-Hackert, N.; Pjontek, R.; Rodrian, J.; Scheibe, F.; Stadler, A.-K.; Steiner, T.; Stiepak, J.; Tabatabai, J.; Utz, A.; Kadmon, M.
2016-01-01
The competent physical examination of patients and the safe and professional implementation of clinical procedures constitute essential components of medical practice in nearly all areas of medicine. The central objective of the projects “Heidelberg standard examination” and “Heidelberg standard procedures”, which were initiated by students, was to establish uniform interdisciplinary standards for physical examination and clinical procedures, and to distribute them in coordination with all clinical disciplines at the Heidelberg University Hospital. The presented project report illuminates the background of the initiative and its methodological implementation. Moreover, it describes the multimedia documentation in the form of pocketbooks and a multimedia internet-based platform, as well as the integration into the curriculum. The project presentation aims to provide orientation and action guidelines to facilitate similar processes in other faculties. PMID:27579354
Madanat, Rami; Mäkinen, Tatu J; Aro, Hannu T; Bragdon, Charles; Malchau, Henrik
2014-09-01
Guidelines for standardization of radiostereometry (RSA) of implants were published in 2005 to facilitate comparison of outcomes between various research groups. In this systematic review, we determined how well studies have adhered to these guidelines. We carried out a literature search to identify all articles published between January 2000 and December 2011 that used RSA in the evaluation of hip or knee prosthesis migration. 2 investigators independently evaluated each of the studies for adherence to the 13 individual guideline items. Since some of the 13 points included more than 1 criterion, studies were assessed on whether each point was fully met, partially met, or not met. 153 studies that met our inclusion criteria were identified. 61 of these were published before the guidelines were introduced (2000-2005) and 92 after the guidelines were introduced (2006-2011). The methodological quality of RSA studies clearly improved from 2000 to 2011. None of the studies fully met all 13 guidelines. Nearly half (43) of the studies published after the guidelines demonstrated a high methodological quality and adhered at least partially to 10 of the 13 guidelines, whereas less than one-fifth (11) of the studies published before the guidelines had the same methodological quality. Commonly unaddressed guideline items were related to imaging methodology, determination of precision from double examinations, and also mean error of rigid-body fitting and condition number cutoff levels. The guidelines have improved methodological reporting in RSA studies, but adherence to these guidelines is still relatively low. There is a need to update and clarify the guidelines for clinical hip and knee arthroplasty RSA studies.
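The "determination of precision from double examinations" that studies commonly left unaddressed is a simple computation: measure the same patients twice under zero-motion conditions and report the limit within which about 95% of the paired differences fall. The measurements below are invented, and a normal quantile stands in for the t critical value that guideline practice prescribes; both are assumptions of this sketch.

```python
import statistics as st

def rsa_precision(first, second, confidence=0.95):
    """Precision of an RSA migration measure from double examinations:
    the limit within which ~95% of repeat-measurement differences fall.
    Uses a normal quantile (stdlib) in place of the t critical value."""
    diffs = [a - b for a, b in zip(first, second)]
    sd = st.stdev(diffs)
    z = st.NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * sd

# Hypothetical repeated migration measurements (mm) of the same patients.
exam1 = [0.02, -0.05, 0.10, 0.03, -0.08, 0.06, -0.01, 0.04]
exam2 = [0.05, -0.02, 0.06, 0.00, -0.03, 0.09, 0.02, 0.01]
precision_mm = rsa_precision(exam1, exam2)
```

Reporting this value per migration axis, together with the mean error of rigid-body fitting and the condition-number cutoff, covers the guideline items the review found most often missing.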
Deep Borehole Emplacement Mode Hazard Analysis Revision 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevougian, S. David
This letter report outlines a methodology and provides resource information for the Deep Borehole Emplacement Mode Hazard Analysis (DBEMHA). The main purpose is to identify the accident hazards and accident event sequences associated with the two emplacement mode options (wireline or drillstring), to outline a methodology for computing accident probabilities and frequencies, and to point to available databases on the nature and frequency of accidents typically associated with standard borehole drilling and nuclear handling operations. Risk mitigation and prevention measures, which have been incorporated into the two emplacement designs (see Cochran and Hardin 2015), are also discussed. A key intent of this report is to provide background information to brief subject matter experts involved in the Emplacement Mode Design Study. [Note: Revision 0 of this report is concentrated more on the wireline emplacement mode. It is expected that Revision 1 will contain further development of the preliminary fault and event trees for the drill string emplacement mode.]
Li, Tianjing; Hutfless, Susan; Scharfstein, Daniel O; Daniels, Michael J; Hogan, Joseph W; Little, Roderick J A; Roy, Jason A; Law, Andrew H; Dickersin, Kay
2014-01-01
To recommend methodological standards in the prevention and handling of missing data for primary patient-centered outcomes research (PCOR). We searched National Library of Medicine Bookshelf and Catalog as well as regulatory agencies' and organizations' Web sites in January 2012 for guidance documents that had formal recommendations regarding missing data. We extracted the characteristics of included guidance documents and recommendations. Using a two-round modified Delphi survey, a multidisciplinary panel proposed mandatory standards on the prevention and handling of missing data for PCOR. We identified 1,790 records and assessed 30 as having relevant recommendations. We proposed 10 standards as mandatory, covering three domains. First, the single best approach is to prospectively prevent missing data occurrence. Second, use of valid statistical methods that properly reflect multiple sources of uncertainty is critical when analyzing missing data. Third, transparent and thorough reporting of missing data allows readers to judge the validity of the findings. We urge researchers to adopt rigorous methodology and promote good science by applying best practices to the prevention and handling of missing data. Developing guidance on the prevention and handling of missing data for observational studies and studies that use existing records is a priority for future research. Copyright © 2014 Elsevier Inc. All rights reserved.
Booth, Andrew
2016-05-04
Qualitative systematic reviews or qualitative evidence syntheses (QES) are increasingly recognised as a way to enhance the value of systematic reviews (SRs) of clinical trials. They can explain the mechanisms by which interventions, evaluated within trials, might achieve their effect. They can investigate differences in effects between different population groups. They can identify which outcomes are most important to patients, carers, health professionals and other stakeholders. QES can explore the impact of acceptance, feasibility, meaningfulness and implementation-related factors within a real-world setting and thus contribute to the design and further refinement of future interventions. To produce valid, reliable and meaningful QES requires systematic identification of relevant qualitative evidence. Although the methodologies of QES, including methods for information retrieval, are well documented, little empirical evidence exists to inform their conduct and reporting. This structured methodological overview examines papers on searching for qualitative research identified from the Cochrane Qualitative and Implementation Methods Group Methodology Register and from citation searches of 15 key papers. A single reviewer reviewed 1299 references. Papers reporting methodological guidance, use of innovative methodologies or empirical studies of retrieval methods were categorised under eight topical headings: overviews and methodological guidance, sampling, sources, structured questions, search procedures, search strategies and filters, supplementary strategies and standards. This structured overview presents a contemporaneous view of information retrieval for qualitative research and identifies a future research agenda. This review concludes that current practice in information retrieval for qualitative research rests on weak empirical evidence. A trend towards improved transparency of search methods and further evaluation of key search procedures offers the prospect of rapid development of search methods.
All-Cause and External Mortality in Released Prisoners: Systematic Review and Meta-Analysis
Zlodre, Jakov
2012-01-01
Objectives. We systematically reviewed studies of mortality following release from prison and examined possible demographic and methodological factors associated with variation in mortality rates. Methods. We searched 5 computer-based literature indexes to conduct a systematic review of studies that reported all-cause, drug-related, suicide, and homicide deaths of released prisoners. We extracted and meta-analyzed crude death rates and standardized mortality ratios by age, gender, and race/ethnicity, where reported. Results. Eighteen cohorts met review criteria, reporting 26,163 deaths with substantial heterogeneity in rates. The all-cause crude death rates ranged from 720 to 2054 per 100,000 person-years. Male all-cause standardized mortality ratios ranged from 1.0 to 9.4 and female standardized mortality ratios from 2.6 to 41.3. There were higher standardized mortality ratios in White, female, and younger prisoners. Conclusions. Released prisoners are at increased risk for death following release from prison, particularly in the early period. Aftercare planning for released prisoners could potentially have a large public health impact, and further work is needed to determine whether certain groups should be targeted as part of strategies to reduce mortality. PMID:23078476
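The standardized mortality ratio (SMR) used in the review above is the number of observed deaths in a cohort divided by the number expected if the cohort had experienced the age-specific death rates of a reference population. A minimal sketch of that arithmetic (the function name and the cohort figures below are hypothetical illustrations, not data from the review):

```python
def smr(observed_deaths, person_years_by_age, reference_rates_by_age):
    """Standardized mortality ratio: observed deaths in a cohort divided by
    the deaths expected under the age-specific death rates of a reference
    (general) population."""
    expected = sum(person_years_by_age[age] * reference_rates_by_age[age]
                   for age in person_years_by_age)
    return observed_deaths / expected

# Hypothetical cohort: person-years of follow-up and reference death rates
# (deaths per person-year) in two age bands.
person_years = {"20-39": 50_000, "40-59": 30_000}
rates = {"20-39": 0.002, "40-59": 0.005}  # expected deaths: 100 + 150 = 250
print(smr(500, person_years, rates))  # SMR of about 2.0: double the expected deaths
```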
Retention of Content Utilizing a Flipped Classroom Approach.
Shatto, Bobbi; L'Ecuyer, Kristine; Quinn, Jerod
The flipped classroom experience promotes retention and accountability for learning. The authors report their evaluation of a flipped classroom for accelerated second-degree nursing students during their primary medical-surgical nursing course. Standardized HESI® scores were compared between a group of students who experienced the flipped classroom and a previous group who had traditional teaching methods. Short- and long-term retention was measured using standardized exams 3 months and 12 months following the course. Results indicated that short-term retention was greater and long-term retention was significantly greater in the students who were taught using flipped classroom methodology.
Risk Assessment Methodology Based on the NISTIR 7628 Guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R
2013-01-01
Earlier work describes computational models of critical infrastructure that allow an analyst to estimate the security of a system in terms of the impact of loss per stakeholder resulting from security breakdowns. Here, we consider how to identify, monitor and estimate risk impact and probability for different smart grid stakeholders. Our constructive method leverages currently available standards and defined failure scenarios. We utilize the National Institute of Standards and Technology (NIST) Interagency or Internal Reports (NISTIR) 7628 as a basis to apply Cyberspace Security Econometrics system (CSES) for comparing design principles and courses of action in making security-related decisions.
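The abstract above describes estimating risk impact and probability per smart grid stakeholder. As a rough illustration of that underlying arithmetic only (a hypothetical sketch, not the actual CSES implementation), a stakeholder's risk exposure can be expressed as the probability-weighted sum of losses across defined failure scenarios:

```python
def stakeholder_risk(scenarios):
    """Expected annual loss for one stakeholder: the sum over failure
    scenarios of (annual occurrence probability x loss if it occurs).

    scenarios: iterable of (probability, loss) pairs."""
    return sum(probability * loss for probability, loss in scenarios)

# Hypothetical stakeholder facing two NISTIR 7628-style failure scenarios:
# a fairly likely small outage and a rare large security breach.
scenarios = [(0.1, 50_000), (0.02, 1_000_000)]
print(stakeholder_risk(scenarios))  # roughly 25,000 in expected annual loss
```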
Young, Maria-Elena De Trinidad; Madrigal, Daniel S
2017-01-01
Undocumented status is rarely measured in health research, yet it influences the lives and well-being of immigrants. The growing body of research on undocumented status and health shows the need to assess the measurement of this legal status. We discuss the definition of undocumented status, conduct a systematic review of the methodological approaches currently taken to measure undocumented status of immigrants in the USA, and discuss recommendations for advancement of measurement methods. We conducted a systematic review of 61 studies indexed in PubMed, conducted in the USA, and published from 2004 to 2014. We categorized each of the studies' data source and type, measurement type, and information for classifying undocumented participants. Studies used self-reported or proxy measures of legal status. Information to classify undocumented participants included self-reported status, possession of a Social Security number, possession of health insurance or institutional resources, concern about deportation, and participant characteristics. Findings show it is feasible to collect self-reported measures of undocumented status. We recommend that researchers collect self-reported measures of undocumented status whenever possible and limit the use of proxy measures. Validated and standardized measures are needed for within and across country measurement. Authors should provide methodological information about measurement in publications. Finally, individuals who are undocumented should be included in the development of these methodologies. This systematic review is not registered.
Tracer methodology: an appropriate tool for assessing compliance with accreditation standards?
Bouchard, Chantal; Jean, Olivier
2017-10-01
Tracer methodology has been used by Accreditation Canada since 2008 to collect evidence on the quality and safety of care and services, and to assess compliance with accreditation standards. Given the importance of this methodology in the accreditation program, the objective of this study is to assess the quality of the methodology and identify its strengths and weaknesses. A mixed quantitative and qualitative approach was adopted to evaluate consistency, appropriateness, effectiveness and stakeholder synergy in applying the methodology. An online questionnaire was sent to 468 Accreditation Canada surveyors. According to surveyors' perceptions, tracer methodology is an effective tool for collecting useful, credible and reliable information to assess compliance with Qmentum program standards and priority processes. The results show good coherence between methodology components (appropriateness of the priority processes evaluated, activities to evaluate a tracer, etc.). The main weaknesses are the time constraints faced by surveyors and management's lack of cooperation during the evaluation of tracers. The inadequate amount of time allowed for the methodology to be applied properly raises questions about the quality of the information obtained. This study paves the way for a future, more in-depth exploration of the identified weaknesses to help the accreditation organization make more targeted improvements to the methodology. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Whalen, Robert T.; Napel, Sandy; Yan, Chye H.
1996-01-01
Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'
Poor methodological detail precludes experimental repeatability and hampers synthesis in ecology.
Haddaway, Neal R; Verhoeven, Jos T A
2015-10-01
Despite the scientific method's central tenets of reproducibility (the ability to obtain similar results when repeated) and repeatability (the ability to replicate an experiment based on methods described), published ecological research continues to fail to provide sufficient methodological detail to allow either repeatability or verification. Recent systematic reviews highlight the problem, with one example demonstrating that an average of 13% of studies per year (±8.0 [SD]) failed to report sample sizes. The problem affects the ability to verify the accuracy of any analysis, to repeat methods used, and to assimilate the study findings into powerful and useful meta-analyses. The problem is common in a variety of ecological topics examined to date, and despite previous calls for improved reporting and metadata archiving, which could indirectly alleviate the problem, there is no indication of an improvement in reporting standards over time. Here, we call on authors, editors, and peer reviewers to consider repeatability as a top priority when evaluating research manuscripts, bearing in mind that legacy and integration into the evidence base can drastically improve the impact of individual research reports.
Quality of Reporting Nutritional Randomized Controlled Trials in Patients With Cystic Fibrosis.
Daitch, Vered; Babich, Tanya; Singer, Pierre; Leibovici, Leonard
2016-08-01
Randomized controlled trials (RCTs) have a major role in the making of evidence-based guidelines. The aim of the present study was to critically appraise the RCTs that addressed nutritional interventions in patients with cystic fibrosis. Embase, PubMed, and the Cochrane Library were systematically searched until July 2015. Methodology and reporting of nutritional RCTs were evaluated by the Consolidated Standards of Reporting Trials (CONSORT) checklist and additional dimensions relevant to patients with CF. Fifty-one RCTs were included. Full details on methods were provided in a minority of studies. The mean duration of intervention was <6 months; 56.9% of the RCTs did not define a primary outcome; 70.6% of studies did not provide details on sample size calculation; and only 31.4% reported on subgroups or distinguished between important subgroups. The examined RCTs were characterized by weak methodology, small numbers of patients with no sample size calculations, and relatively short interventions, and often did not examine outcomes that are important to the patient. Improvement over the years has been minor.
Quality of reporting in oncology phase II trials: A 5-year assessment through systematic review
Langrand-Escure, Julien; Rivoirard, Romain; Oriol, Mathieu; Tinquaut, Fabien; Rancoule, Chloé; Chauvin, Frank; Magné, Nicolas; Bourmaud, Aurélie
2017-01-01
Background: Phase II clinical trials are a cornerstone of the development of experimental treatments. They work as a "filter" for phase III confirmation trials. Surprisingly, the attrition rate in phase III trials in oncology is significantly higher than in any other medical specialty, suggesting that phase II trials in oncology fail to achieve their goal. Objective: The present study aims at estimating the quality of reporting in published oncology phase II clinical trials. Data sources: A literature review was conducted among all phase II and phase II/III clinical trials published during a 5-year period (2010–2015). Study eligibility criteria: All articles electronically published by three randomly selected oncology journals with impact factors >4 were included: Journal of Clinical Oncology, Annals of Oncology and British Journal of Cancer. Intervention: Quality of reporting was assessed using the Key Methodological Score. Results: 557 articles were included. 315 trials were single-arm studies (56.6%), 193 (34.6%) were randomized and 49 (8.8%) were non-randomized multiple-arm studies. The Methodological Score was equal to 0 (lowest level), 1, 2 or 3 (highest level) for 22 (3.9%), 119 (21.4%), 270 (48.5%) and 146 (26.2%) articles, respectively. The primary end point was almost systematically reported (90.5%), while sample size calculation was missing in 66% of the articles. Three variables were independently associated with reporting of a high standard: presence of a statistical design (p <0.001), multicenter trial (p = 0.012) and per-protocol analysis (p <0.001). Limitations: Screening was mainly performed by a sole author, and the Key Methodological Score was based on only 3 items, making grey zones difficult to interpret. Conclusions and implications of key findings: This literature review highlights the existence of gaps in the quality of reporting. It therefore raises questions about the suitability of the methodology as well as the quality of these trials, reporting being incomplete in the corresponding articles. PMID:29216190
Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis
Bang, Heejung; Zhao, Hongwei
2014-01-01
Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
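The incremental cost-effectiveness ratio described above is the difference in cost between two strategies divided by the difference in their health outcomes. A minimal sketch with hypothetical figures (the function name and the use of QALYs as the outcome unit are illustrative assumptions, not taken from the article):

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: additional cost per extra
    unit of health outcome gained by the new strategy over the reference."""
    delta_cost = cost_new - cost_ref
    delta_effect = effect_new - effect_ref
    if delta_effect == 0:
        raise ValueError("equal effectiveness: ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical example: a new therapy costs 12,000 vs 10,000 for the
# reference and yields 1.5 vs 1.1 QALYs, so the extra 0.4 QALY costs
# 2,000 in total, i.e. about 5,000 per QALY gained.
print(icer(12_000, 10_000, 1.5, 1.1))
```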
Investigating human cognitive performance during spaceflight
NASA Astrophysics Data System (ADS)
Pattyn, Nathalie; Migeotte, Pierre-Francois; Demaeseleer, Wim; Kolinsky, Regine; Morais, Jose; Zizi, Martin
2005-08-01
Although astronauts' subjective self-evaluations of cognitive functioning often report impairments, to date most studies of human higher cognitive functions in space have not yielded univocal results. Since no gold standard exists to evaluate the higher cognitive functions, we proposed to assess astronauts' cognitive performance through a novel series of tests combined with simultaneous recording of physiological parameters. We report here the validation of our methodology and the cognitive results of this testing on the cosmonauts from the 11-day Odissea mission to the ISS (2002) and on a control group of pilots, carefully matched to the characteristics of the subjects. For the first time, we show a performance decrement in higher cognitive functions during space flight. Our results show a significant performance decrement for inflight measurements, as well as measurable variations in the executive control of cognitive functions. Taken together, our data establish the validity of our methodology and the presence of altered information processing in operational conditions.
COS-STAR: a reporting guideline for studies developing core outcome sets (protocol).
Kirkham, Jamie J; Gorst, Sarah; Altman, Douglas G; Blazeby, Jane; Clarke, Mike; Devane, Declan; Gargon, Elizabeth; Williamson, Paula R
2015-08-22
Core outcome sets can increase the efficiency and value of research and, as a result, there are an increasing number of studies looking to develop core outcome sets (COS). However, the credibility of a COS depends on both the use of sound methodology in its development and clear and transparent reporting of the processes adopted. To date there is no reporting guideline for reporting COS studies. The aim of this programme of research is to develop a reporting guideline for studies developing COS and to highlight some of the important methodological considerations in the process. The study will include a reporting guideline item generation stage which will then be used in a Delphi study. The Delphi study is anticipated to include two rounds. The first round will ask stakeholders to score the items listed and to add any new items they think are relevant. In the second round of the process, participants will be shown the distribution of scores for all stakeholder groups separately and asked to re-score. A final consensus meeting will be held with an expert panel and stakeholder representatives to review the guideline item list. Following the consensus meeting, a reporting guideline will be drafted and review and testing will be undertaken until the guideline is finalised. The final outcome will be the COS-STAR (Core Outcome Set-STAndards for Reporting) guideline for studies developing COS and a supporting explanatory document. To assess the credibility and usefulness of a COS, readers of a COS development report need complete, clear and transparent information on its methodology and proposed core set of outcomes. The COS-STAR guideline will potentially benefit all stakeholders in COS development: COS developers, COS users, e.g. trialists and systematic reviewers, journal editors, policy-makers and patient groups.
Minimum Information about a Genotyping Experiment (MIGEN)
Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.
2011-01-01
Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high-throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers for thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID:22180825
Melson, Ambrose J; Monk, Rebecca Louise; Heim, Derek
2016-12-01
Data-driven student drinking norms interventions are based on reported normative overestimation of the extent and approval of an average student's drinking. Self-reported differences between personal and perceived normative drinking behaviors and attitudes are taken at face value as evidence of actual levels of overestimation. This study investigates whether commonly used data collection methods and socially desirable responding (SDR) may inadvertently impede establishing "objective" drinking norms. U.K. students (N = 421; 69% female; mean age 20.22 years [SD = 2.5]) were randomly assigned to 1 of 3 versions of a drinking norms questionnaire: The standard multi-target questionnaire assessed respondents' drinking attitudes and behaviors (frequency of consumption, heavy drinking, units on a typical occasion) as well as drinking attitudes and behaviors for an "average student." Two deconstructed versions of this questionnaire assessed identical behaviors and attitudes for participants themselves or an "average student." The Balanced Inventory of Desirable Responding was also administered. Students who answered questions about themselves and peers reported more extreme perceived drinking attitudes for the average student compared with those reporting solely on the "average student." Personal and perceived reports of drinking behaviors did not differ between multitarget and single-target versions of the questionnaire. Among those who completed the multitarget questionnaire, after controlling for demographics and weekly drinking, SDR was related positively with the magnitude of difference between students' own reported behaviors/attitudes and those perceived for the average student. Standard methodological practices and socially desirable responding may be sources of bias in peer norm overestimation research. Copyright © 2016 by the Research Society on Alcoholism.
Software for imaging phase-shift interference microscope
NASA Astrophysics Data System (ADS)
Malinovski, I.; França, R. S.; Couceiro, I. B.
2018-03-01
In recent years an absolute interference microscope was created at the National Metrology Institute of Brazil (INMETRO). By its principle of operation, the instrument is an imaging phase-shifting interferometer (PSI) equipped with two stabilized lasers of different colours as traceable reference wavelength sources. We report here some progress in the development of the software for this instrument. The status of the ongoing internal validation and verification of the software is also reported. In contrast with the standard PSI method, a different methodology of phase evaluation is applied. Therefore, instrument-specific procedures for software validation and verification are adapted and discussed.
Pollard, Beth; Johnston, Marie; Dixon, Diane
2007-01-01
Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criteria in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of their i) theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest) and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. 
While, in general, the patient self-report measures had good methodological development, the clinician report measures appeared less well developed. It would be of value if new measures defined the construct of interest and if that construct were part of a theoretical model. By ensuring measures are both theoretically and empirically valid, improvements in subjective health outcome measures should be possible. PMID:17343739
An Interoperability Framework and Capability Profiling for Manufacturing Software
NASA Astrophysics Data System (ADS)
Matsuda, M.; Arai, E.; Nakano, N.; Wakai, H.; Takeda, H.; Takata, M.; Sasaki, H.
ISO/TC184/SC5/WG4 is working on ISO 16100: Manufacturing software capability profiling for interoperability. This paper reports on a manufacturing software interoperability framework and a capability profiling methodology which were proposed and developed through this international standardization activity. Within the context of a manufacturing application, a manufacturing software unit is considered to be capable of performing a specific set of functions defined by a manufacturing software system architecture. A manufacturing software interoperability framework consists of a set of elements and rules for describing the capability of software units to support the requirements of a manufacturing application. The capability profiling methodology makes use of the domain-specific attributes and methods associated with each specific software unit to describe capability profiles in terms of unit name, manufacturing functions, and other needed class properties. In this methodology, manufacturing software requirements are expressed in terms of software unit capability profiles.
Development and Application of Health-Based Screening Levels for Use in Water-Quality Assessments
Toccalino, Patricia L.
2007-01-01
Health-Based Screening Levels (HBSLs) are non-enforceable water-quality benchmarks that were developed by the U.S. Geological Survey in collaboration with the U.S. Environmental Protection Agency (USEPA) and others. HBSLs supplement existing Federal drinking-water standards and guidelines, thereby providing a basis for a more comprehensive evaluation of contaminant-occurrence data in the context of human health. Since the original methodology used to calculate HBSLs for unregulated contaminants was published in 2003, revisions have been made to the HBSL methodology in order to reflect updates to relevant USEPA policies. These revisions allow for the use of the most recent, USEPA peer-reviewed, publicly available human-health toxicity information in the development of HBSLs. This report summarizes the revisions to the HBSL methodology for unregulated contaminants, and updates the guidance on the use of HBSLs for interpreting water-quality data in the context of human health.
An Approach for Implementation of Project Management Information Systems
NASA Astrophysics Data System (ADS)
Běrziša, Solvita; Grabis, Jānis
Project management is governed by project management methodologies, standards, and other regulatory requirements. This chapter proposes an approach for implementing and configuring project management information systems according to requirements defined by these methodologies. The approach uses a project management specification framework to describe project management methodologies in a standardized manner. This specification is used to automatically configure the project management information system by applying appropriate transformation mechanisms. Development of the standardized framework is based on analysis of typical project management concepts and process and existing XML-based representations of project management. A demonstration example of project management information system's configuration is provided.
Measures of outdoor play and independent mobility in children and youth: A methodological review.
Bates, Bree; Stone, Michelle R
2015-09-01
Declines in children's outdoor play have been documented globally, which are partly due to heightened restrictions around children's independent mobility. Literature on outdoor play and children's independent mobility is increasing, yet no paper has summarized the various methodological approaches used. A methodological review could highlight most commonly used measures and comprehensive research designs that could result in more standardized methodological approaches. Methodological review. A standardized protocol guided a methodological review of published research on measures of outdoor play and children's independent mobility in children and youth (0-18 years). Online searches of 8 electronic databases were conducted and studies included if they contained a subjective/objective measure of outdoor play or children's independent mobility. References of included articles were scanned to identify additional articles. Twenty-four studies were included on outdoor play, and twenty-three on children's independent mobility. Study designs were diverse. Common objective measures included accelerometry, global positioning systems and direct observation; questionnaires, surveys and interviews were common subjective measures. Focus groups, activity logs, monitoring sheets, travel/activity diaries, behavioral maps and guided tours were also utilized. Questionnaires were used most frequently, yet few studies used the same questionnaire. Five studies employed comprehensive, mixed-methods designs. Outdoor play and children's independent mobility have been measured using a wide variety of techniques, with only a few studies using similar methodologies. A standardized methodological approach does not exist. Future researchers should consider including both objective measures (accelerometry and global positioning systems) and subjective measures (questionnaires, activity logs, interviews), as more comprehensive designs will enhance understanding of each multidimensional construct. 
Creating a standardized methodological approach would improve study comparisons. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Howley, Lisa; Szauter, Karen; Perkowski, Linda; Clifton, Maurice; McNaughton, Nancy
2008-04-01
In order to assess or replicate the research findings of published reports, authors must provide adequate and transparent descriptions of their methods. We conducted 2 consecutive studies, the first to define reporting standards relating to the use of standardised patients (SPs) in research, and the second to evaluate the current literature according to these standards. Standards for reporting SPs in research were established by representatives of the Grants and Research Committee of the Association of Standardized Patient Educators (ASPE). An extensive literature search yielded 177 relevant English-language articles published between 1993 and 2005. Search terms included: 'standardised patient(s)'; 'simulated patient(s)'; 'objective structured clinical examination (OSCE)', and 'clinical skills assessment'. Articles were limited to those reporting the use of SPs as an outcome measure and published in 1 of 5 prominent health sciences education journals. Data regarding the SP encounter, SP characteristics, training and behavioural measure(s) were gathered. A random selection of 121 articles was evaluated according to 29 standards. Reviewers judged that few authors provided sufficient details regarding the encounter (21%, n = 25), SPs (16%, n = 19), training (15%, n = 15), and behavioural measures (38%, n = 44). Authors rarely reported SP gender (27%, n = 33) and age range (22%, n = 26), whether training was provided for the SPs (39%, n = 47) or other raters (24%, n = 29), and psychometric evidence to support the behavioural measure (23%, n = 25). The findings suggest that there is a need for increased rigor in reporting research involving SPs. In order to support the validity of research findings, journal editors, reviewers and authors are encouraged to provide adequate detail when describing SP methodology.
Becker, Christoph; Lauterbach, Gabriele; Spengler, Sarah; Dettweiler, Ulrich; Mess, Filip
2017-01-01
Background: Participants in Outdoor Education Programmes (OEPs) presumably benefit from these programmes in terms of their social and personal development, academic achievement and physical activity (PA). The aim of this systematic review was to identify studies about regular compulsory school- and curriculum-based OEPs, to categorise and evaluate reported outcomes, to assess the methodological quality, and to discuss possible benefits for students. Methods: We searched online databases to identify English- and German-language peer-reviewed journal articles that reported any outcomes at the student level. Two independent reviewers screened the identified studies for eligibility and assessed their methodological quality. Results: Thirteen studies were included for analysis. Most studies used a case-study design, the average number of participants was moderate (mean value (M) = 62.17; standard deviation (SD) = 64.12), and the methodological quality was on average moderate for qualitative studies (M = 0.52; SD = 0.11) and low for quantitative studies (M = 0.18; SD = 0.42). Eight studies described outcomes in social dimensions, seven in learning dimensions, and four were subsumed under additional outcomes, i.e., PA and health. Eleven studies reported positive effects, one reported both positive and negative effects, and one reported negative effects. PA and mental health were underrepresented as outcomes. Conclusion: There are indications that regular compulsory school- and curriculum-based OEPs can promote students' social, academic, physical and psychological development. Very little is known concerning students' PA or mental health. We recommend conducting more quasi-experimental and longitudinal studies with greater numbers of participants and high methodological quality to further investigate these tendencies. PMID:28475167
Standard methodologies for virus research in Apis mellifera
USDA-ARS?s Scientific Manuscript database
The international research network COLOSS (Prevention of honey bee COlony LOSSes) was established to coordinate efforts towards improving the health of the western honey bee at the global level. The COLOSS BEEBOOK contains a collection of chapters intended to standardize methodologies for monitoring ...
Standard methodologies for Nosema apis and N. ceranae research
USDA-ARS?s Scientific Manuscript database
The international research network COLOSS (Prevention of honey bee COlony LOSSes) was established to coordinate efforts towards improving the health of the western honey bee at the global level. The COLOSS BEEBOOK contains a collection of chapters intended to standardize methodologies for monitoring ...
Developing a standardized healthcare cost data warehouse.
Visscher, Sue L; Naessens, James M; Yawn, Barbara P; Reinalda, Megan S; Anderson, Stephanie S; Borah, Bijan J
2017-06-12
Research addressing value in healthcare requires a measure of cost. While there are many sources and types of cost data, each has strengths and weaknesses. Many researchers appear to create study-specific cost datasets, but the explanations of their costing methodologies are not always clear, causing their results to be difficult to interpret. Our solution, described in this paper, was to use widely accepted costing methodologies to create a service-level, standardized healthcare cost data warehouse from an institutional perspective that includes all professional and hospital-billed services for our patients. The warehouse is based on a National Institutes of Health-funded research infrastructure containing the linked health records and medical care administrative data of two healthcare providers and their affiliated hospitals. Since all patients are identified in the data warehouse, their costs can be linked to other systems and databases, such as electronic health records, tumor registries, and disease or treatment registries. We describe the two institutions' administrative source data; the reference files, which include Medicare fee schedules and cost reports; the process of creating standardized costs; and the warehouse structure. The costing algorithm can create inflation-adjusted standardized costs at the service line level for defined study cohorts on request. The resulting standardized costs contained in the data warehouse can be used to create detailed, bottom-up analyses of professional and facility costs of procedures, medical conditions, and patient care cycles without revealing business-sensitive information. After its creation, a standardized cost data warehouse is relatively easy to maintain and can be expanded to include data from other providers.
Individual investigators who may not have sufficient knowledge about administrative data do not have to try to create their own standardized costs on a project-by-project basis because our data warehouse generates standardized costs for defined cohorts upon request.
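The inflation-adjustment step described above can be sketched generically: each billed service's cost is rescaled to a reference year using a price index. The index values, years, and function below are hypothetical and for illustration only; they are not taken from the warehouse described in the abstract.

```python
# Hypothetical price index by year (invented values for illustration).
PRICE_INDEX = {2014: 100.0, 2015: 102.1, 2016: 104.3}
REFERENCE_YEAR = 2016

def inflation_adjusted(raw_cost, service_year,
                       index=PRICE_INDEX, ref_year=REFERENCE_YEAR):
    """Rescale a service-level cost into reference-year dollars."""
    return raw_cost * index[ref_year] / index[service_year]

# A $100.00 service billed in 2014 becomes $104.30 in 2016 dollars.
adjusted = inflation_adjusted(100.0, 2014)
```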
Rezapour, Aziz; Jafari, Abdosaleh; Mirmasoudi, Kosha; Talebianpour, Hamid
2017-09-01
Health economic evaluation research plays an important role in selecting cost-effective interventions. The purpose of this study was to assess the quality of published articles in Iranian journals related to economic evaluation in health care programs based on Drummond's checklist in terms of numbers, features, and quality. In the present review study, published articles (Persian and English) in Iranian journals related to economic evaluation in health care programs were searched using electronic databases. In addition, the methodological quality of articles' structure was analyzed by Drummond's standard checklist. Based on the inclusion criteria, the search of databases resulted in 27 articles that fully covered economic evaluation in health care programs. A review of articles in accordance with Drummond's criteria showed that the majority of studies had flaws. The most common methodological weakness in the articles was in terms of cost calculation and valuation. Considering such methodological faults in these studies, it is anticipated that these studies would not provide an appropriate feedback to policy makers to allocate health care resources correctly and select suitable cost-effective interventions. Therefore, researchers are required to comply with the standard guidelines in order to better execute and report on economic evaluation studies.
Liao, Yue; Skelton, Kara; Dunton, Genevieve; Bruening, Meg
2016-06-21
Ecological momentary assessment (EMA) is a method of collecting real-time data based on careful timing, repeated measures, and observations that take place in a participant's typical environment. Due to methodological advantages and rapid advancement in mobile technologies in recent years, more studies have adopted EMA in addressing topics of nutrition and physical activity in youth. The aim of this systematic review is to describe the EMA methodology that has been used in studies addressing nutrition and physical activity in youth and to provide a comprehensive checklist for reporting EMA studies. Thirteen studies were reviewed and analyzed for the following 5 areas of EMA methodology: (1) sampling and measures, (2) schedule, (3) technology and administration, (4) prompting strategy, and (5) response and compliance. Results of this review showed wide variability in the design and reporting of EMA studies in nutrition and physical activity among youth. The majority of studies (69%) monitored their participants during one period of time, although the monitoring period ranged from 4 to 14 days, and EMA surveys ranged from 2 to 68 times per day. More than half (54%) of the studies employed some type of electronic technology. Most (85%) of the studies used an interval-contingent prompting strategy. For studies that utilized electronic devices with an interval-contingent prompting strategy, none reported the actual number of EMA prompts received by participants out of the intended number of prompts. About half (46%) of the studies failed to report information about EMA compliance rates. For those that reported, compliance rates ranged from 44% to 96%, with an average of 71%. Findings from this review suggest that in order to identify best practices for EMA methodology in nutrition and physical activity research among youth, more standardized EMA reporting is needed.
Missing the key information about EMA design features and participant compliance might lead to misinterpretation of results. Future nutrition and physical activity EMA studies need to be more rigorous and thorough in descriptions of methodology and results. A reporting checklist was developed with the goal of enhancing reliability, efficacy, and overall interpretation of the findings for future studies that use EMAs.
GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.
2007-04-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
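The propagation-of-error step can be illustrated with a generic first-order (GUM-style) uncertainty calculation for a measured ratio; the function and numbers below are a minimal sketch of the standard uncorrelated-inputs formula, not code or data from the report itself.

```python
import math

def ratio_with_uncertainty(a, u_a, b, u_b, k=1.96):
    """First-order (GUM-style) propagation of uncertainty for r = a / b,
    assuming the two measured quantities are uncorrelated.
    Returns the ratio, its standard uncertainty, and a k-sigma
    coverage interval (k = 1.96 approximates 95% coverage)."""
    r = a / b
    # Relative variances add in quadrature for a quotient.
    u_r = abs(r) * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2)
    return r, u_r, (r - k * u_r, r + k * u_r)

# Example with made-up measurements: a = 10.0 +/- 0.1, b = 5.0 +/- 0.05.
r, u_r, ci = ratio_with_uncertainty(10.0, 0.1, 5.0, 0.05)
# r -> 2.0, with a relative uncertainty of about 1.4%.
```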
MULTIRESIDUE DETERMINATION OF ACIDIC PESTICIDES ...
A multiresidue pesticide methodology has been studied and results for acidics are reported here, with base/neutrals to follow. This work studies a literature procedure as a possible general approach to many pesticides and potentially other analytes that are considered to be liquid chromatographic candidates rather than gas chromatographic ones. The analysis of the sewage effluent of a major southwestern US city serves as an example of the application of the methodology to a real sample. Recovery studies were also conducted to validate the proposed extraction step. A gradient elution program was followed for the high performance liquid chromatography, leading to a general approach for acidics. Confirmation of identity was by EI GC/MS after conversion of the acids to the methyl esters (or other appropriate methylation) by means of trimethylsilyldiazomethane. 3,4-Dichlorophenoxyacetic acid was used as an internal standard to monitor the reaction, and PCB #19 was used as the quantitation internal standard. Although others have reported similar analyses of acids, conversion to the methyl ester was by means of diazomethane itself rather than by the more convenient and safer trimethylsilyldiazomethane. Thus, the present paper supports the use of trimethylsilyldiazomethane with all of these acids (trimethylsilyldiazomethane has been used in environmental work with some phenoxyacetic acid herbicides) and further supports the usefulness of this reagent as a potential re
Francis, Andrew J; Resendiz, Marino J E
2017-07-28
Solid-phase synthesis has been used to obtain canonical and modified polymers of nucleic acids, specifically of DNA or RNA, which has made it a popular methodology for applications in various fields and for different research purposes. The procedure described herein focuses on the synthesis, purification, and characterization of dodecamers of RNA 5'-[CUA CGG AAU CAU]-3' containing zero, one, or two modifications located at the C2'-O-position. The probes are based on 2-thiophenylmethyl groups, incorporated into RNA nucleotides via standard organic synthesis and introduced into the corresponding oligonucleotides via their respective phosphoramidites. This report makes use of phosphoramidite chemistry via the four canonical nucleobases (Uridine (U), Cytosine (C), Guanosine (G), Adenosine (A)), as well as 2-thiophenylmethyl functionalized nucleotides modified at the 2'-O-position; however, the methodology is amenable for a large variety of modifications that have been developed over the years. The oligonucleotides were synthesized on a controlled-pore glass (CPG) support followed by cleavage from the resin and deprotection under standard conditions, i.e., a mixture of ammonia and methylamine (AMA) followed by hydrogen fluoride/triethylamine/N-methylpyrrolidinone. The corresponding oligonucleotides were purified via polyacrylamide electrophoresis (20% denaturing) followed by elution, desalting, and isolation via reversed-phase chromatography (Sep-pak, C18-column). Quantification and structural parameters were assessed via ultraviolet-visible (UV-vis) and circular dichroism (CD) photometric analysis, respectively. This report aims to serve as a resource and guide for beginner and expert researchers interested in embarking in this field. It is expected to serve as a work-in-progress as new technologies and methodologies are developed. 
The description of the methodologies and techniques within this document correspond to a DNA/RNA synthesizer (refurbished and purchased in 2013) that uses phosphoramidite chemistry.
Adherence of hip and knee arthroplasty studies to RSA standardization guidelines
Mäkinen, Tatu J; Aro, Hannu T; Bragdon, Charles; Malchau, Henrik
2014-01-01
Background and purpose: Guidelines for standardization of radiostereometry (RSA) of implants were published in 2005 to facilitate comparison of outcomes between various research groups. In this systematic review, we determined how well studies have adhered to these guidelines. Methods: We carried out a literature search to identify all articles published between January 2000 and December 2011 that used RSA in the evaluation of hip or knee prosthesis migration. 2 investigators independently evaluated each of the studies for adherence to the 13 individual guideline items. Since some of the 13 points included more than 1 criterion, studies were assessed on whether each point was fully met, partially met, or not met. Results: 153 studies that met our inclusion criteria were identified. 61 of these were published before the guidelines were introduced (2000–2005) and 92 after the guidelines were introduced (2006–2011). The methodological quality of RSA studies clearly improved from 2000 to 2011. None of the studies fully met all 13 guidelines. Nearly half (43) of the studies published after the guidelines demonstrated a high methodological quality and adhered at least partially to 10 of the 13 guidelines, whereas less than one-fifth (11) of the studies published before the guidelines had the same methodological quality. Commonly unaddressed guideline items were related to imaging methodology, determination of precision from double examinations, and also mean error of rigid-body fitting and condition number cutoff levels. Interpretation: The guidelines have improved methodological reporting in RSA studies, but adherence to these guidelines is still relatively low. There is a need to update and clarify the guidelines for clinical hip and knee arthroplasty RSA studies. PMID:24954489
49 CFR 1111.9 - Procedural schedule in cases using simplified standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) SURFACE TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE COMPLAINT AND INVESTIGATION... the simplified standards: (1) In cases relying upon the Simplified-SAC methodology: Day 0—Complaint... dominance. (b) Defendant's second disclosure. In cases using the Simplified-SAC methodology, the defendant...
McCrae, Niall; Purssell, Edward
2015-12-01
Clear and logical eligibility criteria are fundamental to the design and conduct of a systematic review. This methodological review examined the quality of reporting and application of eligibility criteria in systematic reviews published in three leading medical journals. All systematic reviews in the BMJ, JAMA and The Lancet in the years 2013 and 2014 were extracted. These were assessed using a refined version of a checklist previously designed by the authors. A total of 113 papers were eligible, of which 65 were in BMJ, 17 in The Lancet and 31 in JAMA. Although a generally high level of reporting was found, eligibility criteria were often problematic. In 67% of papers, eligibility was specified after the search sources or terms. Unjustified time restrictions were used in 21% of reviews, and unpublished or unspecified data in 27%. Inconsistency between journals was apparent in the requirements for systematic reviews. The quality of reviews in these leading medical journals was high; however, there were issues that reduce the clarity and replicability of the review process. As well as providing a useful checklist, this methodological review informs the continued development of standards for systematic reviews. © 2015 John Wiley & Sons, Ltd.
A quality assessment of randomized controlled trial reports in endodontics.
Lucena, C; Souza, E M; Voinea, G C; Pulgar, R; Valderrama, M J; De-Deus, G
2017-03-01
To assess the quality of the randomized clinical trial (RCT) reports published in Endodontics between 1997 and 2012. Retrieval of RCTs in Endodontics was based on a search of the Thomson Reuters Web of Science (WoS) database (March 2013). Quality evaluation was performed using a checklist based on the Jadad criteria, CONSORT (Consolidated Standards of Reporting Trials) statement and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials). Descriptive statistics were used for frequency distribution of data. Student's t-test and Welch test were used to identify the influence of certain trial characteristics upon report quality (α = 0.05). A total of 89 RCTs were evaluated, and several methodological flaws were found: only 45% had random sequence generation at low risk of bias, 75% did not provide information on allocation concealment, and 19% were nonblinded designs. Regarding statistics, only 55% of the RCTs performed adequate sample size estimations, only 16% presented confidence intervals, and 25% did not provide the exact P-value. Also, 2% of the articles used no statistical tests, and in 87% of the RCTs, the information provided was insufficient to determine whether the statistical methodology applied was appropriate or not. Significantly higher scores were observed for multicentre trials (P = 0.023), RCTs signed by more than 5 authors (P = 0.03), articles belonging to journals ranked above the JCR median (P = 0.03), and articles complying with the CONSORT guidelines (P = 0.000). The quality of RCT reports in key areas for internal validity of the study was poor. Several measures, such as compliance with the CONSORT guidelines, are important in order to raise the quality of RCTs in Endodontics. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.
2016-01-01
The objective of this report is to develop and implement a physics-based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS® and Autolev, two different industry-standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual-level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.
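The constraint-force idea behind CFE-style multibody simulation can be sketched with the generic Lagrange-multiplier (augmented system) formulation: solve simultaneously for accelerations and the multipliers whose J^T·λ term is the joint constraint force. This is a minimal textbook illustration under that standard formulation, not NASA's CFE implementation.

```python
import numpy as np

def constrained_accel(M, f, J, gamma):
    """Solve M*qdd = f + J^T*lam subject to the acceleration-level
    constraint J*qdd = gamma. Returns generalized accelerations qdd
    and multipliers lam; J^T*lam is the constraint force at the joint."""
    n, m = M.shape[0], J.shape[0]
    # Augmented (KKT) system: [[M, -J^T], [J, 0]] [qdd; lam] = [f; gamma]
    A = np.block([[M, -J.T], [J, np.zeros((m, m))]])
    b = np.concatenate([f, gamma])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

# Two unit masses rigidly joined (equal accelerations), force on mass 1 only.
qdd, lam = constrained_accel(np.eye(2), np.array([1.0, 0.0]),
                             np.array([[1.0, -1.0]]), np.array([0.0]))
# qdd -> [0.5, 0.5]: the joint transmits half the applied force.
```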
Basch, Ethan; Abernethy, Amy P; Mullins, C Daniel; Reeve, Bryce B; Smith, Mary Lou; Coons, Stephen Joel; Sloan, Jeff; Wenzel, Keith; Chauhan, Cynthia; Eppard, Wayland; Frank, Elizabeth S; Lipscomb, Joseph; Raymond, Stephen A; Spencer, Merianne; Tunis, Sean
2012-12-01
Examining the patient's subjective experience in prospective clinical comparative effectiveness research (CER) of oncology treatments or process interventions is essential for informing decision making. Patient-reported outcome (PRO) measures are the standard tools for directly eliciting the patient experience. There are currently no widely accepted standards for developing or implementing PRO measures in CER. Recommendations for the design and implementation of PRO measures in CER were developed via a standardized process including multistakeholder interviews, a technical working group, and public comments. Key recommendations are to include assessment of patient-reported symptoms as well as health-related quality of life in all prospective clinical CER studies in adult oncology; to identify symptoms relevant to a particular study population and context based on literature review and/or qualitative and quantitative methods; to assure that PRO measures used are valid, reliable, and sensitive in a comparable population (measures particularly recommended include EORTC QLQ-C30, FACT, MDASI, PRO-CTCAE, and PROMIS); to collect PRO data electronically whenever possible; to employ methods that minimize missing patient reports and include a plan for analyzing and reporting missing PRO data; to report the proportion of responders and cumulative distribution of responses in addition to mean changes in scores; and to publish results of PRO analyses simultaneously with other clinical outcomes. Twelve core symptoms are recommended for consideration in studies in advanced or metastatic cancers. Adherence to methodologic standards for the selection, implementation, and analysis/reporting of PRO measures will lead to an understanding of the patient experience that informs better decisions by patients, providers, regulators, and payers.
Status of emerging standards for removable computer storage media and related contributions of NIST
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1992-01-01
Standards for removable computer storage media are needed so that users may reliably interchange data both within and among various computer installations. Furthermore, media interchange standards support competition in industry and prevent sole-source lock-in. NIST participates in magnetic tape and optical disk standards development through Technical Committees X3B5, Digital Magnetic Tapes, X3B11, Optical Digital Data Disk, and the Joint Technical Commission on Data Permanence. NIST also participates in other relevant national and international standards committees for removable computer storage media. Industry standards for digital magnetic tapes require the use of Standard Reference Materials (SRM's) developed and maintained by NIST. In addition, NIST has been studying care and handling procedures required for digital magnetic tapes. NIST has developed a methodology for determining the life expectancy of optical disks. NIST is developing care and handling procedures for optical digital data disks and is involved in a program to investigate error reporting capabilities of optical disk drives. This presentation reflects the status of emerging magnetic tape and optical disk standards, as well as NIST's contributions in support of these standards.
Tate, Robyn L; McDonald, Skye; Perdices, Michael; Togher, Leanne; Schultz, Regina; Savage, Sharon
2008-08-01
Rating scales that assess methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items to reduce the main sources of bias in single-case methodology as stipulated by authorities in the field, which were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range k = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. 
Items from the scale can also be used as a checklist in the design, reporting and critical appraisal of single-subject designs, thereby assisting to improve standards of single-case methodology.
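For readers unfamiliar with the intraclass correlation coefficient (ICC) figures reported above, a single-rater, absolute-agreement ICC(2,1) can be computed from a subjects-by-raters score matrix as below. This is the standard two-way random-effects textbook formula, not the PsycBITE team's code, and the example data are invented.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: array of shape (n_subjects, k_raters)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                  # mean square, subjects
    ms_c = ss_cols / (k - 1)                  # mean square, raters
    ms_e = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Perfectly agreeing raters give ICC = 1.
perfect = icc_2_1(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
```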
Key challenges for nanotechnology: Standardization of ecotoxicity testing.
Cerrillo, Cristina; Barandika, Gotzone; Igartua, Amaya; Areitioaurtena, Olatz; Mendoza, Gemma
2017-04-03
Nanotechnology is expected to contribute to the protection of the environment, but many uncertainties exist regarding the environmental and human implications of manufactured nanomaterials (MNMs). Contradictory results have been reported for their ecotoxicity to aquatic organisms, which constitute one of the most important pathways for their entrance and transfer throughout the food web. The present review is focused on the international strategies that are laying the foundations of the ecotoxicological assessment of MNMs. Specific advice is provided on the preparation of MNM dispersions in the culture media of the organisms, which is considered a key factor to overcome the limitations in the standardization of the test methodologies.
RESIDUAL RISK ASSESSMENTS - RESIDUAL RISK ...
This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Coke Ovens. These assessments utilize existing models and databases to examine the multi-media and multi-pollutant impacts of air toxics emissions on human health and the environment. Details on the assessment process and methodologies can be found in EPA's Residual Risk Report to Congress issued in March 1999 (see web site). The objective is to assess the health risks posed by air toxics emissions from Coke Ovens to determine if control technology standards previously established are adequately protecting public health.
Rotary-wing flight test methods used for the evaluation of night vision devices
NASA Astrophysics Data System (ADS)
Haworth, Loran A.; Blanken, Christopher J.; Szoboszlay, Zoltan P.
2001-08-01
The U.S. Army Aviation mission includes flying helicopters at low altitude, at night, and in adverse weather. Night Vision Devices (NVDs) are used to supplement the pilot's visual cues for night flying. As the military requirement to conduct night helicopter operations has increased, the impact of helicopter flight operations with NVD technology in the Degraded Visual Environment (DVE) became increasingly important to quantify. Aeronautical Design Standard-33 (ADS-33) was introduced to update rotorcraft handling qualities requirements and to quantify the impact of the NVDs in the DVE. As reported in this paper, flight test methodology in ADS-33 has been used by the handling qualities community to measure the impact of NVDs on task performance in the DVE. This paper provides the background and rationale behind the development of ADS-33 flight test methodology for handling qualities in the DVE, as well as the test methodology developed for human factor assessment of NVDs in the DVE. Lessons learned, shortcomings and recommendations for NVD flight test methodology are provided in this paper.
Realism and Pragmatism in a mixed methods study.
Allmark, Peter; Machaczek, Katarzyna
2018-06-01
A discussion of how adopting a Realist rather than Pragmatist methodology affects the conduct of mixed methods research. Mixed methods approaches are now extensively employed in nursing and other healthcare research. At the same time, realist methodology is increasingly used as philosophical underpinning of research in these areas. However, the standard philosophical underpinning of mixed methods research is Pragmatism, which is generally considered incompatible or at least at odds with Realism. This paper argues that Realism can be used as the basis of mixed methods research and that doing so carries advantages over using Pragmatism. A mixed method study into patient handover reports is used to illustrate how Realism affected its design and how it would have differed had a Pragmatist approach been taken. Discussion Paper. Philosophers Index; Google Scholar. Those undertaking mixed methods research should consider the use of Realist methodology with the addition of some insights from Pragmatism to do with the start and end points of enquiry. Realism is a plausible alternative methodology for those undertaking mixed methods studies. © 2018 John Wiley & Sons Ltd.
Sewell, Fiona; Doe, John; Gellatly, Nichola; Ragan, Ian; Burden, Natalie
2017-10-01
The current animal-based paradigm for safety assessment must change. In September 2016, the UK National Centre for Replacement, Refinement and Reduction of Animals in Research (NC3Rs) brought together scientists from regulatory authorities, academia and industry to review progress in bringing new methodology into regulatory use, and to identify ways to expedite progress. Progress has been slow. Science is advancing to make this possible but changes are necessary. The new paradigm should allow new methodology to be adopted once it is developed rather than being based on a fixed set of studies. Regulatory authorities can help by developing Performance-Based Standards. The most pressing need is in repeat dose toxicology, although setting standards will be more complex than in areas such as sensitization. Performance standards should be aimed directly at human safety, not at reproducing the results of animal studies. Regulatory authorities can also aid progress towards the acceptance of non-animal based methodology by promoting "safe-haven" trials where traditional and new methodology data can be submitted in parallel to build up experience in the new methods. Industry can play its part in the acceptance of new methodology, by contributing to the setting of performance standards and by actively contributing to "safe-haven" trials. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Green, Andrew; Liles, Clive; Rushton, Alison; Kyte, Derek G
2014-12-01
This systematic review investigated the measurement properties of disease-specific patient-reported outcome measures used in Patellofemoral Pain Syndrome. Two independent reviewers conducted a systematic search of key databases (MEDLINE, EMBASE, AMED, CINHAL+ and the Cochrane Library from inception to August 2013) to identify relevant studies. A third reviewer mediated in the event of disagreement. Methodological quality was evaluated using the validated COSMIN (Consensus-based Standards for the Selection of Health Measurement Instruments) tool. Data synthesis across studies determined the level of evidence for each patient-reported outcome measure. The search strategy returned 2177 citations. Following the eligibility review phase, seven studies, evaluating twelve different patient-reported outcome measures, met inclusion criteria. A 'moderate' level of evidence supported the structural validity of several measures: the Flandry Questionnaire, Anterior Knee Pain Scale, Functional Index Questionnaire, Eng and Pierrynowski Questionnaire and Visual Analogue Scales for 'usual' and 'worst' pain. In addition, there was a 'Limited' level of evidence supporting the test-retest reliability and validity (cross-cultural, hypothesis testing) of the Persian version of the Anterior Knee Pain Scale. Other measurement properties were evaluated with poor methodological quality, and many properties were not evaluated in any of the included papers. Current disease-specific outcome measures for Patellofemoral Pain Syndrome require further investigation. Future studies should evaluate all important measurement properties, utilising an appropriate framework such as COSMIN to guide study design, to facilitate optimal methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
Initiative for standardization of reporting genetics of male infertility.
Traven, Eva; Ogrinc, Ana; Kunej, Tanja
2017-02-01
The number of publications on research of male infertility is increasing. Technologies used in research of male infertility generate complex results and various types of data that need to be appropriately managed, arranged, and made available to other researchers for further use. In our previous study, we collected over 800 candidate loci for male fertility in seven mammalian species. However, the continuation of the work towards a comprehensive database of candidate genes associated with different types of idiopathic human male infertility is challenging due to fragmented information, obtained from a variety of technologies and various omics approaches. Results are published in different forms and usually need to be excavated from the text, which hinders the gathering of information. Standardized reporting of genetic anomalies as well as causative and risk factors of male infertility therefore presents an important issue. The aim of the study was to collect examples of diverse genomic loci published in association with human male infertility and to propose a standardized format for reporting genetic causes of male infertility. From the currently available data we have selected 75 studies reporting 186 representative genomic loci which have been proposed as genetic risk factors for male infertility. Based on collected and formatted data, we suggested a first step towards unification of reporting the genetics of male infertility in original and review studies. The proposed initiative consists of five relevant data types: 1) genetic locus, 2) race/ethnicity, number of participants (infertile/controls), 3) methodology, 4) phenotype (clinical data, disease ontology, and disease comorbidity), and 5) reference. The proposed form for standardized reporting presents a baseline for further optimization with additional genetic and clinical information. 
This data standardization initiative will enable faster multi-omics data integration, database development and sharing, establishing more targeted hypotheses, and facilitating biomarker discovery.
Maratos, A S; Gold, C; Wang, X; Crawford, M J
2008-01-23
Depression is a highly prevalent disorder associated with reduced social functioning, impaired quality of life, and increased mortality. Music therapy has been used in the treatment of a variety of mental disorders, but its impact on those with depression is unclear. To examine the efficacy of music therapy with standard care compared to standard care alone among people with depression and to compare the effects of music therapy for people with depression against other psychological or pharmacological therapies. CCDANCTR-Studies and CCDANCTR-References were searched on 7/11/2007, MEDLINE, PsycINFO, EMBASE, PsycLit, PSYindex, and other relevant sites were searched in November 2006. Reference lists of retrieved articles were hand searched, as well as specialist music and arts therapies journals. All randomised controlled trials comparing music therapy with standard care or other interventions for depression. Data on participants, interventions and outcomes were extracted and entered onto a database independently by two review authors. The methodological quality of each study was also assessed independently by two review authors. The primary outcome was reduction in symptoms of depression, based on a continuous scale. Five studies met the inclusion criteria of the review. Marked variations in the interventions offered and the populations studied meant that meta-analysis was not appropriate. Four of the five studies individually reported greater reduction in symptoms of depression among those randomised to music therapy than to those in standard care conditions. The fifth study, in which music therapy was used as an active control treatment, reported no significant change in mental state for music therapy compared with standard care. Dropout rates from music therapy conditions appeared to be low in all studies. Findings from individual randomised trials suggest that music therapy is accepted by people with depression and is associated with improvements in mood. 
However, the small number and low methodological quality of studies mean that it is not possible to be confident about its effectiveness. High quality trials evaluating the effects of music therapy on depression are required.
Intermountain Health Care, Inc.: Standard Costing System Methodology and Implementation
Rosqvist, W.V.
1984-01-01
Intermountain Health Care, Inc. (IHC), a not-for-profit hospital chain with 22 hospitals in the intermountain area and corporate offices located in Salt Lake City, Utah, has developed a Standard Costing System to provide hospital management with a tool for confronting increased cost pressures in the health care environment. This document serves as a description of the methodology used in developing the standard costing system and outlines the implementation process.
A standardized framing for reporting protein identifications in mzIdentML 1.2
Seymour, Sean L.; Farrah, Terry; Binz, Pierre-Alain; Chalkley, Robert J.; Cottrell, John S.; Searle, Brian C.; Tabb, David L.; Vizcaíno, Juan Antonio; Prieto, Gorka; Uszkoreit, Julian; Eisenacher, Martin; Martínez-Bartolomé, Salvador; Ghali, Fawaz; Jones, Andrew R.
2015-01-01
Inferring which protein species have been detected in bottom-up proteomics experiments has been a challenging problem for which solutions have been maturing over the past decade. While many inference approaches now function well in isolation, comparing and reconciling the results generated across different tools remains difficult. It presently stands as one of the greatest barriers in collaborative efforts such as the Human Proteome Project and public repositories like the PRoteomics IDEntifications (PRIDE) database. Here we present a framework for reporting protein identifications that seeks to improve capabilities for comparing results generated by different inference tools. This framework standardizes the terminology for describing protein identification results, associated with the HUPO-Proteomics Standards Initiative (PSI) mzIdentML standard, while still allowing for differing methodologies to reach that final state. It is proposed that developers of software for reporting identification results will adopt this terminology in their outputs. While the new terminology does not require any changes to the core mzIdentML model, it represents a significant change in practice, and, as such, the rules will be released via a new version of the mzIdentML specification (version 1.2) so that consumers of files are able to determine whether the new guidelines have been adopted by export software. PMID:25092112
NASA Technical Reports Server (NTRS)
Miller, James; Leggett, Jay; Kramer-White, Julie
2008-01-01
A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best-practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references. Neither does it intend to replace existing standards and policy.
Effectiveness of voice therapy in functional dysphonia: where are we now?
Bos-Clark, Marianne; Carding, Paul
2011-06-01
To review the recent literature since the 2009 Cochrane review regarding the effectiveness of voice therapy for patients with functional dysphonia. A range of articles report on the effects of voice therapy treatment for functional dysphonia, with a wide range of interventions described. Only one study is a randomized controlled trial. A number of excellent review articles have extended the knowledge base. In primary research, methodological issues persist: studies are small, and not adequately controlled. Studies show improved standards of outcome measurement and of description of the content of voice therapy. There is a continued need for larger, methodologically sound clinical effectiveness studies. Future studies need to be replicable and generalizable in order to inform and elucidate clinical practice.
Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.
Michaud, J-P; Schoenly, Kenneth G; Moreau, G
2012-01-01
Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.
Renewable Energy used in State Renewable Portfolio Standards Yielded
The analysis of state Renewable Portfolio Standards also shows national water withdrawals and water consumption by fossil-fuel methodologies, while recognizing that states could perform their own more-detailed assessments, according to NREL. Ranges are presented, as the models and methodologies used are sensitive to multiple parameters.
42 CFR 416.171 - Determination of payment rates for ASC services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Determination of payment rates for ASC services... Determination of payment rates for ASC services. (a) Standard methodology. The standard methodology for determining the national unadjusted payment rate for ASC services is to calculate the product of the...
45 CFR 153.510 - Risk corridors establishment and payment methodology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... methodology. 153.510 Section 153.510 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK ADJUSTMENT UNDER THE AFFORDABLE CARE ACT Health Insurance Issuer Standards Related to the Risk Corridors Program § 153...
45 CFR 153.510 - Risk corridors establishment and payment methodology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... methodology. 153.510 Section 153.510 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK ADJUSTMENT UNDER THE AFFORDABLE CARE ACT Health Insurance Issuer Standards Related to the Risk Corridors Program § 153...
Penn, Alexandra S.; Knight, Christopher J. K.; Lloyd, David J. B.; Avitabile, Daniele; Kok, Kasper; Schiller, Frank; Woodward, Amy; Druckman, Angela; Basson, Lauren
2013-01-01
Fuzzy Cognitive Mapping (FCM) is a widely used participatory modelling methodology in which stakeholders collaboratively develop a ‘cognitive map’ (a weighted, directed graph), representing the perceived causal structure of their system. This can be directly transformed by a workshop facilitator into simple mathematical models to be interrogated by participants by the end of the session. Such simple models provide thinking tools which can be used for discussion and exploration of complex issues, as well as sense checking the implications of suggested causal links. They increase stakeholder motivation and understanding of whole systems approaches, but cannot be separated from an intersubjective participatory context. Standard FCM methodologies make simplifying assumptions, which may strongly influence results, presenting particular challenges and opportunities. We report on a participatory process, involving local companies and organisations, focussing on the development of a bio-based economy in the Humber region. The initial cognitive map generated consisted of factors considered key for the development of the regional bio-based economy and their directional, weighted, causal interconnections. A verification and scenario generation procedure, to check the structure of the map and suggest modifications, was carried out in a second session. Participants agreed on updates to the original map and described two alternate potential causal structures. In a novel analysis, all map structures were tested using two standard methodologies usually used independently: linear and sigmoidal FCMs, demonstrating some significantly different results alongside some broad similarities. We suggest a development of FCM methodology involving a sensitivity analysis with different mappings and discuss the use of this technique in the context of our case study.
Using the results and analysis of our process, we discuss the limitations and benefits of the FCM methodology in this case and in general. We conclude by proposing an extended FCM methodology, including multiple functional mappings within one participant-constructed graph. PMID:24244303
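The difference between linear and sigmoidal FCM mappings discussed above can be sketched with a toy map. The weight matrix, concept names, and update rule (next activation = transfer function of the weighted incoming influences, with no self-memory term) are illustrative assumptions; FCM conventions vary and this is not the study's Humber-region map.

```python
import numpy as np

def fcm_step(state, W, transfer):
    """One FCM update: each concept's next activation is the transfer
    function applied to the weighted sum of its incoming influences."""
    return transfer(W.T @ state)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
linear = lambda x: np.clip(x, 0.0, 1.0)

# Hypothetical 3-concept chain: feedstock supply -> biorefinery activity -> jobs
W = np.array([[0.0, 0.7, 0.0],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
state = np.array([1.0, 0.5, 0.2])

s_sig, s_lin = state.copy(), state.copy()
for _ in range(30):  # iterate towards a fixed point
    s_sig = fcm_step(s_sig, W, sigmoid)
    s_lin = fcm_step(s_lin, W, linear)
print(np.round(s_sig, 2))  # sigmoid saturates near [0.5, 0.59, 0.62]
print(np.round(s_lin, 2))  # linear map drains to zero without external input
```

Even on this tiny graph the two mappings reach qualitatively different fixed points, which is the kind of divergence a sensitivity analysis over transfer functions is meant to expose.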
GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.
2009-01-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
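The propagation-of-error step for a measured ratio can be sketched as follows, assuming uncorrelated inputs and first-order (GUM-style) propagation with a coverage factor k = 2 for an approximate 95% bound; the counts and uncertainties are hypothetical, not values from the report.

```python
import math

def ratio_uncertainty(a, u_a, b, u_b):
    """First-order (GUM) uncertainty propagation for R = a / b,
    assuming uncorrelated inputs: (u_R/R)^2 = (u_a/a)^2 + (u_b/b)^2."""
    R = a / b
    u_R = R * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2)
    return R, u_R

# Hypothetical ion-count totals for a minor/major isotope ratio
R, u_R = ratio_uncertainty(7.2e4, 3.0e2, 1.0e6, 2.0e3)
print(R, u_R, 2.0 * u_R)  # ratio, standard uncertainty, ~95% bound (k = 2)
```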
Yule, Morag; Davison, Joyce; Brotto, Lori
2011-01-01
The International Index of Erectile Function is a well-worded and psychometrically valid self-report questionnaire widely used as the standard for the evaluation of male sexual function. However, some conceptual and statistical problems arise when using the measure with men who are not sexually active. These problems are illustrated using 2 empirical examples, and the authors provide recommended solutions to further strengthen the efficacy and validity of this measure.
Peters, Brenton C; Fitzgerald, Christopher J
2006-10-01
Laboratory and field data reported in the literature are confusing with regard to "adequate" protection thresholds for borate timber preservatives. The confusion is compounded by differences in termite species, timber species and test methodology. Laboratory data indicate a borate retention of 0.5% mass/mass (m/m) boric acid equivalent (BAE) would cause > 90% termite mortality and restrict mass loss in test specimens to < or = 5%. Field data generally suggest that borate retentions appreciably > 0.5% m/m BAE are required. We report two field experiments with varying amounts of untreated feeder material in which Coptotermes acinaciformis (Froggatt) (Isoptera: Rhinotermitidae) responses to borate-treated radiata (Monterey) pine, Pinus radiata D. Don, were measured. The apparently conflicting results between laboratory and field data are explained by the presence or absence of untreated feeder material in the test environment. In the absence of untreated feeder material, wood containing 0.5% BAE provided adequate protection from Coptotermes sp., whereas in the presence of untreated feeder material, increased retentions were required. Furthermore, the retentions required increased with increased amounts of susceptible material present. Some termites, Nasutitermes sp. and Mastotermes darwiniensis Froggatt, for example, are borate-tolerant and borate timber preservatives are not a viable management option with these species. The lack of uniform standards for termite test methodology and assessment criteria for efficacy across the world is recognized as a difficulty with research into the performance of timber preservatives with termites. The many variables in laboratory and field assays make "prescriptive" standards difficult to recommend. The use of "performance" standards to define efficacy criteria ("adequate" protection) is discussed.
Gilabert-Perramon, Antoni; Torrent-Farnell, Josep; Catalan, Arancha; Prat, Alba; Fontanet, Manel; Puig-Peiró, Ruth; Merino-Montero, Sandra; Khoury, Hanane; Goetghebeur, Mireille M; Badia, Xavier
2017-01-01
The aim of this study was to adapt and assess the value of a Multi-Criteria Decision Analysis (MCDA) framework (EVIDEM) for the evaluation of Orphan drugs in Catalonia (Catalan Health Service). The standard evaluation and decision-making procedures of CatSalut were compared with the EVIDEM methodology and contents. The EVIDEM framework was adapted to the Catalan context, focusing on the evaluation of Orphan drugs (PASFTAC program), during a Workshop with sixteen PASFTAC members. The criteria weighting was done using two different techniques (nonhierarchical and hierarchical). Reliability was assessed by re-test. The EVIDEM framework and methodology were found useful and feasible for Orphan drugs evaluation and decision making in Catalonia. All the criteria considered for the development of the CatSalut Technical Reports and decision making were considered in the framework. Nevertheless, the framework could improve the reporting of some of these criteria (i.e., "unmet needs" or "nonmedical costs"). Some Contextual criteria were removed (i.e., "Mandate and scope of healthcare system", "Environmental impact") or adapted ("population priorities and access") for CatSalut purposes. Independently of the weighting technique considered, the most important evaluation criteria identified for orphan drugs were: "disease severity", "unmet needs" and "comparative effectiveness", while the "size of the population" had the lowest relevance for decision making. Test-retest analysis showed weight consistency among techniques, supporting reliability over time. MCDA (EVIDEM framework) could be a useful tool to complement the current evaluation methods of CatSalut, contributing to standardization and pragmatism, providing a method to tackle ethical dilemmas and facilitating discussions related to decision making.
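The core aggregation step in MCDA frameworks of this kind is a weighted sum of normalized criterion scores. A minimal sketch follows; the criteria, the 0-4 scoring scale, and the weights are hypothetical illustrations, not the values elicited in the EVIDEM workshop.

```python
def mcda_score(scores, weights):
    """Weighted-sum MCDA value: criterion scores on a 0-4 scale are
    normalized to [0, 1] and combined with weights that sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s / 4.0 * w for s, w in zip(scores, weights))

# Hypothetical orphan-drug appraisal on four criteria:
# disease severity, unmet need, comparative effectiveness, population size
scores = [4, 3, 2, 1]
weights = [0.35, 0.30, 0.25, 0.10]
print(round(mcda_score(scores, weights), 3))  # -> 0.725 on a 0-1 value scale
```

Re-eliciting the weights (as in the test-retest step above) and recomputing the score is a direct check on weighting reliability.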
Pool, Jan J. M.; van Tulder, Maurits W.; Riphagen, Ingrid I.; de Vet, Henrica C. W.
2006-01-01
Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A comprehensive search was conducted in order to identify all possible studies fulfilling the inclusion criteria. A study was included if: (1) any provocative test of the neck for diagnosing cervical radiculopathy was identified; (2) any reference standard was used; (3) sensitivity and specificity were reported or could be (re-)calculated; and, (4) the publication was a full report. Two reviewers independently selected studies, and assessed methodological quality. Only six studies met the inclusion criteria, which evaluated five provocative tests. In general, Spurling’s test demonstrated low to moderate sensitivity and high specificity, as did traction/neck distraction, and Valsalva’s maneuver. The upper limb tension test (ULTT) demonstrated high sensitivity and low specificity, while the shoulder abduction test demonstrated low to moderate sensitivity and moderate to high specificity. Common methodological flaws included lack of an optimal reference standard, disease progression bias, spectrum bias, and review bias. Limitations include few primary studies, substantial heterogeneity, and numerous methodological flaws among the studies; therefore, a meta-analysis was not conducted. This review suggests that, when consistent with the history and other physical findings, a positive Spurling’s, traction/neck distraction, and Valsalva’s might be indicative of a cervical radiculopathy, while a negative ULTT might be used to rule it out. However, the lack of evidence precludes any firm conclusions regarding their diagnostic value, especially when used in primary care. More high quality studies are necessary in order to resolve this issue. PMID:17013656
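The sensitivity and specificity figures discussed above come from 2x2 tables of test results against a reference standard. A minimal computation, with hypothetical counts chosen to mimic the high-specificity, low-to-moderate-sensitivity pattern:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table versus a reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one provocative test against a reference standard
sens, spec = sens_spec(tp=18, fp=4, fn=12, tn=66)
print(sens, round(spec, 2))  # 0.6 0.94
```

A highly specific test with these numbers helps rule a radiculopathy in when positive, while a highly sensitive test (like the ULTT pattern above) helps rule it out when negative.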
de Groot, Mark C H; Schuerch, Markus; de Vries, Frank; Hesse, Ulrik; Oliva, Belén; Gil, Miguel; Huerta, Consuelo; Requena, Gema; de Abajo, Francisco; Afonso, Ana S; Souverein, Patrick C; Alvarez, Yolanda; Slattery, Jim; Rottenkolber, Marietta; Schmiedl, Sven; Van Dijk, Liset; Schlienger, Raymond G; Reynolds, Robert; Klungel, Olaf H
2014-05-01
The annual prevalence of antiepileptic drug (AED) prescribing reported in the literature differs considerably among European countries due to use of different type of data sources, time periods, population distribution, and methodologic differences. This study aimed to measure prevalence of AED prescribing across seven European routine health care databases in Spain, Denmark, The Netherlands, the United Kingdom, and Germany using a standardized methodology and to investigate sources of variation. Analyses on the annual prevalence of AEDs were stratified by sex, age, and AED. Overall prevalences were standardized to the European 2008 reference population. Prevalence of any AED varied from 88 per 10,000 persons (The Netherlands) to 144 per 10,000 in Spain and Denmark in 2001. In all databases, prevalence increased linearly: from 6% in Denmark to 15% in Spain each year since 2001. This increase could be attributed entirely to an increase in "new," recently marketed AEDs while prevalence of AEDs that have been available since the mid-1990s, hardly changed. AED use increased with age for both female and male patients up to the ages of 80 to 89 years old and tended to be somewhat higher in female than in male patients between the ages of 40 and 70. No differences between databases in the number of AEDs used simultaneously by a patient were found. We showed that during the study period of 2001-2009, AED prescribing increased in five European Union (EU) countries and that this increase was due entirely to the newer AEDs marketed since the 1990s. Using a standardized methodology, we showed consistent trends across databases and countries over time. Differences in age and sex distribution explained only part of the variation between countries. Therefore, remaining variation in AED use must originate from other differences in national health care systems. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
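Standardizing prevalences to a common reference population, as done above, weights each stratum-specific prevalence by the reference population's share in that stratum (direct standardization). A minimal sketch with hypothetical age bands and weights, not the actual European 2008 standard distribution:

```python
def standardized_prevalence(stratum_prev, ref_weights):
    """Direct standardization: weight stratum-specific prevalences by a
    reference population's share in each stratum."""
    assert abs(sum(ref_weights) - 1.0) < 1e-9
    return sum(p * w for p, w in zip(stratum_prev, ref_weights))

# Hypothetical AED prevalence per 10,000 in three age bands and
# hypothetical reference-population shares for those bands
prev = [60.0, 110.0, 190.0]    # younger / middle / older
weights = [0.35, 0.45, 0.20]
print(standardized_prevalence(prev, weights))  # 108.5 per 10,000
```

Applying the same reference weights to every database removes differences in age structure, which is how the study isolates the residual between-country variation.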
Using continuous process improvement methodology to standardize nursing handoff communication.
Klee, Kristi; Latta, Linda; Davis-Kirsch, Sallie; Pecchia, Maria
2012-04-01
The purpose of this article was to describe the use of continuous performance improvement (CPI) methodology to standardize nurse shift-to-shift handoff communication. The goals of the process were to standardize the content and process of shift handoff, improve patient safety, increase patient and family involvement in the handoff process, and decrease end-of-shift overtime. This article will describe process changes made over a 4-year period as result of application of the plan-do-check-act procedure, which is an integral part of the CPI methodology, and discuss further work needed to continue to refine this critical nursing care process. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Tutlys, Vidmantas; Spöttl, Georg
2017-01-01
Purpose: This paper aims to explore methodological and institutional challenges on application of the work-process analysis approach in the design and development of competence-based occupational standards for Lithuania. Design/methodology/approach: The theoretical analysis is based on the review of scientific literature and the analysis of…
Design of experiments enhanced statistical process control for wind tunnel check standard testing
NASA Astrophysics Data System (ADS)
Phillips, Ben D.
The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification, and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that, by utilizing design of experiments methodology in conjunction with current SPC practices, one can characterize uncertainties more efficiently and robustly and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients, and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research can be applicable to any wind tunnel check standard testing program.
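The core SPC device described here, tracking a coefficient over time against control limits, can be sketched as a Shewhart individuals chart: the limits are the series mean plus or minus three times a moving-range estimate of the process sigma, and points outside the limits signal a process shift. The coefficient series below is invented for illustration and is not from the check standard program:

```python
# Minimal Shewhart individuals chart for a tracked calibration coefficient:
# flag any observation outside mean +/- 3 * (moving-range estimate of sigma).
# The coefficient series below is invented for illustration.
from statistics import mean

def control_limits(series):
    """Estimate individuals-chart limits from the average moving range."""
    mr = [abs(b - a) for a, b in zip(series, series[1:])]
    sigma = mean(mr) / 1.128  # d2 constant for subgroups of size 2
    center = mean(series)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(series):
    """Return indices of observations outside the control limits."""
    lo, _, hi = control_limits(series)
    return [i for i, x in enumerate(series) if x < lo or x > hi]

coeffs = [1.02, 1.01, 1.03, 1.00, 1.02, 1.01, 1.50, 1.02]  # one shifted point
print(out_of_control(coeffs))  # → [6]
```

In practice the limits would be estimated from an in-control baseline period and then held fixed while new check standard runs are plotted against them.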
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
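The central computation in the abstract above, the probability that methodological variation pushes an observed zone diameter onto the wrong side of a clinical breakpoint, can be sketched with a two-component normal mixture of "true" diameters blurred by a normal repeatability error. All parameters below (component weights, means, SDs, and the repeatability SD) are invented for illustration and are not the paper's fitted values:

```python
# Sketch of categorization-error probability under a two-component normal
# mixture of "true" inhibition zone diameters (susceptible vs resistant),
# with an added normal methodological error. All parameters are illustrative.
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def error_rate(cbp, mix):
    """P(observed diameter lands on the wrong side of the breakpoint cbp).

    mix -- list of (weight, mean, true_sd, is_susceptible) components;
           larger diameters indicate susceptibility.
    """
    m_sd = 1.5  # assumed methodological (repeatability) standard deviation
    p = 0.0
    for w, mu, sd, susceptible in mix:
        obs_sd = sqrt(sd**2 + m_sd**2)          # observed = true + error
        below = norm_cdf(cbp, mu, obs_sd)       # P(observed <= cbp)
        # Susceptible isolates should read above the CBP; resistant below.
        p += w * (below if susceptible else 1.0 - below)
    return p

mixture = [(0.7, 24.0, 2.0, True),    # susceptible component
           (0.3, 12.0, 2.0, False)]   # resistant component
print(round(error_rate(19.0, mixture), 4))  # → 0.0167
```

A zone of methodological uncertainty would then correspond to a diameter interval around the breakpoint within which this misclassification probability is deemed too high to report a category.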
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precision. The standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for the description of normally and non-normally distributed data, respectively. If possible, it is better to report 95% confidence intervals (CIs) for statistics, at least for main outcome variables. P values should be presented, and interpreted with caution, when a hypothesis is tested. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
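The SD-versus-SEM distinction the article draws can be made concrete: SD describes the dispersion of the data, SEM (SD divided by the square root of n) describes the precision of the estimated mean, and a 95% CI is the mean plus or minus a t critical value times the SEM. A small sketch with invented sample values:

```python
# SD describes data dispersion; SEM = SD / sqrt(n) describes precision of
# the mean; a 95% CI is mean +/- t * SEM. Sample values are invented.
from math import sqrt
from statistics import mean, stdev

def describe(sample, t_crit):
    """Return (mean, sd, sem, ci_low, ci_high) for a sample.

    t_crit -- two-sided 95% t critical value for df = n - 1, supplied by
    the caller (2.262 for n = 10).
    """
    n = len(sample)
    m, sd = mean(sample), stdev(sample)
    sem = sd / sqrt(n)
    return m, sd, sem, m - t_crit * sem, m + t_crit * sem

data = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.4, 4.8, 5.2]
m, sd, sem, lo, hi = describe(data, 2.262)
print(round(sd, 3), round(sem, 3))  # → 0.384 0.122
```

The SEM here is roughly a third of the SD, which is exactly why quoting "mean ± SEM" makes data look far less variable than they are.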
Vector data structure conversion at the EROS Data Center
van Roessel, Jan W.; Doescher, S.W.
1986-01-01
With the increasing prevalence of GIS systems and the processing of spatial data, conversion of data from one system to another has become a more serious problem. This report describes the approach taken to arrive at a solution at the EROS Data Center. The report consists of a main section and a number of appendices. The methodology is described in the main section, while the appendices have system specific descriptions. The overall approach is based on a central conversion hub consisting of a relational database manager and associated tools, with a standard data structure for the transfer of spatial data. This approach is the best compromise between the two goals of reducing the overall interfacing effort and producing efficient system interfaces, while the tools can be used to arrive at a progression of interface sophistication ranging from toolbench to smooth flow. The appendices provide detailed information on a number of spatial data handling systems and data structures and existing interfaces as well as interfaces developed with the described methodology.
The use of Delphi and Nominal Group Technique in nursing education: A review.
Foth, Thomas; Efstathiou, Nikolaos; Vanderspank-Wright, Brandi; Ufholz, Lee-Anne; Dütthorn, Nadin; Zimansky, Manuel; Humphrey-Murto, Susan
2016-08-01
Consensus methods are used by healthcare professionals and educators within nursing education because of their presumed capacity to extract the profession's "collective knowledge," which is often considered tacit knowledge that is difficult to verbalize and formalize. Since their emergence, consensus methods have been criticized and their rigour has been questioned. Our study focuses on the use of consensus methods in nursing education and seeks to explore how extensively consensus methods are used, the types of consensus methods employed, the purpose of the research, and how standardized the application of the methods is. A systematic approach was employed to identify articles reporting the use of consensus methods in nursing education. The search strategy included keyword searches in five electronic databases [Medline (Ovid), Embase (Ovid), AMED (Ovid), ERIC (Ovid) and CINAHL (EBSCO)] for the period 2004-2014. We included articles published in English, French, German and Greek discussing the use of consensus methods in nursing education or in the context of identifying competencies. A standardized extraction form was developed using an iterative process with results from the search. General descriptors such as type of journal, nursing speciality, type of educational issue addressed, method used, and geographic scope were recorded. Features reflecting methodology, such as the number, selection and composition of panel participants, number of rounds, response rates, definition of consensus, and feedback, were recorded. 1230 articles were screened, resulting in 101 included studies. The Delphi technique was used in 88.2% of studies. Most were reported in nursing journals (63.4%). The most common purposes for using these methods were defining competencies, curriculum development and renewal, and assessment. Remarkably, both the standardization and the reporting of consensus methods were noted to be generally poor.
Areas where the methodology appeared weak included: preparation of the initial questionnaire; the selection and description of participants; the number of rounds and the number of participants remaining after each round; formal feedback of group ratings; definitions of consensus and a priori specification of the number of rounds; and modifications to the methodology. Interpreted in the context of the structural critiques of consensus methods, the findings of this study are concerning, as they lend support to these critiques. If consensus methods are to continue being used to inform best practices in nursing education, they must be rigorously designed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty
NASA Astrophysics Data System (ADS)
Brumble, K. C.
2012-12-01
What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. 
Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.
Standardized reporting of functioning information on ICF-based common metrics.
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold
2018-02-01
In clinical practice and research, a variety of clinical data collection tools are used to collect information on people's functioning for clinical practice, research, and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores from two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected with the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric.
Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians and researchers to continue using their tools while still being able to compare and aggregate the information within and across tools.
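The Rasch-based common metric described above rests on the idea that, once items from different tools are calibrated on one logit scale, each tool's expected raw score is a monotone function of the same underlying person ability, which is what makes a score-to-score transformation table possible. The toy sketch below uses a dichotomous Rasch model with invented item difficulties; the actual analysis used polytomous models and "super items" fitted in dedicated software:

```python
# Toy illustration of the Rasch idea behind a common metric: items from two
# tools calibrated on one logit scale give each tool an expected raw score
# that is a monotone function of the same person ability, yielding a
# crosswalk (transformation) table. Item difficulties are invented.
from math import exp

def p_endorse(theta, b):
    """Dichotomous Rasch probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score on a tool for a person of ability theta."""
    return sum(p_endorse(theta, b) for b in difficulties)

tool_a = [-1.0, 0.0, 1.0]        # hypothetical item difficulties (logits)
tool_b = [-1.5, -0.5, 0.5, 1.5]

# Crosswalk: pair the two tools' expected scores over a grid of abilities.
table = [(round(expected_score(t, tool_a), 2),
          round(expected_score(t, tool_b), 2))
         for t in (-2.0, 0.0, 2.0)]
print(table)
```

Reading across a row of the table gives the score on one tool that corresponds, via the shared logit metric, to a given score on the other.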
Identification of problems in search strategies in Cochrane Reviews.
Franco, Juan Víctor Ariel; Garrote, Virginia Laura; Escobar Liquitay, Camila Micaela; Vietto, Valeria
2018-05-15
Search strategies are essential for the adequate retrieval of studies in a systematic review (SR). Our objective was to identify problems in the design and reporting of search strategies in a sample of new Cochrane SRs first published in The Cochrane Library in 2015. We took a random sample of 70 new Cochrane SRs of interventions published in 2015. We evaluated their design and reporting of search strategies using the recommendations from the Cochrane Handbook for Systematic Reviews of Interventions, the Methodological Expectations of Cochrane Intervention Reviews, and the Peer Review of Electronic Search Strategies evidence-based guideline. Most reviews complied with the reporting standards in the Cochrane Handbook and the Methodological Expectations of Cochrane Intervention Reviews; however, 8 SRs did not search trials registers, 3 SRs included language restrictions, and there was inconsistent reporting of contact with individuals and searches of the gray literature. We found problems in the design of the search strategies in 73% of reviews (95% CI, 60-84%), and 53% of reviews (95% CI, 38-69%) contained problems that could limit both the sensitivity and precision of the search strategies. We found limitations in the design and reporting of search strategies. We consider that greater adherence to the guidelines could improve their quality. Copyright © 2018 John Wiley & Sons, Ltd.
Martínez-Espronceda, Miguel; Trigo, Jesús D; Led, Santiago; Barrón-González, H Gilberto; Redondo, Javier; Baquero, Alfonso; Serrano, Luis
2014-11-01
Experiences applying standards in personal health devices (PHDs) show an inherent trade-off between interoperability and costs (in terms of processing load and development time). Therefore, reducing hardware and software costs as well as time-to-market is crucial for standards adoption. The ISO/IEEE11073 PHD family of standards (also referred to as X73PHD) provides interoperable communication between PHDs and aggregators. Nevertheless, the responsibility of achieving inexpensive implementations of X73PHD in limited resource microcontrollers falls directly on the developer. Hence, the authors previously presented a methodology based on patterns to implement X73-compliant PHDs into devices with low-voltage low-power constraints. That version was based on multitasking, which required additional features and resources. This paper therefore presents an event-driven evolution of the patterns-based methodology for cost-effective development of standardized PHDs. The results of comparing between the two versions showed that the mean values of decrease in memory consumption and cycles of latency are 11.59% and 45.95%, respectively. In addition, several enhancements in terms of cost-effectiveness and development time can be derived from the new version of the methodology. Therefore, the new approach could help in producing cost-effective X73-compliant PHDs, which in turn could foster the adoption of standards. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
2016 Workplace and Gender Relations Survey of Active Duty Members: Statistical Methodology Report
2017-03-01
2016 Workplace and Gender Relations Survey of Active Duty Members: Statistical Methodology Report. Office of People Analytics (OPA), Defense Research, Surveys, and Statistics Center, 4800 Mark Center Drive…
Life Cycle Assessment and Carbon Footprint in the Wine Supply-Chain
NASA Astrophysics Data System (ADS)
Pattara, Claudio; Raggi, Andrea; Cichelli, Angelo
2012-06-01
Global warming represents one of the most critical internationally perceived environmental issues. The growing, and increasingly global, wine sector is one of the industries which is under increasing pressure to adopt approaches for environmental assessment and reporting of product-related greenhouse gas emissions. The International Organization for Vine and Wine has recently recognized the need to develop a standard and objective methodology and a related tool for calculating carbon footprint (CF). This study applied this tool to a wine previously analyzed using the life cycle assessment (LCA) methodology. The objective was to test the tool as regards both its potential and possible limitations, and thus to assess its suitability as a standard tool. Despite the tool's user-friendliness, a number of limitations were noted including the lack of accurate baseline data, a partial system boundary and the impossibility of dealing with the multi-functionality issue. When the CF and LCA results are compared in absolute terms, large discrepancies become obvious due to a number of different assumptions, as well as the modeling framework adopted. Nonetheless, in relative terms the results seem to be quite consistent. However, a critical limitation of the CF methodology was its focus on a single issue, which can lead to burden shifting. In conclusion, the study confirmed the need for both further improvement and adaptation to additional contexts and further studies to validate the use of this tool in different companies.
A systematic review of etiological and risk factors associated with bruxism.
Feu, Daniela; Catharino, Fernanda; Quintão, Catia Cardoso Abdo; Almeida, Marco Antonio de Oliveira
2013-06-01
The aim of the present work was to systematically review the literature and identify all peer-reviewed papers dealing with etiological and risk factors associated with bruxism. Data extraction was carried out according to the standard Cochrane systematic review methodology. The following databases were searched for randomized clinical trials (RCT), controlled clinical trials (CCT) or cohort studies: Cochrane Library, Medline, and Embase from 1980 to 2011. Unpublished literature was searched electronically using ClinicalTrials.gov. The primary outcome was bruxism etiology. To be eligible, studies had to use a standardized method to assess bruxism. Screening of eligible studies, assessment of methodological quality, and data extraction were conducted independently and in duplicate. Two reviewers inspected the references using the same search strategy and then applied the same inclusion criteria to the selected studies. They used criteria for methodological quality that were previously described in the Cochrane Handbook. Among the 1247 related articles that were critically assessed, one randomized clinical trial, one controlled clinical trial and seven longitudinal studies were included in the critical appraisal. Of these studies, five were selected, but reported different outcomes. There is convincing evidence that (sleep-related) bruxism can be induced by esophageal acidification and also that it has an important relationship with smoking in a dose-dependent manner. Disturbances in the central dopaminergic system are also implicated in the etiology of bruxism.
NASA Astrophysics Data System (ADS)
Santos, A.; Córdoba, E.; Ramírez, Z.; Sierra, C.; Ortega, Y.
2017-12-01
This project aims to determine the coefficient of dynamic friction between micrometric-size coatings of alumina and metallic materials (steel and aluminium). The methodology used to achieve the proposed objective consisted of four phases. In the first, a procedure was developed that allowed the coefficient of dynamic friction between two materials in contact to be determined using a pin-on-disk machine built to the specifications of the ASTM G99-05 standard (Standard Test Method for Wear Testing with a Pin-on-Disk Apparatus). Subsequently, the methodology was verified through tests on steel-steel and steel-aluminium pairs, since these values are widely reported in the literature. As a third step, deposits of alumina particles of micrometric size were made on a steel substrate through thermal spraying by flame. Finally, tests were carried out between pins of steel or aluminium and the alumina coating to determine the coefficients of dynamic friction between these surfaces. The results allowed verification that the developed methodology is valid for obtaining coefficients of dynamic friction between surfaces in contact, since the percentage errors were 3.5% and 2.1% for steel-steel and aluminium-steel, respectively; additionally, it was found that the coefficient of friction between steel and the alumina coating is 0.36, and between aluminium and the alumina coating is 0.25.
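The quantity measured in a pin-on-disk test is the ratio of the tangential (friction) force to the applied normal load, usually averaged over the steady-state portion of the run; the percent error against a literature reference value then follows directly. The force readings below are invented for illustration:

```python
# In a pin-on-disk test (ASTM G99), the dynamic friction coefficient is the
# ratio of tangential (friction) force to the applied normal load, averaged
# over the steady-state part of the run. Force values below are invented.
from statistics import mean

def friction_coefficient(tangential_forces, normal_load):
    """Mean dynamic friction coefficient over steady-state readings."""
    return mean(f / normal_load for f in tangential_forces)

def percent_error(measured, reference):
    """Percent error of a measured value against a reference value."""
    return abs(measured - reference) / reference * 100.0

normal_load = 10.0                        # N, applied on the pin
steady_state = [3.6, 3.5, 3.7, 3.6, 3.6]  # N, tangential force readings

mu = friction_coefficient(steady_state, normal_load)
print(round(mu, 3), round(percent_error(mu, 0.37), 1))  # → 0.36 2.7
```

This is the same style of comparison the project used to validate its rig: measure a well-characterized pair, then check the percent error against published values before trusting the coating measurements.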
Johnson, Suzanne B; Anderson, Page L
2016-11-01
This study examined the extent to which social anxiety treatment studies report the demographic characteristics of their participants. One hundred and fifty-six treatment studies published in English between 2001 and 2012 were collected. Each study was evaluated on whether or not it reported information on gender, age, race, relationship status, education, socioeconomic status, sexual orientation, and disability, and also the extent to which the racial composition of the sample was described. The majority of studies reported information on age (96.2%) and gender (94.2%), but the percentage of studies that reported anything else was much lower: race (50.0%), education (42.3%), relationship status (37.8%), socioeconomic status (5.1%), disability (2.6%), and sexual orientation (1.3%). One third (34.0%) of studies reported the race of all participants in their samples, while the remainder reported no information or information for only a subset of participants (e.g. "mostly white"). Participants of social anxiety disorder treatment studies generally are not described beyond their age and gender. Standards for reporting participant characteristics of treatment studies (similar to standards for reporting the methodology of treatment studies) could improve clinical researchers' and clinicians' ability to evaluate the external validity of this body of work.
Silvestri, Mark M.; Lewis, Jennifer M.; Borsari, Brian; Correia, Christopher J.
2014-01-01
Background Drinking games are prevalent among college students and are associated with increased alcohol use and negative alcohol-related consequences. There has been substantial growth in research on drinking games. However, the majority of published studies rely on retrospective self-reports of behavior and very few studies have made use of laboratory procedures to systematically observe drinking game behavior. Objectives The current paper draws on the authors’ experiences designing and implementing methods for the study of drinking games in the laboratory. Results The paper addressed the following key design features: (a) drinking game selection; (b) beverage selection; (c) standardizing game play; (d) selection of dependent and independent variables; and (e) creating a realistic drinking game environment. Conclusions The goal of this methodological review paper is to encourage other researchers to pursue laboratory research on drinking game behavior. Use of laboratory-based methodologies will facilitate a better understanding of the dynamics of risky drinking and inform prevention and intervention efforts. PMID:25192209
NASA Technical Reports Server (NTRS)
Allen, Cheryl L.
1991-01-01
Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than that which is currently achievable through the use of a standard data base system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.
Taichi exercise for self-rated sleep quality in older people: a systematic review and meta-analysis.
Du, Shizheng; Dong, Jianshu; Zhang, Heng; Jin, Shengji; Xu, Guihua; Liu, Zengxia; Chen, Lixia; Yin, Haiyan; Sun, Zhiling
2015-01-01
Self-reported sleep disorders are common in older adults, resulting in serious consequences. Non-pharmacological measures are important complementary interventions, among which Taichi exercise is a popular alternative. Some experiments have been performed; however, the effect of Taichi exercise in improving sleep quality in older people has yet to be validated by systematic review. Using systematic review and meta-analysis, this study aimed to examine the efficacy of Taichi exercise in promoting self-reported sleep quality in older adults. Systematic review and meta-analysis of randomized controlled studies. Four English databases (PubMed, Cochrane Library, Web of Science, and CINAHL) and four Chinese databases (CBMdisc, CNKI, VIP, and Wanfang) were searched through December 2013. Two reviewers independently selected eligible trials and conducted critical appraisal of methodological quality using the quality appraisal criteria for randomized controlled studies recommended by the Cochrane Handbook. A standardized data form was used to extract information. Meta-analysis was performed. Five randomized controlled studies met inclusion criteria. All suffered from some methodological flaws.
The results of this study showed that Taichi has a large beneficial effect on sleep quality in older people, as indicated by decreases in the global Pittsburgh Sleep Quality Index score [standardized mean difference (SMD) = -0.87, 95% confidence interval (CI) -1.25 to -0.49], as well as its sub-domains of subjective sleep quality [SMD = -0.83, 95% CI -1.08 to -0.57], sleep latency [SMD = -0.75, 95% CI -1.42 to -0.07], sleep duration [SMD = -0.55, 95% CI -0.90 to -0.21], habitual sleep efficiency [SMD = -0.49, 95% CI -0.74 to -0.23], sleep disturbance [SMD = -0.44, 95% CI -0.69 to -0.19], and daytime dysfunction [SMD = -0.34, 95% CI -0.59 to -0.09]. Daytime sleepiness improvement was also observed. Weak evidence shows that Taichi exercise has a beneficial effect in improving self-rated sleep quality for older adults, suggesting that Taichi could be an effective alternative and complementary approach to existing therapies for older people with sleep problems. More rigorous experimental studies are required. Copyright © 2014 Elsevier Ltd. All rights reserved.
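Pooled effects such as the global SMD reported above typically come from inverse-variance weighting of per-study standardized mean differences. A minimal fixed-effect sketch with invented study estimates (actual meta-analyses of this kind often use random-effects models, which add a between-study variance term to the weights):

```python
# Inverse-variance (fixed-effect) pooling of standardized mean differences,
# the kind of computation behind a pooled value like SMD = -0.87. The study
# estimates and standard errors below are invented.
from math import sqrt

def pooled_smd(estimates, std_errors):
    """Return (pooled SMD, (ci_low, ci_high)) under a fixed-effect model."""
    weights = [1.0 / se**2 for se in std_errors]       # inverse-variance
    total = sum(weights)
    pooled = sum(w * d for w, d in zip(weights, estimates)) / total
    se = 1.0 / sqrt(total)                             # SE of pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

smds = [-0.9, -0.7, -1.1]   # per-study standardized mean differences
ses = [0.25, 0.30, 0.40]    # per-study standard errors

d, (lo, hi) = pooled_smd(smds, ses)
print(round(d, 2), round(lo, 2), round(hi, 2))  # → -0.87 -1.21 -0.53
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is why the result sits closest to the first study here.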
Richman, Vincent; Richman, Alex
2012-06-01
Reports of research fraud have raised concerns about research integrity similar to concerns raised about financial accounting fraud. We propose a departure from self-regulation in that researchers adopt the financial accounting approach in establishing trust through an external validation process, in addition to the reporting entities and the regulatory agencies. The general conceptual framework for reviewing financial reports, utilizes external auditors who are certified and objective in using established standards to provide an opinion on the financial reports. These standards have become both broader in scope and increasingly specific as to what information is reported and the methodologies to be employed. We believe that the financial reporting overhaul encompassed in the US Sarbanes-Oxley Act of 2002, which aims at preventing accounting fraud, can be applied to scientific research in 4 ways. First, Sarbanes-Oxley requires corporations to have a complete set of internal accounting controls. Research organizations should use appropriate sampling techniques and audit research projects for conformity with the initial research protocols. Second, corporations are required to have the chief financial officer certify the accuracy of their financial statements. In a similar way, each research organization should have their vice-president of research (or equivalent) certify the research integrity of their research activities. In contrast, the primary responsibility of the existing Research Integrity Officers is to handle allegations of research misconduct, an after-the-fact activity. Third, generally accepted auditing standards specify the appropriate procedures for external review of a corporation's financial statements. For similar reasons, the research review process would also require corresponding external auditing standards. 
Finally, these new requirements would be implemented in stages, with the 14 largest research organizations, which receive 25% of total National Institutes of Health funding, adopting these research oversight enhancements first.
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
...The U.S. Small Business Administration (SBA) proposes to increase small business size standards for 37 industries in North American Industry Classification System (NAICS) Sector 52, Finance and Insurance, and for two industries in NAICS Sector 55, Management of Companies and Enterprises. In addition, SBA proposes to change the measure of size from average assets to average receipts for NAICS 522293, International Trade Financing. As part of its ongoing comprehensive size standards review, SBA evaluated all receipts based and assets based size standards in NAICS Sectors 52 and 55 to determine whether they should be retained or revised. This proposed rule is one of a series of proposed rules that will review size standards of industries grouped by NAICS Sector. SBA issued a White Paper entitled ``Size Standards Methodology'' and published a notice in the October 21, 2009 issue of the Federal Register to advise the public that the document is available on its Web site at www.sba.gov/size for public review and comments. The ``Size Standards Methodology'' White Paper explains how SBA establishes, reviews, and modifies its receipts based and employee based small business size standards. In this proposed rule, SBA has applied its methodology that pertains to establishing, reviewing, and modifying a receipts based size standard.
Zheng, Kai; Guo, Michael H; Hanauer, David A
2011-01-01
To identify ways for improving the consistency of design, conduct, and results reporting of time and motion (T&M) research in health informatics. We analyzed the commonalities and divergences of empirical studies published 1990-2010 that have applied the T&M approach to examine the impact of health IT implementation on clinical work processes and workflow. The analysis led to the development of a suggested 'checklist' intended to help future T&M research produce compatible and comparable results. We call this checklist STAMP (Suggested Time And Motion Procedures). STAMP outlines a minimum set of 29 data/information elements organized into eight key areas, plus three supplemental elements contained in an 'Ancillary Data' area, that researchers may consider collecting and reporting in their future T&M endeavors. T&M is generally regarded as the most reliable approach for assessing the impact of health IT implementation on clinical work. However, there exist considerable inconsistencies in how previous T&M studies were conducted and/or how their results were reported, many of which do not seem necessary yet can have a significant impact on the quality of research and the generalisability of results. Therefore, we believe it is time to call for standards that can help improve the consistency of T&M research in health informatics. This study represents an initial attempt. We developed a suggested checklist to improve the methodological and results reporting consistency of T&M research, so that meaningful insights can be derived from across-study synthesis and health informatics, as a field, will be able to accumulate knowledge from these studies.
Sattler, J M
1979-05-01
Hardy, Welcher, Mellitis, and Kagan altered standard WISC administrative and scoring procedures and, from the resulting higher subtest scores, concluded that IQs based on standardized tests are inappropriate measures for inner-city children. Careful examination of their study reveals many methodological inadequacies and problematic interpretations. Three of these are as follows: (a) failure to use any external criterion to evaluate the validity of their testing-of-limits procedures; (b) the possibility of examiner and investigator bias; and (c) lack of any comparison group that might demonstrate that poor children would be helped more than others by the probes recommended. Their report creates misleading doubts about existing intelligence tests and does a disservice to inner-city children who need the benefits of the judicious use of diagnostic procedures, which include standardized intelligence tests. Consequently, their assertion concerning the inappropriateness of standardized test results for inner-city children is not only premature and misleading, but it is unwarranted as well.
Musaiger, Abdulrahman O.; Al-Mannai, Mariam; Tayyem, Reema
2013-01-01
Objective: To find out the prevalence of overweight and obesity among female adolescents in Jordan. Methodology: A cross-sectional survey on females aged 15–18 in Amman, Jordan, was carried out using a multistage stratified random sampling method. The total sample size was 475 girls. Weight and height were measured and body mass index for age was used to determine overweight and obesity using the IOTF and WHO international standards. Results: The prevalence of overweight and obesity decreased with age. The highest prevalence of overweight and obesity was reported at age 15 (24.4% and 8.9%, respectively). The WHO standard showed a higher prevalence of obesity than the IOTF standard in all age groups. Conclusions: Overweight and obesity are serious public health problems among adolescents in Jordan, using both international standards. A program to combat obesity among schoolchildren, therefore, should be given a high priority in school health policy in Jordan. PMID:24353605
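The classification above rests on the body mass index formula, weight in kilograms divided by height in metres squared, compared against age- and sex-specific reference cutoffs. A minimal sketch of that comparison; the cutoff values below are illustrative placeholders, not the actual IOTF or WHO reference tables:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m**2

def classify(bmi_value, overweight_cut, obesity_cut):
    """Classify a BMI against age/sex-specific cutoffs.

    The cutoffs must come from a reference table (e.g. IOTF or WHO);
    the values passed below are illustrative only.
    """
    if bmi_value >= obesity_cut:
        return "obese"
    if bmi_value >= overweight_cut:
        return "overweight"
    return "normal/underweight"

# Illustrative 15-year-old girl with placeholder cutoffs
b = bmi(68.0, 1.60)  # = 26.5625
print(classify(b, 23.9, 29.1))
```

Because the WHO and IOTF references use different cutoff curves, the same measured BMI can be classified differently under the two standards, which is why the abstract reports both.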
ERIC Educational Resources Information Center
Pedersen, Mitra
2013-01-01
This study investigated the rate of success for IT projects using agile and standard project management methodologies. Any successful project requires use of project methodology. Specifically, large projects require formal project management methodologies or models, which establish a blueprint of processes and project planning activities. This…
Setting priorities for space research: An experiment in methodology
NASA Technical Reports Server (NTRS)
1995-01-01
In 1989, the Space Studies Board created the Task Group on Priorities in Space Research to determine whether scientists should take a role in recommending priorities for long-term space research initiatives and, if so, to analyze the priority-setting problem in this context and develop a method by which such priorities could be established. After answering the first question in the affirmative in a previous report, the task group set out to accomplish the second task. The basic assumption in developing a priority-setting process is that a reasoned and structured approach for ordering competing initiatives will yield better results than other ways of proceeding. The task group proceeded from the principle that the central criterion for evaluating a research initiative must be its scientific merit -- the value of the initiative to the proposing discipline and to science generally. The group developed a two-stage methodology for priority setting and constructed a procedure and format to support the methodology. The first of two instruments developed was a standard format for structuring proposals for space research initiatives. The second instrument was a formal, semiquantitative appraisal procedure for evaluating competing proposals. This report makes available complete templates for the methodology, including the advocacy statement and evaluation forms, as well as an 11-step schema for a priority-setting process. From the beginning of its work, the task group was mindful that the issue of priority setting increasingly pervades all of federally supported science and that its work would have implications extending beyond space research. Thus, although the present report makes no recommendations for action by NASA or other government agencies, it provides the results of the task group's work for the use of others who may study priority-setting procedures or take up the challenge of implementing them in the future.
Clifford, Anton; McCalman, Janya; Bainbridge, Roxanne; Tsey, Komla
2015-04-01
This article describes the characteristics and reviews the methodological quality of interventions designed to improve cultural competency in health care for Indigenous peoples of Australia, New Zealand, Canada and the USA. A total of 17 electronic databases and 13 websites were searched for the period 2002-13. Studies were included if they evaluated an intervention strategy designed to improve cultural competency in health care for Indigenous peoples of Australia, New Zealand, the USA or Canada. Information on the characteristics and methodological quality of included studies was extracted using standardized assessment tools. Sixteen published evaluations of interventions to improve cultural competency in health care for Indigenous peoples were identified: 11 for Indigenous peoples of the USA and 5 for Indigenous Australians. The main types of intervention strategies were education and training of the health workforce, culturally specific health programs and recruitment of an Indigenous health workforce. The main positive outcomes reported were improvements in health professionals' confidence, and patients' satisfaction with and access to health care. The methodological quality of evaluations and the reporting of key methodological criteria were variable. Particular problems included weak study designs, low or no reporting of consent rates, confounding and non-validated measurement instruments. There is a lack of evidence from rigorous evaluations on the effectiveness of interventions for improving cultural competency in health care for Indigenous peoples. Future evaluations should employ more rigorous study designs and extend their measurement of outcomes beyond those relating to health professionals, to those relating to the health of Indigenous peoples. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
2010-03-25
ERIC Educational Resources Information Center
Tannenbaum, Richard J.; Wylie, E. Caroline
2008-01-01
The Common European Framework of Reference (CEFR) describes language proficiency in reading, writing, speaking, and listening on a 6-level scale. In this study, English-language experts from across Europe linked CEFR levels to scores on three tests: the TOEFL® iBT test, the TOEIC® assessment, and the TOEIC "Bridge"™ test.…
Drug companies' evidence to justify advertising.
Wade, V A; Mansfield, P R; McDonald, P J
1989-11-25
Ten international pharmaceutical companies were asked by letter to supply their best evidence in support of marketing claims for seventeen products. Fifteen replies were received. Seven replies cited a total of 67 references: 31 contained relevant original data and only 13 were controlled trials, all of which had serious methodological flaws. There were four reports of changes in advertising claims and one company ceased marketing nikethamide in the third world. Standards of evidence used to justify advertising claims are inadequate.
Pla-Tolós, J; Serra-Mora, P; Hakobyan, L; Molins-Legua, C; Moliner-Martinez, Y; Campins-Falcó, P
2016-11-01
In this work, in-tube solid phase microextraction (in-tube SPME) coupled to capillary LC (CapLC) with diode array detection has been reported for the on-line extraction and enrichment of booster biocides (irgarol-1051 and diuron) included in the Water Framework Directive 2013/39/EU (WFD). The analytical performance has been successfully demonstrated. Furthermore, in the present work, the environmental friendliness of the procedure has been quantified by means of the implementation of the carbon footprint calculation of the analytical procedure and the comparison with other methodologies previously reported. Under the optimum conditions, the method presents good linearity over the range assayed, 0.05-10 μg/L for irgarol-1051 and 0.7-10 μg/L for diuron. The LODs were 0.015 μg/L and 0.2 μg/L for irgarol-1051 and diuron, respectively. Precision was also satisfactory (relative standard deviation, RSD<3.5%). The proposed methodology was applied to monitor water samples, taking into account the EQS standards for these compounds. The carbon footprint values for the proposed procedure consolidate the operational efficiency (analytical and environmental performance) of in-tube SPME-CapLC-DAD, in general, and in particular for determining irgarol-1051 and diuron in water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Quantifying expert diagnosis variability when grading tumor-infiltrating lymphocytes
NASA Astrophysics Data System (ADS)
Toro, Paula; Corredor, Germán.; Wang, Xiangxue; Arias, Viviana; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo
2017-11-01
Tumor-infiltrating lymphocytes (TILs) have proved to play an important role in predicting prognosis, survival, and response to treatment in patients with a variety of solid tumors. Unfortunately, there is currently no standardized methodology for quantifying the grade of infiltration. The aim of this work is to evaluate variability among the reports of TILs given by a group of pathologists who examined a set of digitized Non-Small Cell Lung Cancer samples (n=60). Twenty-eight pathologists each graded a different number of the histopathological images. The agreement among pathologists was evaluated by computing the Kappa index coefficient and the standard deviation of their estimations. Furthermore, TILs reports were correlated with patients' prognosis and survival using the Pearson's correlation coefficient. General results show that the agreement among experts grading TILs in the dataset is low, since Kappa values remain below 0.4 and the standard deviation values demonstrate that there was full consensus on none of the images. Finally, the correlation coefficient for each pathologist also reveals a low association between the pathologists' predictions and the prognosis/survival data. Results suggest the need to define standardized, objective, and effective strategies to evaluate TILs, so that they could be used as a biomarker in the daily routine.
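Agreement among multiple raters assigning categorical grades, as in the Kappa analysis described above, is commonly quantified with Fleiss' kappa. A sketch under the simplifying assumption of a fixed number of raters per image (unlike the study, where pathologists graded varying numbers of images); the grading table is hypothetical:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[i][j]: the number of raters
    who assigned subject i to category j. Assumes the same number of
    raters for every subject (a simplification)."""
    N = len(ratings)                         # subjects (images)
    n = sum(ratings[0])                      # raters per subject
    k = len(ratings[0])                      # categories
    # Overall proportion of assignments to each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Observed per-subject agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N
    # Chance agreement
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical TIL grades (low/moderate/high) from 6 raters on 4 images
table = [
    [4, 1, 1],
    [2, 3, 1],
    [1, 2, 3],
    [5, 1, 0],
]
print(round(fleiss_kappa(table), 3))
```

A value near zero, as in this invented table, indicates agreement barely better than chance; the study's reported values below 0.4 fall in the conventionally "fair to moderate" range.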
Katz, Matthew HG; Marsh, Robert; Herman, Joseph M.; Shi, Qian; Collison, Eric; Venook, Alan; Kindler, Hedy; Alberts, Steven; Philip, Philip; Lowy, Andrew M.; Pisters, Peter; Posner, Mitchell; Berlin, Jordan; Ahmad, Syed A.
2017-01-01
Methodological limitations of prior studies have prevented progress in the treatment of patients with borderline resectable pancreatic adenocarcinoma (PDAC). Shortcomings have included the absence of staging and treatment standards and pre-existing biases with regard to the use of neoadjuvant therapy and the role of vascular resection at pancreatectomy. In this manuscript, we will review limitations of studies of borderline resectable PDAC reported to date, highlight important controversies related to this disease stage, emphasize the research infrastructure necessary for its future study, and present a recently approved Intergroup pilot study (Alliance A0201101) that will provide a foundation upon which subsequent well-designed clinical trials can be performed. PMID:23435609
Persson, M; Sandy, J R; Waylen, A; Wills, A K; Al-Ghatam, R; Ireland, A J; Hall, A J; Hollingworth, W; Jones, T; Peters, T J; Preston, R; Sell, D; Smallridge, J; Worthington, H; Ness, A R
2015-11-01
We describe the methodology for a major study investigating the impact of reconfigured cleft care in the United Kingdom (UK) 15 years after an initial survey, detailed in the Clinical Standards Advisory Group (CSAG) report in 1998, had informed government recommendations on centralization. This is a UK multicentre cross-sectional study of 5-year-olds born with non-syndromic unilateral cleft lip and palate. Children born between 1 April 2005 and 31 March 2007 were seen in cleft centre audit clinics. Consent was obtained for the collection of routine clinical measures (speech recordings, hearing, photographs, models, oral health, psychosocial factors) and anthropometric measures (height, weight, head circumference). The methodology for each clinical measure followed those of the earlier survey as closely as possible. We identified 359 eligible children and recruited 268 (74.7%) to the study. Eleven separate records for each child were collected at the audit clinics. In total, 2666 (90.4%) were collected from a potential 2948 records. The response rates for the self-reported questionnaires, completed at home, were 52.6% for the Health and Lifestyle Questionnaire and 52.2% for the Satisfaction with Service Questionnaire. Response rates and measures were similar to those achieved in the previous survey. There are practical, administrative and methodological challenges in repeating cross-sectional surveys 15 years apart and producing comparable data. © 2015 The Authors. Orthodontics & Craniofacial Research Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Cheung, Alan C. K.; Slavin, Robert E.
2013-01-01
The present review examines research on the effects of educational technology applications on mathematics achievement in K-12 classrooms. Unlike previous reviews, this review applies consistent inclusion standards to focus on studies that met high methodological standards. In addition, methodological and substantive features of the studies are…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... ``Conductivity Using Field Data: An Adaptation of the U.S. EPA's Standard Methodology for Deriving Water Quality Criteria.'' DATES: Nominations ... should be directed to Dr. Michael Slimak, ORD's Associate Director of ...
A normative price for a manufactured product: The SAMICS methodology. Volume 2: Analysis
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1979-01-01
The Solar Array Manufacturing Industry Costing Standards provide standard formats, data, assumptions, and procedures for determining the price a hypothetical solar array manufacturer would have to be able to obtain in the market to realize a specified after-tax rate of return on equity for a specified level of production. The methodology and its theoretical background are presented. The model is sufficiently general to be used in any production-line manufacturing environment. Implementation of this methodology by the Solar Array Manufacturing Industry Simulation computer program is discussed.
2018-04-30
2017 Workplace and Gender Relations Survey of Reserve Component Members: Statistical Methodology Report. Office of People Analytics (OPA), 4800 Mark Center Drive, Suite ... Prepared by the Office of People Analytics' Center for Health and Resilience (OPA[H&R]).
Neural network approach in multichannel auditory event-related potential analysis.
Wu, F Y; Slater, J D; Ramsay, R E
1994-04-01
Even though there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, it is strongly indicated through statistical analysis that such criteria exist for classifying control subjects and patients with diseases resulting in neuropsychological impairment such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods in classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology to enhance automation of the classification rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria of P300 ERP assessment and facilitate the computer-aided analysis on neuropsychological functions.
A systematic review and metaanalysis of energy intake and weight gain in pregnancy.
Jebeile, Hiba; Mijatovic, Jovana; Louie, Jimmy Chun Yu; Prvan, Tania; Brand-Miller, Jennie C
2016-04-01
Gestational weight gain within the recommended range produces optimal pregnancy outcomes, yet many women exceed the guidelines. Official recommendations to increase energy intake by ∼1000 kJ/day in pregnancy may be excessive. To determine by metaanalysis of relevant studies whether greater increments in energy intake from early to late pregnancy corresponded to greater or excessive gestational weight gain. We systematically searched electronic databases for observational and intervention studies published from 1990 to the present. The databases included Ovid Medline, Cochrane Library, Excerpta Medica DataBASE (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Science Direct. In addition, we hand-searched reference lists of all identified articles. Studies were included if they reported gestational weight gain and energy intake in early and late gestation in women of any age with a singleton pregnancy. The search also encompassed journals from both developed and developing countries. Studies were individually assessed for quality based on the Quality Criteria Checklist obtained from the Evidence Analysis Manual: Steps in the Academy Evidence Analysis Process. Publication bias was plotted by the use of a funnel plot with standardized mean difference against standard error. Identified studies were meta-analyzed and stratified by body mass index, study design, dietary methodology, and country status (developed/developing) by the use of a random-effects model. Of 2487 articles screened, 18 studies met inclusion criteria. On average, women gained 12.0 (2.8) kg (standardized mean difference = 1.306, P < .0005) yet reported only a small increment in energy intake that did not reach statistical significance (∼475 kJ/day, standardized mean difference = 0.266, P = .016).
Irrespective of baseline body mass index, study design, dietary methodology, or country status, changes in energy intake were not significantly correlated with the amount of gestational weight gain (r = 0.321, P = .11). Despite rapid physiologic weight gain, women report little or no change in energy intake during pregnancy. Current recommendations to increase energy intake by ∼1000 kJ/day may, therefore, encourage excessive weight gain and adverse pregnancy outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
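Random-effects meta-analyses like this one commonly pool study-level effects with the DerSimonian-Laird estimator: fixed-effect weights are first used to estimate between-study heterogeneity (τ²), which is then folded back into the weights. A minimal sketch with hypothetical study effects, not the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with 95% CI.

    effects: per-study standardized mean differences
    variances: their within-study variances
    """
    w = [1 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the heterogeneity variance tau^2
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical study-level SMDs and variances (illustrative only)
smd, (lo, hi) = dersimonian_laird([0.10, 0.35, 0.22], [0.02, 0.03, 0.025])
print(round(smd, 3))
```

When the heterogeneity estimate is zero, as with these invented inputs, the random-effects result collapses to the fixed-effect inverse-variance average.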
Translating standards into practice - one Semantic Web API for Gene Expression.
Deus, Helena F; Prud'hommeaux, Eric; Miller, Michael; Zhao, Jun; Malone, James; Adamusiak, Tomasz; McCusker, Jim; Das, Sudeshna; Rocca Serra, Philippe; Fox, Ronan; Marshall, M Scott
2012-08-01
Sharing and describing experimental results unambiguously with sufficient detail to enable replication of results is a fundamental tenet of scientific research. In today's cluttered world of "-omics" sciences, data standards and standardized use of terminologies and ontologies for biomedical informatics play an important role in reporting high-throughput experiment results in formats that can be interpreted by both researchers and analytical tools. Increasing adoption of Semantic Web and Linked Data technologies for the integration of heterogeneous and distributed health care and life sciences (HCLS) datasets has made the reuse of standards even more pressing; dynamic semantic query federation can be used for integrative bioinformatics when ontologies and identifiers are reused across data instances. We present here a methodology to integrate the results and experimental context of three different representations of microarray-based transcriptomic experiments: the Gene Expression Atlas, the W3C BioRDF task force approach to reporting Provenance of Microarray Experiments, and the HSCI blood genomics project. Our approach does not attempt to improve the expressivity of existing standards for genomics but, instead, to enable integration of existing datasets published from microarray-based transcriptomic experiments. SPARQL Construct is used to create a posteriori mappings of concepts and properties and linking rules that match entities based on query constraints. We discuss how our integrative approach can encourage reuse of the Experimental Factor Ontology (EFO) and the Ontology for Biomedical Investigations (OBI) for the reporting of experimental context and results of gene expression studies. Copyright © 2012 Elsevier Inc. All rights reserved.
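The a posteriori mapping step described above can be pictured as a CONSTRUCT-style rewrite: triples expressed in a dataset-local vocabulary are re-emitted using shared ontology terms so that records from different sources line up. The sketch below mimics that idea in plain Python over (subject, predicate, object) tuples; all CURIEs are invented placeholders, not the paper's actual EFO/OBI mappings:

```python
# Mapping from dataset-local predicates to shared ontology terms.
# The CURIEs on both sides are illustrative placeholders.
MAPPING = {
    "local:hasSampleType": "efo:placeholderSampleType",
    "local:measuredGene":  "obi:placeholderAssayTarget",
}

def construct(triples, mapping):
    """Re-emit each triple, swapping mapped predicates for shared terms
    and leaving unmapped predicates unchanged."""
    return [(s, mapping.get(p, p), o) for s, p, o in triples]

data = [
    ("exp:1", "local:hasSampleType", "blood"),
    ("exp:1", "local:measuredGene", "TP53"),
    ("exp:1", "dc:title", "hypothetical experiment"),
]
for triple in construct(data, MAPPING):
    print(triple)
```

In SPARQL itself this corresponds to a CONSTRUCT template over a WHERE pattern; once two datasets share predicates, a single federated query can span both.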
Reference values of elements in human hair: a systematic review.
Mikulewicz, Marcin; Chojnacka, Katarzyna; Gedrange, Thomas; Górecki, Henryk
2013-11-01
There has been no systematic review of reference values of elements in human hair that takes the methodological approach into consideration. The absence of universally accepted and implemented reference ranges means that hair mineral analysis has not yet become a reliable and useful method for assessing the nutritional status and exposure of individuals. Systematic review of reference values of elements in human hair. PubMed, ISI Web of Knowledge, Scopus. Humans, hair mineral analysis, elements or minerals, reference values, original studies. The number of studies screened and assessed for eligibility was 52. Eventually, 5 papers were included in the review. The studies report reference ranges for the content of elements in hair: macroelements, microelements, toxic elements and other elements. Reference ranges were elaborated for different populations in the years 2000-2012. The analytical methodology differed, in particular in sample preparation, digestion and analysis (ICP-AES, ICP-MS). Consequently, the levels of hair minerals reported as reference values varied. It is necessary to elaborate standard procedures, further validate hair mineral analysis and deliver detailed methodology. Only then would it be possible to provide meaningful reference ranges and take advantage of the potential that lies in hair mineral analysis as a medical diagnostic technique. Copyright © 2013 Elsevier B.V. All rights reserved.
The STROBE statement and neuropsychology: lighting the way toward evidence-based practice.
Loring, David W; Bowden, Stephen C
2014-01-01
Reporting appropriate research detail across clinical disciplines is often inconsistent or incomplete. Insufficient report detail reduces confidence in findings, makes study replication more difficult, and decreases the precision of data available for critical review including meta-analysis. In response to these concerns, cooperative attempts across multiple specialties have developed explicit research reporting standards to guide publication detail. These recommendations have been widely adopted by high impact medical journals, but have not yet been widely embraced by neuropsychology. The STROBE Statement (STrengthening the Reporting of Observational studies in Epidemiology) is particularly relevant to neuropsychology since clinical research is often based on non-funded studies of patient samples. In this paper we describe the STROBE Statement and demonstrate how STROBE criteria, applied to reporting of neuropsychological findings, will maintain neuropsychology's position as a leader in quantifying brain-behavior relationships. We also provide specific recommendations for data reporting and disclosure of perceived conflicts of interest that will further enhance reporting transparency for possible perceived sources of bias. In an era in which evidence-based practice assumes an increasingly prominent role, improved reporting standards will promote better patient care, assist in developing quality practice guidelines, and ensure that neuropsychology remains a vigorous discipline in the clinical neurosciences that consciously aspires to high methodological rigor.
Method and reporting quality in health professions education research: a systematic review.
Cook, David A; Levinson, Anthony J; Garside, Sarah
2011-03-01
Studies evaluating reporting quality in health professions education (HPE) research have demonstrated deficiencies, but none have used comprehensive reporting standards. Additionally, the relationship between study methods and effect size (ES) in HPE research is unknown. This review aimed to evaluate, in a sample of experimental studies of Internet-based instruction, the quality of reporting, the relationship between reporting and methodological quality, and associations between ES and study methods. We conducted a systematic search of databases including MEDLINE, Scopus, CINAHL, EMBASE and ERIC, for articles published during 1990-2008. Studies (in any language) quantifying the effect of Internet-based instruction in HPE compared with no intervention or other instruction were included. Working independently and in duplicate, we coded reporting quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and coded study methods using a modified Newcastle-Ottawa Scale (m-NOS), the Medical Education Research Study Quality Instrument (MERSQI), and the Best Evidence in Medical Education (BEME) global scale. For reporting quality, articles scored a mean±standard deviation (SD) of 51±25% of STROBE elements for the Introduction, 58±20% for the Methods, 50±18% for the Results and 41±26% for the Discussion sections. We found positive associations (all p<0.0001) between reporting quality and MERSQI (ρ=0.64), m-NOS (ρ=0.57) and BEME (ρ=0.58) scores. We explored associations between study methods and knowledge ES by subtracting each study's ES from the pooled ES for studies using that method and comparing these differences between subgroups. Effect sizes in single-group pretest/post-test studies differed from the pooled estimate more than ESs in two-group studies (p=0.013). 
No difference was found between other study methods (yes/no: representative sample, comparison group from same community, randomised, allocation concealed, participants blinded, assessor blinded, objective assessment, high follow-up). Information is missing from all sections of reports of HPE experiments. Single-group pre-/post-test studies may overestimate ES compared with two-group designs. Other methodological variations did not bias study results in this sample. © Blackwell Publishing Ltd 2011.
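The positive associations reported above (ρ = 0.64, 0.57, 0.58) are Spearman rank correlations between reporting-quality (STROBE) scores and methodological-quality scores. A minimal sketch of how such a rank correlation is computed, using invented scores for illustration (the data and variable names are assumptions, not the study's data):

```python
def average_ranks(values):
    """Rank values from 1..n, assigning tied entries their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Illustrative only: STROBE reporting scores vs. MERSQI scores for five studies
strobe = [45, 60, 52, 71, 38]
mersqi = [9.5, 12.0, 10.0, 13.5, 8.0]
rho = spearman_rho(strobe, mersqi)
```

Because the correlation is computed on ranks rather than raw scores, it is robust to the different scales of the STROBE, MERSQI, m-NOS and BEME instruments.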
Breuer, Jan-Philipp; Seeling, Matthes; Barz, Michael; Baldini, Timo; Scholtz, Kathrin; Spies, Claudia
2012-01-01
In order to be comprehensible and comparable, scientific data should be reported according to a certain standard. One example is the 'STAndards for Reporting of Diagnostic Accuracy (STARD) Statement', a 25-item checklist for the appropriate conduct and reporting of diagnostic studies. Usually such scientific standards are published in English. The International Society for Pharmacoeconomics and Outcome Research (ISPOR) has developed guidelines for the translation and cultural adaptation of written medical instruments. The aim was to apply these ISPOR criteria to the German translation of the STARD Statement in order to allow for authorisation to be conferred by the original authors. In cooperation with the original authors the STARD Statement was translated according to the ISPOR steps: (1) Preparation, (2) Forward Translation, (3) Reconciliation, (4) Back Translation, (5) Back Translation Review, (6) Harmonisation, (7) Cognitive Debriefing, which evaluated comprehensibility and linguistic style with marks from 1 (very good) to 6 (insufficient), and (8) Review of Cognitive Debriefing Results and Finalisation. The ISPOR criteria applied reasonably to the translation process, which required the work of four scientists and one professional translator and 177 accumulated working hours. The cognitive debriefing resulted in average grades of 1.62±0.33 and 1.72±0.39 for comprehensibility and linguistic style, respectively. Finally, the German STARD version was authorised by the original authors. The ISPOR guidelines seem to be a suitable means of facilitating the structured adaptation of defined reporting criteria, such as the STARD Statement, to other languages. Copyright © 2012. Published by Elsevier GmbH.
Lu, Liming; Luo, Gaoquan; Xiao, Fang
2013-08-01
This study aims to assess the quality of reports and their correlates in randomized controlled trials (RCTs) of immunotherapy for Guillain-Barré syndrome (GBS). A search was performed in multiple databases for reports published between April 1992 and November 2012. Reporting quality was assessed using items of the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement. An overall quality score (OQS) and a key methodological index score (MIS) were calculated for each trial. Factors associated with OQS and MIS were then identified. A total of 19 RCTs were included after full-text review. The median OQS was 7.0, with a range of 1-10. However, reporting of the 'flow chart' and 'ancillary analyses' items was poor, with a positive rate of less than 40%. The median MIS was 0, with a range of 0-2. Twelve trials (63.2%) did not report any of the three key methodological items. Notably, the mean OQS was approximately 2.73 points higher for manuscripts published in the New England Journal of Medicine, The Lancet, Pediatrics and Neurology (95% CI: 0.35-5.12; p < 0.05). Multivariate linear regression and the Poisson regression model could not be presented because the number of included trials was too small. The reporting quality of RCTs on immunotherapy for GBS was poor, indicating that reporting in these trials needs substantial improvement to meet the CONSORT Statement guideline.
Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.
2009-01-01
In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212
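In a round-robin of this kind, each laboratory's repeated measurements are typically reduced to a lab mean, from which a consensus value plus within- and between-laboratory spread are derived. A minimal sketch under simple assumptions (unweighted lab means; the Seebeck values in µV/K are fabricated for illustration and are not the NIST round-robin data):

```python
import statistics

# Fabricated Seebeck-coefficient readings (µV/K) from three hypothetical labs
lab_readings = {
    "lab_A": [-238.1, -237.6, -238.4],
    "lab_B": [-240.2, -239.8, -240.5],
    "lab_C": [-236.9, -237.3, -237.1],
}

lab_means = {lab: statistics.mean(v) for lab, v in lab_readings.items()}
consensus = statistics.mean(lab_means.values())         # unweighted grand mean
between_lab_sd = statistics.stdev(lab_means.values())   # spread across labs
within_lab_sd = statistics.mean(
    statistics.stdev(v) for v in lab_readings.values()  # average repeatability
)
```

In a real interlaboratory study the between-lab component usually dominates, which is why a consensus value alone, without its reproducibility spread, understates the measurement uncertainty.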
Mokkink, Lidwine B; Prinsen, Cecilia A C; Bouter, Lex M; Vet, Henrica C W de; Terwee, Caroline B
2016-01-19
COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) is an initiative of an international multidisciplinary team of researchers who aim to improve the selection of outcome measurement instruments both in research and in clinical practice by developing tools for selecting the most appropriate available instrument. In this paper these tools are described, i.e. the COSMIN taxonomy and definition of measurement properties; the COSMIN checklist to evaluate the methodological quality of studies on measurement properties; a search filter for finding studies on measurement properties; a protocol for systematic reviews of outcome measurement instruments; a database of systematic reviews of outcome measurement instruments; and a guideline for selecting outcome measurement instruments for Core Outcome Sets in clinical trials. Currently, we are updating the COSMIN checklist, particularly the standards for content validity studies. Also new standards for studies using Item Response Theory methods will be developed. Additionally, in the future we want to develop standards for studies on the quality of non-patient reported outcome measures, such as clinician-reported outcomes and performance-based outcomes. In summary, we plea for more standardization in the use of outcome measurement instruments, for conducting high quality systematic reviews on measurement instruments in which the best available outcome measurement instrument is recommended, and for stopping the use of poor outcome measurement instruments.
McLinden, Taylor; Sargeant, Jan M; Thomas, M Kate; Papadopoulos, Andrew; Fazil, Aamir
2014-09-01
Nontyphoidal Salmonella spp. are one of the most common causes of bacterial foodborne illness. Variability in cost inventories and study methodologies limits the possibility of meaningfully interpreting and comparing cost-of-illness (COI) estimates, reducing their usefulness. However, little is known about the relative effect these factors have on a cost-of-illness estimate. This is important for comparing existing estimates and when designing new cost-of-illness studies. Cost-of-illness estimates, identified through a scoping review, were used to investigate the association between descriptive, component cost, methodological, and foodborne illness-related factors such as chronic sequelae and under-reporting with the cost of nontyphoidal Salmonella spp. illness. The standardized cost of nontyphoidal Salmonella spp. illness from 30 estimates reported in 29 studies ranged from $0.01568 to $41.22 United States dollars (USD)/person/year (2012). The mean cost of nontyphoidal Salmonella spp. illness was $10.37 USD/person/year (2012). The following factors were found to be significant in multiple linear regression (p≤0.05): the number of direct component cost categories included in an estimate (0-4, particularly long-term care costs) and chronic sequelae costs (inclusion/exclusion), which had positive associations with the cost of nontyphoidal Salmonella spp. illness. Factors related to study methodology were not significant. Our findings indicated that study methodology may not be as influential as other factors, such as the number of direct component cost categories included in an estimate and costs incurred due to chronic sequelae. Therefore, these may be the most important factors to consider when designing, interpreting, and comparing cost of foodborne illness studies.
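The association described above comes from a multiple linear regression of standardized cost on study-level factors. A minimal single-predictor sketch of the underlying least-squares machinery (closed-form slope and intercept; the data below are invented for illustration, not the review's estimates):

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: y ≈ intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Illustrative: number of direct component cost categories (0-4) vs.
# standardized cost (USD/person/year); values are invented
categories = [0, 1, 2, 3, 4]
cost = [1.0, 4.5, 9.0, 14.0, 20.5]
intercept, slope = ols_fit(categories, cost)  # slope > 0: more categories, higher cost
```

A positive slope here mirrors the review's finding that estimates including more direct component cost categories tend to be higher; the real analysis fits several such predictors simultaneously.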
Lempesi, Evangelia; Toulia, Electra; Pandis, Nikolaos
2017-04-01
The aim of this study was to investigate the expert panel methodology applied in orthodontics and its reporting quality. Additionally, the relationship between the reporting quality and a range of variables was explored. PubMed was searched for orthodontic studies in which the final diagnosis or assessment was made by 2 or more experts published up to March 16, 2015. Reporting quality assessment was conducted using an established modified checklist. The relationship between potential predictors and the total score was assessed using univariable linear regression. We identified 237 studies with a mean score of 9.97 (SD, 1.12) out of a maximum of 15. Critical information about panel methodology was missing in all studies. The panel composition differed substantially across studies, ranging from 2 to 646 panel members, with large variations in the expertise represented. Only 17 studies (7.2%) reported sample size calculations to justify the panel size. Panel members were partly blinded in 65 (27.4%) studies. Most studies failed to report which statistic was used to compute intrarater (65.8%) and interrater (66.2%) agreements. Journal type (nonorthodontic: β, 0.23; 95% CI, -0.07 to 0.54 compared with orthodontic), publication year (β, 0; 95% CI, -0.02 to 0.02 for each additional year), number of authors (1-3: β, 0.30; 95% CI, -0.13 to 0.74 compared with at least 6; 4-5: β, 0.18; 95% CI, -0.29 to 0.33 compared with at least 6), and number of centers involved (single: β, 0.20; 95% CI, -0.14 to 0.54 compared with multicenter) were not significant predictors of improved reporting. Studies published in Asia and Australia had significantly lower scores compared with those published in Europe (β, -0.54; 95% CI, -0.92 to -0.17). Formal guidelines on methodology and reporting of studies involving expert panels are required. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
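The intra- and interrater agreement statistics that most of these studies failed to name are typically chance-corrected coefficients such as Cohen's kappa. A minimal sketch for two raters (the panel labels are hypothetical; this is the standard kappa formula, not a statistic taken from the review):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected agreement if each rater labelled cases independently at random
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical panel ratings ("N" = normal, "M" = malocclusion) for eight cases
r1 = ["N", "N", "M", "M", "N", "M", "N", "M"]
r2 = ["N", "N", "M", "M", "N", "M", "M", "M"]
kappa = cohens_kappa(r1, r2)  # 7/8 observed agreement, corrected for chance
```

Reporting which coefficient was used matters because raw percentage agreement (7/8 here) and kappa (0.75 here) can diverge sharply when category prevalences are skewed.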
Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne
2017-11-01
Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. 
Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. 
The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496).
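Filter performance is reported above in terms of sensitivity (recall against a hand-searched reference standard), precision, and specificity. A minimal sketch of computing the three measures for a hypothetical filter, treating records as set members (the record counts are invented):

```python
def filter_performance(retrieved, relevant, total_records):
    """Sensitivity, precision and specificity of a search filter
    against a hand-searched reference standard."""
    hits = len(retrieved & relevant)          # relevant records the filter found
    sensitivity = hits / len(relevant)
    precision = hits / len(retrieved)
    non_relevant = total_records - len(relevant)
    false_positives = len(retrieved - relevant)
    specificity = (non_relevant - false_positives) / non_relevant
    return sensitivity, precision, specificity

# Hypothetical: 1000 records, 50 relevant RCTs; filter retrieves 80 records,
# of which 40 are true hits (record IDs are placeholders)
relevant = set(range(50))
retrieved = set(range(40)) | set(range(50, 90))
sens, prec, spec = filter_performance(retrieved, relevant, 1000)
```

The trade-off the respondents described, choosing different filters "depending on the requirement for sensitivity or precision", is visible directly in these quantities: widening the filter raises sensitivity at the cost of precision.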
Methodological aspects of clinical trials in tinnitus: A proposal for an international standard
Landgrebe, Michael; Azevedo, Andréia; Baguley, David; Bauer, Carol; Cacace, Anthony; Coelho, Claudia; Dornhoffer, John; Figueiredo, Ricardo; Flor, Herta; Hajak, Goeran; van de Heyning, Paul; Hiller, Wolfgang; Khedr, Eman; Kleinjung, Tobias; Koller, Michael; Lainez, Jose Miguel; Londero, Alain; Martin, William H.; Mennemeier, Mark; Piccirillo, Jay; De Ridder, Dirk; Rupprecht, Rainer; Searchfield, Grant; Vanneste, Sven; Zeman, Florian; Langguth, Berthold
2013-01-01
Chronic tinnitus is a common condition with a high burden of disease. While many different treatments are used in clinical practice, the evidence for the efficacy of these treatments is low and the variance of treatment response between individuals is high. This is most likely due to the great heterogeneity of tinnitus with respect to clinical features as well as underlying pathophysiological mechanisms. There is a clear need to find effective treatment options in tinnitus, however, clinical trials differ substantially with respect to methodological quality and design. Consequently, the conclusions that can be derived from these studies are limited and jeopardize comparison between studies. Here, we discuss our view of the most important aspects of trial design in clinical studies in tinnitus and make suggestions for an international methodological standard in tinnitus trials. We hope that the proposed methodological standard will stimulate scientific discussion and will help to improve the quality of trials in tinnitus. PMID:22789414
Modern proposal of methodology for retrieval of characteristic synthetic rainfall hyetographs
NASA Astrophysics Data System (ADS)
Licznar, Paweł; Burszta-Adamiak, Ewa; Łomotowski, Janusz; Stańczyk, Justyna
2017-11-01
The modern engineering practice of designing and modelling complex drainage systems is based on hydrodynamic modelling and is probabilistic in character. Its practical application requires a change in the rainfall models accepted as input. Previously used artificial rainfall models of simplified form, e.g. block precipitation or Euler type II design storms, are no longer sufficient. A methodology for standardized rainfall hyetographs that accounts for the temporal dynamics of local storm rainfall is therefore urgently needed. The aim of this paper is to present a proposal for an innovative methodology for determining standardized rainfall hyetographs, based on statistical processing of a collection of actual local precipitation records. The proposed methodology classifies standardized rainfall hyetographs using cluster analysis. Its application is illustrated with selected rain gauges located in Poland. The synthetic rainfall hyetographs obtained as the final result may be used for hydrodynamic modelling of sewerage systems, including probabilistic estimation of the required capacity of retention reservoirs.
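Standardizing a hyetograph before classification usually means converting the measured interval depths into a dimensionless cumulative mass curve, so that storms of different totals and durations become comparable. A minimal sketch (this normalization scheme is a common convention and an assumption here, not necessarily the paper's exact procedure):

```python
def standardized_mass_curve(depths):
    """Dimensionless cumulative rainfall curve: fraction of total storm
    depth accumulated after each measurement interval."""
    total = sum(depths)
    curve, accumulated = [], 0.0
    for d in depths:
        accumulated += d
        curve.append(accumulated / total)
    return curve

# Illustrative 5-interval storm (depths in mm per interval)
event = [2.0, 6.0, 8.0, 3.0, 1.0]
curve = standardized_mass_curve(event)  # always ends at 1.0, whatever the total
```

Curves standardized this way can then be fed to a clustering algorithm, with each storm represented by the same fixed-length dimensionless vector.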
Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Guerriero, Lorenzo; Bedini, Remo; Pepe, Gennaro; Colombo, Cesare; Borghi, Gabriella; Macellari, Velio
2009-01-01
Due to major advances in information technology, telemedicine applications are ready for widespread use. Nonetheless, to allow their diffusion in National Health Care Systems (NHCSs), specific methodologies of health technology assessment (HTA) should be used to assess standardization, overall quality, interoperability, and legal, economic and cost-benefit aspects. One of the limits to the diffusion of digital tele-echocardiography (T-E) applications in the NHCS is the lack of a specific methodology for HTA. In the present study, a solution offering a structured HTA of T-E products was designed. The methodology also defined standardized quality levels for the application: the first level represents the minimum level of acceptance, while the other levels are accessory levels useful for a more accurate assessment of the product. The methodology proved useful in rationalizing the standardization process and received a high degree of acceptance from the subjects involved in the study.
Nobre, Moacyr Roberto Cuce; da Costa, Fernanda Marques
2012-02-01
Surrogate endpoints may be used as substitutes for, but often do not predict, clinically relevant events. Objective: To assess the methodological quality of articles that present conclusions based on clinically relevant or surrogate outcomes, in a systematic review of randomised trials and cohort studies of patients with rheumatoid arthritis treated with anti-tumour necrosis factor (TNF) agents. PubMed, Embase and Cochrane databases were searched. The Jadad score, the percentage of Consolidated Standards Of Reporting Trials (CONSORT) statement items adequately reported, and levels of evidence (Centre for Evidence-Based Medicine, Oxford) were used in a descriptive synthesis. Among the 88 articles appraised, 27 had surrogate endpoints, mainly radiographic, and 44 were duplicate publications (74% of articles with surrogate endpoints vs 39% of articles with clinical endpoints; p=0.006). Fewer articles with surrogate endpoints represented a high level of evidence (Level 1b, 33% vs 62%, p=0.037), and the mean percentage of CONSORT statement items met was also lower for articles with surrogate endpoints (62.5 vs 70.7, p=0.026). Although fewer articles with surrogate endpoints were randomised trials (63% vs 74%, p=0.307) and articles with surrogate endpoints had lower Jadad scores (3.0 vs 3.2, p=0.538), these differences were not statistically significant. Studies of anti-TNF agents that report surrogate outcomes are of lesser methodological quality. As such, inclusion of such studies in evidence syntheses may bias results.
Potential errors and misuse of statistics in studies on leakage in endodontics.
Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J
2013-04-01
To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results obtained using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling', 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently selected inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimate of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.
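For ordinal dye-penetration scores of the kind these studies analyze, a rank-based test such as Mann-Whitney U is generally more appropriate than a chi-square test or parametric methods. A minimal sketch of the U statistic with tie handling (the scores are invented for illustration):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y.
    Each pair (a, b) with a > b counts 1; ties count half."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Invented ordinal leakage scores (0 = no leakage .. 3 = severe) for two sealers
sealer_a = [0, 1, 1, 2, 0]
sealer_b = [2, 2, 3, 1, 3]
u = mann_whitney_u(sealer_a, sealer_b)
# Under no group difference, U is near n1 * n2 / 2 (= 12.5 here);
# values far from that suggest one sealer leaks more than the other
```

The rank-based statistic uses the ordering of the scores without pretending the steps between ordinal categories are equal, which is exactly where parametric methods go wrong on such data.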
Albach, Carlos Augusto; Wagland, Richard; Hunt, Katherine J
2018-04-01
This systematic review (1) identifies the current generic and cancer-related patient-reported outcome measures (PROMs) that have been cross-culturally adapted to Brazilian Portuguese and applied to cancer patients and (2) critically evaluates their cross-cultural adaptation (CCA) and measurement properties. Seven databases were searched for articles regarding the translation and evaluation of measurement properties of generic and cancer-related PROMs cross-culturally adapted to Brazilian Portuguese that are applied in adult (≥18 years old) cancer patients. The methodological quality of included studies was assessed using the COSMIN checklist. The bibliographic search retrieved 1674 hits, of which seven studies analysing eight instruments were included in this review. Data on the interpretability of scores were poorly reported. Overall, the quality of the CCA process was inconsistent throughout the studies. None of the included studies performed a cross-cultural validation. The evidence concerning the quality of measurement properties is limited by poor or fair methodological quality. Moreover, limited information regarding measurement properties was provided within the included papers. This review aids the selection process of Brazilian Portuguese PROMs for use in cancer patients. After acknowledging the methodological caveats and strengths of each tool, our opinion is that for quality of life and symptoms assessment the adapted FACT-G version and the ESAS could be recommended, respectively. Future research should rely on the already accepted standards of CCA and validation studies.
Sabour, Siamak
2018-03-08
The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodology and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical test and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. The qualitative variables of sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, likelihood ratio positive and likelihood ratio negative, as well as odds ratio (i.e., ratio of true to false results), are the most appropriate estimates to evaluate validity of a test compared to a gold standard. In the case of quantitative variables, depending on distribution of the variable, Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess validity.
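The validity estimates listed above for qualitative variables all follow from a single 2×2 table of test results against the gold standard. A minimal sketch (the counts are hypothetical, not from the tinnitus study under discussion):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validity estimates from a 2x2 table against a gold standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),                      # positive predictive value
        "npv": tn / (tn + fn),                      # negative predictive value
        "lr_pos": sensitivity / (1 - specificity),  # likelihood ratio positive
        "lr_neg": (1 - sensitivity) / specificity,  # likelihood ratio negative
        "odds_ratio": (tp * tn) / (fp * fn),        # ratio of true to false results
    }

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives
m = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
```

Note that these accuracy measures say nothing about precision: a test can agree perfectly with itself on repeat administration (high reliability) while still misclassifying patients against the gold standard, which is the distinction the letter draws.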
PI-RADS v2: Current standing and future outlook.
Smith, Clayton P; Türkbey, Barış
2018-05-01
The Prostate Imaging-Reporting and Data System (PI-RADS) was created in 2012 to establish standardization in prostate multiparametric magnetic resonance imaging (mpMRI) acquisition, interpretation, and reporting. In hopes of improving upon some of the PI-RADS v1 shortcomings, the PI-RADS Steering Committee released PI-RADS v2 in 2015. This paper reviews the accuracy, interobserver agreement, and clinical outcomes of PI-RADS v2 and comments on the limitations of the current literature. Overall, PI-RADS v2 shows improved sensitivity and similar specificity compared to PI-RADS v1. However, concerns exist regarding interobserver agreement and the heterogeneity of the study methodology.
Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie
2011-06-01
Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results. © The Author(s) 2011
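The unequal-selection-probability problem described above can be illustrated with a toy example: weighting each case by the inverse of its selection probability corrects the estimate, while the unweighted ("naive") estimate is pulled toward the oversampled group. A minimal sketch with invented numbers (not FACES data):

```python
# Illustrative sketch: sampling weights are the inverse of each case's
# selection probability. Ignoring them biases estimates toward whichever
# group was oversampled. All values below are invented.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Group A oversampled (selection prob 0.5 -> weight 2);
# group B undersampled (selection prob 0.1 -> weight 10).
scores  = [10, 10, 10, 10, 20]  # four group-A cases, one group-B case
weights = [2, 2, 2, 2, 10]

naive = sum(scores) / len(scores)          # 12.0, biased toward group A
adjusted = weighted_mean(scores, weights)  # (80 + 200) / 18, about 15.6
```

Nonindependence (clustering) is the second issue the abstract raises; it requires design-based standard errors or multilevel models rather than a simple reweighting.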
Methodological Issues in Meta-Analyzing Standard Deviations: Comment on Bond and DePaulo (2008)
ERIC Educational Resources Information Center
Pigott, Therese D.; Wu, Meng-Jia
2008-01-01
In this comment on C. F. Bond and B. M. DePaulo, the authors raise methodological concerns about the approach used to analyze the data. The authors suggest further refinement of the procedures used, and they compare the approach taken by Bond and DePaulo with standard methods for meta-analysis. (Contains 1 table and 2 figures.)
Nicholson, A; Berger, K; Bohn, R; Carcao, M; Fischer, K; Gringeri, A; Hoots, K; Mantovani, L; Schramm, W; van Hout, B A; Willan, A R; Feldman, B M
2008-01-01
The need for clearly reported studies evaluating the cost of prophylaxis and its overall outcomes has been highlighted in previous literature. To establish minimal "core standards" that can be followed when conducting and reporting economic evaluations of hemophilia prophylaxis. Ten members of the IPSG Economic Analysis Working Group participated in a consensus process using the Nominal Group Technique (NGT). The following topics relating to the economic analysis of prophylaxis studies were addressed: Whose perspective should be taken? Which is the best methodological approach? Is micro- or macro-costing the best costing strategy? What information must be presented about costs and outcomes in order to facilitate local and international interpretation? The group suggests that studies on the economic impact of prophylaxis should be viewed from a societal perspective and be reported using a Cost Utility Analysis (CUA) (with consideration of also reporting a Cost Benefit Analysis [CBA]). All costs that exceed $500 should be used to measure the costs of prophylaxis (macro strategy), including items such as clotting factor costs, hospitalizations, surgical procedures, productivity loss and number of days lost from school or work. Generic and disease-specific quality of life and utility measures should be used to report the outcomes of the study. The IPSG has suggested minimal core standards to be applied to the reporting of economic evaluations of hemophilia prophylaxis. Standardized reporting will facilitate the comparison of studies and will allow for more rational policy decisions and treatment choices.
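The recommended Cost Utility Analysis reduces, at its core, to an incremental cost-utility ratio: extra cost per quality-adjusted life year (QALY) gained. A minimal illustrative sketch, with all figures invented and not drawn from the IPSG recommendations:

```python
# Hypothetical incremental cost-utility ratio (ICER) comparing prophylaxis
# to an alternative treatment strategy from a societal perspective.
# All costs and QALY totals below are invented for illustration.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost per QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical lifetime per-patient totals
ratio = icer(cost_new=300_000.0, cost_old=180_000.0,
             qalys_new=9.5, qalys_old=8.0)
print(ratio)  # 80000.0 (cost per QALY gained)
```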
On the development of a methodology for extensive in-situ and continuous atmospheric CO2 monitoring
NASA Astrophysics Data System (ADS)
Wang, K.; Chang, S.; Jhang, T.
2010-12-01
Carbon dioxide is recognized as the dominant greenhouse gas contributing to anthropogenic global warming. Stringent controls on carbon dioxide emissions are viewed as necessary steps in controlling atmospheric carbon dioxide concentrations. From the viewpoint of policy making, regulation of carbon dioxide emissions and its monitoring are keys to the success of stringent controls on carbon dioxide emissions. In particular, extensive atmospheric CO2 monitoring is a crucial step to ensure that CO2 emission control strategies are closely followed. In this work we develop a methodology that enables reliable and accurate in-situ and continuous atmospheric CO2 monitoring for policy making. The methodology comprises the use of a gas filter correlation (GFC) instrument for in-situ CO2 monitoring, the use of CO2 working standards accompanying the continuous measurements, and the use of NOAA WMO CO2 standard gases for calibrating the working standards. The use of GFC instruments enables a 1-second data sampling frequency, with the interference of water vapor removed by an added dryer. The CO2 measurements are conducted in the following timed and cycled manner: zero CO2 measurement, two standard CO2 gas measurements, and ambient air measurements. The standard CO2 gases are calibrated against NOAA WMO CO2 standards. The methodology has been used for indoor CO2 measurements in a commercial office (about 120 people working inside), for ambient CO2 measurements, and in a fleet of in-service commercial cargo ships for monitoring CO2 over the global marine boundary layer. These measurements demonstrate that our method is reliable, accurate, and traceable to NOAA WMO CO2 standards. The portability of the instrument and the working standards makes the method readily applicable to large-scale and extensive CO2 measurements.
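The zero/two-standard measurement cycle described above amounts to a two-point linear calibration that is then applied to the ambient-air readings. A minimal sketch, with invented instrument readings and concentrations (not values from the study):

```python
# Hypothetical two-point calibration: two CO2 working standards with known
# (reference-traceable) mole fractions define a linear instrument response,
# which is inverted to convert ambient readings to concentrations.
# All readings and concentrations below are invented.

def fit_two_point(reading_low, conc_low, reading_high, conc_high):
    """Linear calibration (slope, offset) from two standard-gas measurements."""
    slope = (conc_high - conc_low) / (reading_high - reading_low)
    offset = conc_low - slope * reading_low
    return slope, offset

slope, offset = fit_two_point(reading_low=1020.0, conc_low=380.0,
                              reading_high=1120.0, conc_high=420.0)

ambient_reading = 1070.0
ambient_ppm = slope * ambient_reading + offset  # 400.0 ppm
```

The periodic zero measurement in the cycle serves to track baseline drift between calibrations; traceability comes from assigning the working-standard concentrations against the NOAA WMO reference gases.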
Han, Julia L; Gandhi, Shreyas; Bockoven, Crystal G; Narayan, Vikram M; Dahm, Philipp
2017-04-01
To assess the quality of published systematic reviews in the urology literature (an extension of our previously reported work), as high-quality systematic reviews play a paramount role in informing evidence-based clinical practice. Our focus was on systematic reviews in the urology literature that incorporated questions of prevention and therapy. To identify such reviews published during a 36-month period (2013-2015), we systematically searched PubMed and hand-searched the table of contents of four major urology journals. Two reviewers independently assessed the methodological quality of those reviews, using the 11-point 'Assessment of Multiple Systematic Reviews' (AMSTAR) instrument. We performed protocol-driven analyses of the data from our present study's 36-month period alone, as well as in aggregate with the data from our previously reported work's study periods (2009-2012 and 1998-2008). In our literature search of the 36-month period (2013-2015), we initially identified 490 possibly relevant reviews, of which 125 met our inclusion criteria. The most common topic of reviews for the 2013-2015 period was oncology (51.2%; n = 64), followed by voiding dysfunction (21.6%; n = 27). The mean [standard deviation (SD)] AMSTAR score in the 2013-2015 period (n = 125) was 4.8 (2.4); 2009-2012 (n = 113), 5.4 (2.3); and 1998-2008 (n = 57), 4.8 (2.0) (P = 0.127). In the 2013-2015 period, the mean (SD) AMSTAR score for the BJU International (n = 25) was 5.6 (2.9); for The Journal of Urology (n = 20), 5.1 (2.6); for European Urology (n = 60), 4.5 (2.2); and for Urology (n = 20), 4.4 (2.2) (P = 0.106). The number of systematic reviews published in the urology literature has exponentially increased, year by year, but their methodological quality has stagnated. To enhance the validity and impact of systematic reviews, all authors and editors must apply established methodological standards. 
© 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
Evaluation Studies of Robotic Rollators by the User Perspective: A Systematic Review.
Werner, Christian; Ullrich, Phoebe; Geravand, Milad; Peer, Angelika; Hauer, Klaus
2016-01-01
Robotic rollators enhance the basic functions of established devices by technically advanced physical, cognitive, or sensory support to increase autonomy in persons with severe impairment. In the evaluation of such ambient assisted living solutions, both the technical and user perspectives are important to prove usability, effectiveness and safety, and to ensure adequate device application. The aim of this systematic review is to summarize the methodology of studies evaluating robotic rollators with focus on the user perspective and to give recommendations for future evaluation studies. A systematic literature search up to December 31, 2014, was conducted based on the Cochrane Review methodology using the electronic databases PubMed and IEEE Xplore. Articles were selected according to the following inclusion criteria: evaluation studies of robotic rollators documenting human-robot interaction, no case reports, published in English language. Twenty-eight studies were identified that met the predefined inclusion criteria. Large heterogeneity in the definitions of the target user group, study populations, study designs and assessment methods was found across the included studies. No generic methodology to evaluate robotic rollators could be identified. We found major methodological shortcomings related to insufficient sample descriptions and sample sizes, and lack of appropriate, standardized and validated assessment methods. Long-term use in habitual environment was also not evaluated. Apart from the heterogeneity, methodological deficits in most of the identified studies became apparent. Recommendations for future evaluation studies include: clear definition of target user group, adequate selection of subjects, inclusion of other assistive mobility devices for comparison, evaluation of the habitual use of advanced prototypes, adequate assessment strategy with established, standardized and validated methods, and statistical analysis of study results. 
Assessment strategies may additionally focus on specific functionalities of the robotic rollators allowing an individually tailored assessment of innovative features to document their added value. © 2016 S. Karger AG, Basel.
Tadić, V; Rahi, J S
2017-01-01
The purpose of this article is to summarise methodological challenges and opportunities in the development and application of patient-reported outcome measures (PROMs) for the rare and complex population of children with visually impairing disorders. Following a literature review on the development and application of PROMs in children in general, including those with disabilities and/or chronic conditions, we identified and discuss here 5 key issues that are specific to children with visual impairment: (1) the conflation between theoretically distinct vision-related constructs and outcomes, (2) the importance of developmentally appropriate approaches to the design and application of PROMs, (3) the feasibility of standard questionnaire formats and administration for children with different levels of visual impairment, (4) the feasibility and nature of self-reporting by visually impaired children, and (5) epidemiological, statistical and ethical considerations. There is an established need for vision-specific, age-appropriate PROMs for use in paediatric ophthalmology, but there are significant practical and methodological challenges in developing and applying appropriate measures. Further understanding of the characteristics and needs of visually impaired children as questionnaire respondents is necessary for the development of quality PROMs and their meaningful application in clinical practice and research. PMID:28085146
The role of private industry in pragmatic comparative effectiveness trials.
Buesching, Don P; Luce, Bryan R; Berger, Marc L
2012-03-01
Comparative effectiveness research (CER) includes pragmatic clinical trials (PCTs) to address 'real-world' effectiveness. CER interest would be expected to stimulate biopharmaceutical manufacturer PCT investment; however, this does not seem to be the case. In this article we identify all industry-sponsored PCT studies from 1996 to 2010; analyze them across a variety of characteristics, including sponsor, research question, design, comparators and results; and suggest methodological and policy changes to spur future manufacturer PCT investment. Nine 'naturalistic', head-to-head versus standard of care or similar agent PCTs were identified. Two included a 'usual care' arm. Chronic care trials' length averaged 12 months (range: 6-24 months), six of which reported equivocal or no difference in effectiveness; results of two chronic and the single acute care PCTs favored the sponsor drug. None reported the sponsor drug inferior. Of seven that evaluated utilization or costs, six reported no differences and four of five studies comparing brand-generic drugs reported no difference. Whereas private investment in PCTs is in the public interest, manufacturers apparently have not yet seen the business case. To induce investment, we propose several methodological and regulatory policy innovations designed to reduce business risk by decreasing outcome variability and increasing trial efficiency, flexibility and market applicability.
The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.
ERIC Educational Resources Information Center
Samers, Bernard N.; And Others
The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…
D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco
2016-02-01
Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose a FCA methodology that uses standard cost and actual quantities to calculate the collection costs of separate and undifferentiated waste. Our methodology allows cost efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. Our methodology allows benchmarking and variance analysis that can be used to identify the causes of off-standards performance and guide managers to deploy resources more efficiently. Our methodology can be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.
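The standard-cost-times-actual-quantity approach described above can be sketched in a few lines: the full collection cost is priced at a standard unit cost, and a positive variance against actual spending flags off-standard performance. All figures are invented for illustration:

```python
# Hypothetical full-cost-accounting sketch: collection cost is computed as
# standard unit cost x actual quantity collected, and variance analysis
# compares it with actual spending. All figures below are invented.

def collection_cost(standard_cost_per_tonne, actual_tonnes):
    """Cost of a waste stream priced at the standard unit cost."""
    return standard_cost_per_tonne * actual_tonnes

def cost_variance(actual_spend, standard_cost_per_tonne, actual_tonnes):
    """Positive variance = spending above standard (off-standard performance)."""
    return actual_spend - collection_cost(standard_cost_per_tonne, actual_tonnes)

separate = collection_cost(standard_cost_per_tonne=120.0, actual_tonnes=500.0)
variance = cost_variance(actual_spend=66_000.0,
                         standard_cost_per_tonne=120.0, actual_tonnes=500.0)
print(separate, variance)  # 60000.0 6000.0
```

Because the standard unit cost is fixed across firms, such figures can be benchmarked without being distorted by firm-specific accounting or purchasing choices, which is the point the abstract makes.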
Screening Methodologies to Support Risk and Technology ...
The Clean Air Act establishes a two-stage regulatory process for addressing emissions of hazardous air pollutants (HAPs) from stationary sources. In the first stage, the Act requires the EPA to develop technology-based standards for categories of industrial sources. We have largely completed the required “Maximum Achievable Control Technology” (MACT) standards. In the second stage of the regulatory process, EPA must review each MACT standard at least every eight years and revise them as necessary, “taking into account developments in practices, processes and control technologies.” We call this requirement the “technology review.” EPA is also required to complete a one-time assessment of the health and environmental risks that remain after sources come into compliance with MACT. This residual risk review also must be done within 8 years of setting the initial MACT standard. If additional risk reductions are necessary to protect public health with an ample margin of safety or to prevent adverse environmental effects, EPA must develop standards to address these remaining risks. Because the risk review is an important component of the RTR process, EPA is seeking SAB input on the scientific credibility of specific enhancements made to our risk assessment methodologies, particularly with respect to screening methodologies, since the last SAB review was completed in 2010. These enhancements to our risk methodologies are outlined in the document title
Boulkedid, Rym; Abdoul, Hendy; Loustau, Marine; Sibony, Olivier; Alberti, Corinne
2011-01-01
Objective The Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed (1) to describe reporting of the Delphi method used to develop quality indicators, (2) to discuss specific methodological skills for quality indicator selection, and (3) to give guidance about this practice. Methodology and Main Findings Three electronic databases were searched over a 30-year period (1978–2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (9% for the experts' number of years of experience to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) reported that feedback was given between rounds, 77% (62/80) reported the method used to achieve consensus and 57% (48/80) listed the quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) studies, with a physical meeting of the panel members, usually between Delphi rounds. The median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria used to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. Conclusion The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys. PMID:21694759
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
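The Shewhart control-chart idea behind check-standard testing can be sketched simply: historical check-standard results set control limits at the process mean plus or minus three standard deviations, and a new run falling outside the limits flags an out-of-control measurement process. Data below are invented for illustration:

```python
# Hypothetical Shewhart individuals chart for a check standard: control
# limits are mean +/- 3 sample standard deviations of historical runs.
# The check-standard values below are invented.

def control_limits(history):
    """Lower and upper 3-sigma control limits from historical results."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / (n - 1)
    sd = var ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def in_control(x, lcl, ucl):
    """True if a new check-standard result is within the control limits."""
    return lcl <= x <= ucl

check_runs = [1.002, 0.998, 1.001, 0.999, 1.000, 1.002, 0.998]
lcl, ucl = control_limits(check_runs)
print(in_control(1.001, lcl, ucl))  # True
print(in_control(1.010, lcl, ucl))  # False: process flagged for investigation
```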
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
Costing evidence for health care decision-making in Austria: A systematic review.
Mayer, Susanne; Kiss, Noemi; Łaszewska, Agata; Simon, Judit
2017-01-01
With rising healthcare costs comes an increasing demand for evidence-informed resource allocation using economic evaluations worldwide. Furthermore, standardization of costing and reporting methods both at international and national levels are imperative to make economic evaluations a valid tool for decision-making. The aim of this review is to assess the availability and consistency of costing evidence that could be used for decision-making in Austria. It describes systematically the current economic evaluation and costing studies landscape focusing on the applied costing methods and their reporting standards. Findings are discussed in terms of their likely impacts on evidence-based decision-making and potential suggestions for areas of development. A systematic literature review of English and German language peer-reviewed as well as grey literature (2004-2015) was conducted to identify Austrian economic analyses. The databases MEDLINE, EMBASE, SSCI, EconLit, NHS EED and Scopus were searched. Publication and study characteristics, costing methods, reporting standards and valuation sources were systematically synthesised and assessed. A total of 93 studies were included. 87% were journal articles, 13% were reports. 41% of all studies were full economic evaluations, mostly cost-effectiveness analyses. Based on relevant standards the most commonly observed limitations were that 60% of the studies did not clearly state an analytical perspective, 25% of the studies did not provide the year of costing, 27% did not comprehensively list all valuation sources, and 38% did not report all applied unit costs. There are substantial inconsistencies in the costing methods and reporting standards in economic analyses in Austria, which may contribute to a low acceptance and lack of interest in economic evaluation-informed decision making. 
To improve comparability and quality of future studies, national costing guidelines should be updated with more specific methodological guidance and a national reference cost library should be set up to allow harmonisation of valuation methods.
Improving automation standards via semantic modelling: Application to ISA88.
Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès
2017-03-01
Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here stems from the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support the improvement of that consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P.; Kumar, Ambuj
2011-01-01
Objectives To assess whether the reported methodological quality of randomized controlled trials (RCTs) reflects their actual methodological quality, and to evaluate the association of effect size (ES) and sample size with methodological quality. Study design Systematic review. Setting Retrospective analysis of all consecutive phase III RCTs published by 8 National Cancer Institute Cooperative Groups through 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Results 429 RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of the RCTs. The results showed no association between sample size and the actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratios [RHR]: 0.94, 95% CI: 0.88, 0.99) and 24% (RHR: 1.24, 95% CI: 1.05, 1.43), respectively. However, assessment of actual quality showed no association between ES and methodological quality. Conclusion The largest study to date shows that poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. PMID:22424985
Methodology and reporting of meta-analyses in the neurosurgical literature.
Klimo, Paul; Thompson, Clinton J; Ragel, Brian T; Boop, Frederick A
2014-04-01
Neurosurgeons are inundated with vast amounts of new clinical research on a daily basis, making it difficult and time-consuming to keep up with the latest literature. Meta-analysis is an extension of a systematic review that employs statistical techniques to pool the data from the literature in order to calculate a cumulative effect size. This is done to answer a clearly defined a priori question. Despite their increasing popularity in the neurosurgery literature, meta-analyses have not been scrutinized in terms of reporting and methodology. The authors performed a literature search using PubMed/MEDLINE to locate all meta-analyses that have been published in the JNS Publishing Group journals (Journal of Neurosurgery, Journal of Neurosurgery: Pediatrics, Journal of Neurosurgery: Spine, and Neurosurgical Focus) or Neurosurgery. Accepted checklists for reporting (PRISMA) and methodology (AMSTAR) were applied to each meta-analysis, and the number of items within each checklist that were satisfactorily fulfilled was recorded. The authors sought to answer 4 specific questions: Are meta-analyses improving 1) with time; 2) when the study met their definition of a meta-analysis; 3) when clinicians collaborated with a potential expert in meta-analysis; and 4) when the meta-analysis was the only focus of the paper? Seventy-two meta-analyses were published in the JNS Publishing Group journals and Neurosurgery between 1990 and 2012. The number of published meta-analyses has increased dramatically in the last several years. The most common topics were vascular, and most were based on observational studies. Only 11 papers were prepared using an established checklist. The average AMSTAR and PRISMA scores (proportion of items satisfactorily fulfilled divided by the total number of eligible items in the respective instrument) were 31% and 55%, respectively. 
Major deficiencies were identified, including the lack of a comprehensive search strategy, study selection and data extraction, assessment of heterogeneity, publication bias, and study quality. Almost one-third of the papers did not meet our basic definition of a meta-analysis. The quality of reporting and methodology was better 1) when the study met our definition of a meta-analysis; 2) when one or more of the authors had experience or expertise in conducting a meta-analysis; 3) when the meta-analysis was not conducted alongside an evaluation of the authors' own data; and 4) in more recent studies. Reporting and methodology of meta-analyses in the neurosurgery literature is excessively variable and overall poor. As these papers are being published with increasing frequency, neurosurgical journals need to adopt a clear definition of a meta-analysis and insist that they be created using checklists for both reporting and methodology. Standardization will ensure high-quality publications.
Hall, Deborah A; Szczepek, Agnieszka J; Kennedy, Veronica; Haider, Haúla
2015-01-01
Introduction In Europe alone, over 70 million people experience tinnitus. Despite its considerable socioeconomic relevance, progress in developing successful treatments has been limited. Clinical effectiveness is judged according to change in primary outcome measures, but because tinnitus is a subjective condition, the definition of outcomes is challenging and it remains unclear which distinct aspects of tinnitus (ie, ‘domains’) are most relevant for assessment. The development of a minimum outcome reporting standard would go a long way towards addressing these problems. In 2006, a consensus meeting recommended using 1 of 4 questionnaires for tinnitus severity as an outcome in clinical trials, in part because of availability in different language translations. Our initiative takes an approach motivated by clinimetrics, first by determining what to measure before seeking to determine how to measure it. Agreeing on the domains that contribute to tinnitus severity (ie, ‘what’) is the first step towards achieving a minimum outcome reporting standard for tinnitus that has been reached via a methodologically rigorous and transparent process. Methods and analysis Deciding what should be the core set of outcomes requires a great deal of discussion and so lends itself well to international effort. This protocol lays out the first-step methodology in defining a Core Domain Set for clinical trials of tinnitus by establishing existing knowledge and practice with respect to which outcome domains have been measured and which instruments used in recent registered and published clinical trials. Ethics and dissemination No ethical issues are foreseen. Findings will be reported at national and international ear, nose and throat (ENT) and audiology conferences and in a peer-reviewed journal, using PRISMA (Preferred Reporting Items for Systematic reviews and Meta-analysis) guidelines. 
Trial registration number The systematic review protocol is registered on PROSPERO (International Prospective Register of Systematic Reviews): CRD42015017525. PMID:26560061
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
...] Request for Comments on Methodology for Conducting an Independent Study of the Burden of Patent-Related... methodologies for performing such a study (Methodology Report). ICF has now provided the USPTO with its Methodology Report, in which ICF recommends methodologies for addressing various topics about estimating the...
A scoping review on the conduct and reporting of scoping reviews.
Tricco, Andrea C; Lillie, Erin; Zarin, Wasifa; O'Brien, Kelly; Colquhoun, Heather; Kastner, Monika; Levac, Danielle; Ng, Carmen; Sharpe, Jane Pearson; Wilson, Katherine; Kenny, Meghan; Warren, Rachel; Wilson, Charlotte; Stelfox, Henry T; Straus, Sharon E
2016-02-09
Scoping reviews are used to identify knowledge gaps, set research agendas, and identify implications for decision-making. The conduct and reporting of scoping reviews is inconsistent in the literature. We conducted a scoping review to identify: papers that utilized and/or described scoping review methods; guidelines for reporting scoping reviews; and studies that assessed the quality of reporting of scoping reviews. We searched nine electronic databases for published and unpublished literature: scoping review papers, scoping review methodology, and reporting guidance for scoping reviews. Two independent reviewers screened citations for inclusion. Data abstraction was performed by one reviewer and verified by a second reviewer. Quantitative (e.g. frequencies of methods) and qualitative (i.e. content analysis of the methods) syntheses were conducted. After screening 1525 citations and 874 full-text papers, 516 articles were included, of which 494 were scoping reviews. The 494 scoping reviews were disseminated between 1999 and 2014, with 45% published after 2012. Most of the scoping reviews were conducted in North America (53%) or Europe (38%), and reported a public source of funding (64%). The number of studies included in the scoping reviews ranged from 1 to 2600 (mean of 118). Using the Joanna Briggs Institute methodology guidance for scoping reviews, only 13% of the scoping reviews reported the use of a protocol, 36% used two reviewers for selecting citations for inclusion, 29% used two reviewers for full-text screening, 30% used two reviewers for data charting, and 43% used a pre-defined charting form. In most cases, the results of the scoping review were used to identify evidence gaps (85%), provide recommendations for future research (84%), or identify strengths and limitations (69%). We did not identify any guidelines for reporting scoping reviews or studies that assessed the quality of scoping review reporting.
The number of scoping reviews conducted per year has steadily increased since 2012. Scoping reviews are used to inform research agendas and identify implications for policy or practice. As such, improvements in reporting and conduct are imperative. Further research on scoping review methodology is warranted, and in particular, there is need for a guideline to standardize reporting.
Methodological Review of Intimate Partner Violence Prevention Research
ERIC Educational Resources Information Center
Murray, Christine E.; Graybeal, Jennifer
2007-01-01
The authors present a methodological review of empirical program evaluation research in the area of intimate partner violence prevention. The authors adapted and utilized criterion-based rating forms to standardize the evaluation of the methodological strengths and weaknesses of each study. The findings indicate that the limited amount of…
Improving Mathematics Performance among Secondary Students with EBD: A Methodological Review
ERIC Educational Resources Information Center
Mulcahy, Candace A.; Krezmien, Michael P.; Travers, Jason
2016-01-01
In this methodological review, the authors apply special education research quality indicators and standards for single case design to analyze mathematics intervention studies for secondary students with emotional and behavioral disorders (EBD). A systematic methodological review of literature from 1975 to December 2012 yielded 19 articles that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinelli, R; Hamilton, T; Brown, T
2006-05-30
This report describes a standardized methodology used by researchers from the Center for Accelerator Mass Spectrometry (CAMS) (Energy and Environment Directorate) and the Environmental Radiochemistry Group (Chemistry and Materials Science Directorate) at the Lawrence Livermore National Laboratory (LLNL) for the full isotopic analysis of uranium from solution. The methodology has largely been developed for use in characterizing the uranium composition of selected nuclear materials but may also be applicable to environmental studies and assessments of public, military or occupational exposures to uranium using in-vitro bioassay monitoring techniques. Uranium isotope concentrations and isotopic ratios are measured using a combination of Multi-Collector Inductively Coupled Plasma Mass Spectrometry (MC ICP-MS), Accelerator Mass Spectrometry (AMS) and Alpha Spectrometry.
Hallas, Gary; Monis, Paul
2015-01-01
The enumeration of bacteria using plate-based counts is a core technique used by food and water microbiology testing laboratories. However, manual counting of bacterial colonies is both time and labour intensive, can vary between operators and also requires manual entry of results into laboratory information management systems, which can be a source of data entry error. An alternative is to use automated digital colony counters, but there is a lack of peer-reviewed validation data to allow incorporation into standards. We compared the performance of digital counting technology (ProtoCOL3) against manual counting using criteria defined in internationally recognized standard methods. Digital colony counting provided a robust, standardized system suitable for adoption in a commercial testing environment. The digital technology has several advantages:
• Improved measurement of uncertainty by using a standard and consistent counting methodology with less operator error.
• Efficiency for labour and time (reduced cost).
• Elimination of manual entry of data onto LIMS.
• Faster result reporting to customers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Richard
As defined in the preamble of the final rule, the entire DOE facility on the Oak Ridge Reservation (ORR) must meet the 10 mrem/yr ED standard. In other words, the combined ED from all radiological air emission sources from Y-12 National Security Complex (Y-12 Complex), Oak Ridge National Laboratory (ORNL), East Tennessee Technology Park (ETTP), Oak Ridge Institute for Science and Education (ORISE) and any other DOE operation on the reservation must meet the 10 mrem/yr standard. Compliance with the standard is demonstrated through emission sampling, monitoring, calculations and radiation dose modeling in accordance with approved EPA methodologies and procedures. DOE estimates the ED to many individuals or receptor points in the vicinity of ORR, but it is the dose to the maximally exposed individual (MEI) that determines compliance with the standard.
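The compliance logic described above (sum the effective dose over all sources at each receptor, then compare the maximally exposed individual's total with the 10 mrem/yr standard) can be sketched as follows; all receptor names and dose values are made up for illustration and are not ORR data:

```python
# Site-wide compliance check sketch: ED contributions (mrem/yr) per
# receptor from each modelled emission source. Values are illustrative.
DOSE_STANDARD_MREM_PER_YR = 10.0

receptor_doses = {
    "receptor_A": {"Y-12": 0.12, "ORNL": 0.30, "ETTP": 0.05},
    "receptor_B": {"Y-12": 0.40, "ORNL": 0.22, "ETTP": 0.01},
}

# combined ED per receptor, summed across all sources
totals = {r: sum(src.values()) for r, src in receptor_doses.items()}

# the maximally exposed individual (MEI) determines compliance
mei_receptor = max(totals, key=totals.get)
mei_dose = totals[mei_receptor]
compliant = mei_dose <= DOSE_STANDARD_MREM_PER_YR
print(mei_receptor, round(mei_dose, 2), compliant)
```

The real demonstration of compliance relies on EPA-approved dispersion and dose models rather than a simple table of doses; this only shows the summation-and-maximum structure of the check.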
Hasan, Haroon; Muhammed, Taaha; Yu, Jennifer; Taguchi, Kelsi; Samargandi, Osama A; Howard, A Fuchsia; Lo, Andrea C; Olson, Robert; Goddard, Karen
2017-10-01
The objective of our study was to evaluate the methodological quality of systematic reviews and meta-analyses in Radiation Oncology. A systematic literature search was conducted for all eligible systematic reviews and meta-analyses in Radiation Oncology from 1966 to 2015. Methodological characteristics were abstracted from all works that satisfied the inclusion criteria and quality was assessed using the critical appraisal tool, AMSTAR. Regression analyses were performed to determine factors associated with a higher score of quality. Following exclusion based on a priori criteria, 410 studies (157 systematic reviews and 253 meta-analyses) satisfied the inclusion criteria. Meta-analyses were found to be of fair to good quality while systematic reviews were found to be of less than fair quality. Factors associated with higher quality scores in the multivariable analysis were: including primary studies consisting of randomized controlled trials, performing a meta-analysis, and applying a recommended guideline for establishing a systematic review protocol and/or reporting. Based on AMSTAR, systematic reviews and meta-analyses may introduce a high risk of bias if applied to inform decision-making. We recommend that decision-makers in Radiation Oncology scrutinize the methodological quality of systematic reviews and meta-analyses prior to assessing their utility to inform evidence-based medicine, and that researchers adhere to methodological standards outlined in validated guidelines when embarking on a systematic review. Copyright © 2017 Elsevier Ltd. All rights reserved.
The economic burden of physical inactivity: a systematic review and critical appraisal.
Ding, Ding; Kolbe-Alexander, Tracy; Nguyen, Binh; Katzmarzyk, Peter T; Pratt, Michael; Lawson, Kenny D
2017-10-01
To summarise the literature on the economic burden of physical inactivity in populations, with emphases on appraising the methodologies and providing recommendations for future studies. Systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PROSPERO registration number CRD42016047705). Electronic databases for peer-reviewed and grey literature were systematically searched, followed by reference searching and consultation with experts. Studies that examined the economic consequences of physical inactivity in a population/population-based sample, with clearly stated methodologies and at least an abstract/summary written in English. Of the 40 eligible studies, 27 focused on direct healthcare costs only, 13 also estimated indirect costs and one study additionally estimated household costs. For direct costs, 23 studies used a population attributable fraction (PAF) approach with estimated healthcare costs attributable to physical inactivity ranging from 0.3% to 4.6% of national healthcare expenditure; 17 studies used an econometric approach, which tended to yield higher estimates than those using a PAF approach. For indirect costs, 10 studies used a human capital approach, two used a friction cost approach and one used a value of a statistical life approach. Overall, estimates varied substantially, even within the same country, depending on analytical approaches, time frame and other methodological considerations. Estimating the economic burden of physical inactivity is an area of increasing importance that requires further development. There is a marked lack of consistency in methodological approaches and transparency of reporting. 
Future studies could benefit from cross-disciplinary collaborations involving economists and physical activity experts, taking a societal perspective and following best practices in conducting and reporting analysis, including accounting for potential confounding, reverse causality and comorbidity, applying discounting and sensitivity analysis, and reporting assumptions, limitations and justifications for approaches taken. We have adapted the Consolidated Health Economic Evaluation Reporting Standards checklist as a guide for future estimates of the economic burden of physical inactivity and other risk factors. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
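The population attributable fraction (PAF) approach used by most of the direct-cost studies above typically rests on Levin's formula; a minimal sketch, using made-up prevalence, relative-risk, and cost figures (the reviewed studies use adjusted variants and country-specific estimates):

```python
def paf(prevalence, relative_risk):
    """Levin's population attributable fraction.

    PAF = p*(RR - 1) / (1 + p*(RR - 1)), where p is the prevalence of
    the exposure (here, physical inactivity) and RR the relative risk
    of disease among the exposed. This is the unadjusted form; applied
    studies often substitute confounder-adjusted RRs.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def attributable_cost(prevalence, relative_risk, disease_cost):
    """Healthcare cost of a disease attributable to the exposure."""
    return paf(prevalence, relative_risk) * disease_cost

# Illustrative numbers only: 30% of the population inactive, RR of 1.3
# for some disease, $100m annual healthcare cost for that disease.
fraction = paf(0.30, 1.3)
cost = attributable_cost(0.30, 1.3, 100e6)
print(round(fraction, 3), round(cost))
```

Summing such attributable costs over the diseases linked to inactivity gives the national estimates (0.3% to 4.6% of healthcare expenditure) cited in the review; the econometric approach works instead from regression models of individual-level cost data.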
NASA Astrophysics Data System (ADS)
Snaith, Henry J.; Hacke, Peter
2018-06-01
Photovoltaic modules are expected to operate in the field for more than 25 years, so reliability assessment is critical for the commercialization of new photovoltaic technologies. In early development stages, understanding and addressing the device degradation mechanisms are the priorities. However, any technology targeting large-scale deployment must eventually pass industry-standard qualification tests and undergo reliability testing to validate the module lifetime. In this Perspective, we review the methodologies used to assess the reliability of established photovoltaics technologies and to develop standardized qualification tests. We present the stress factors and stress levels for degradation mechanisms currently identified in pre-commercial perovskite devices, along with engineering concepts for mitigation of those degradation modes. Recommendations for complete and transparent reporting of stability tests are given, to facilitate future inter-laboratory comparisons and to further the understanding of field-relevant degradation mechanisms, which will benefit the development of accelerated stress tests.
Persson, M; Sandy, J R; Waylen, A; Wills, A K; Al-Ghatam, R; Ireland, A J; Hall, A J; Hollingworth, W; Jones, T; Peters, T J; Preston, R; Sell, D; Smallridge, J; Worthington, H; Ness, A R
2015-01-01
Structured Abstract Objectives We describe the methodology for a major study investigating the impact of reconfigured cleft care in the United Kingdom (UK) 15 years after an initial survey, detailed in the Clinical Standards Advisory Group (CSAG) report in 1998, had informed government recommendations on centralization. Setting and Sample Population This is a UK multicentre cross-sectional study of 5-year-olds born with non-syndromic unilateral cleft lip and palate. Children born between 1 April 2005 and 31 March 2007 were seen in cleft centre audit clinics. Materials and Methods Consent was obtained for the collection of routine clinical measures (speech recordings, hearing, photographs, models, oral health, psychosocial factors) and anthropometric measures (height, weight, head circumference). The methodology for each clinical measure followed those of the earlier survey as closely as possible. Results We identified 359 eligible children and recruited 268 (74.7%) to the study. Eleven separate records for each child were collected at the audit clinics. In total, 2666 (90.4%) were collected from a potential 2948 records. The response rates for the self-reported questionnaires, completed at home, were 52.6% for the Health and Lifestyle Questionnaire and 52.2% for the Satisfaction with Service Questionnaire. Conclusions Response rates and measures were similar to those achieved in the previous survey. There are practical, administrative and methodological challenges in repeating cross-sectional surveys 15 years apart and producing comparable data. PMID:26567851
Chhapola, Viswas; Tiwari, Soumya; Deepthi, Bobbity; Henry, Brandon Michael; Brar, Rekha; Kanwal, Sandeep Kumar
2018-06-01
A plethora of research is available on ultrasonographic kidney size standards. We performed a systematic review of methodological quality of ultrasound studies aimed at developing normative renal parameters in healthy children, by evaluating the risk of bias (ROB) using the 'Anatomical Quality Assessment (AQUA)' tool. We searched Medline, Scopus, CINAHL, and Google Scholar on June 4, 2018, and observational studies measuring kidney size by ultrasonography in healthy children (0-18 years) were included. The ROB of each study was evaluated in five domains using a 20-item coding scheme based on the AQUA tool framework. Fifty-four studies were included. Domain 1 (subject characteristics) had a high ROB in 63% of studies due to the unclear description of age, sex, and ethnicity. The performance in Domain 2 (study design) was the best, with 85% of studies having a prospective design. Methodological characterization (Domain 3) was poor across the studies (< 10% compliance), with suboptimal performance in the description of patient positioning, operator experience, and assessment of intra/inter-observer reliability. About three-fourths of the studies had a low ROB in Domain 4 (descriptive anatomy). Domain 5 (reporting of results) had a high ROB in approximately half of the studies, the majority reporting results in the form of central tendency measures. Significant deficiencies and heterogeneity were observed in the methodological quality of USG studies performed to date for measurement of kidney size in children. We hereby provide a framework for conducting such studies in the future. PROSPERO (CRD42017071601).
2012-01-01
Background The standardisation of the assessment methodology and case definition represents a major precondition for the comparison of study results and the conduct of meta-analyses. International guidelines provide recommendations for the standardisation of falls methodology; however, injurious falls have not been targeted. The aim of the present article was to review systematically the range of case definitions and methods used to measure and report on injurious falls in randomised controlled trials (RCTs) on fall prevention. Methods An electronic literature search of selected comprehensive databases was performed to identify injurious falls definitions in published trials. Inclusion criteria were: RCTs on falls prevention published in English, study population ≥ 65 years, definition of injurious falls as a study endpoint by using the terms "injuries" and "falls". Results The search yielded 2089 articles; 2048 were excluded according to the defined inclusion criteria. Forty-one articles were included. The systematic analysis of the methodology applied in RCTs disclosed substantial variations in the definition and methods used to measure and document injurious falls. The limited standardisation hampered comparability of study results. Our results also highlight that studies which used a similar, standardised definition of injurious falls showed comparable outcomes. Conclusions No standard for defining, measuring, and documenting injurious falls could be identified among published RCTs. A standardised injurious falls definition enhances the comparability of study results, as demonstrated by a subgroup of RCTs that used a similar definition. Recommendations for standardising the methodology are given in the present review. PMID:22510239
Hajibandeh, Shahab; Hajibandeh, Shahin; Antoniou, George A; Green, Patrick A; Maden, Michelle; Torella, Francesco
2017-04-01
Purpose We aimed to investigate the association between bibliometric parameters and the reporting and methodological quality of vascular and endovascular surgery randomised controlled trials. Methods The most recent 75 and oldest 75 randomised controlled trials published in leading journals over a 10-year period were identified. The reporting quality was analysed using the CONSORT statement, and methodological quality with the Intercollegiate Guidelines Network checklist. We used exploratory univariate and multivariable linear regression analysis to investigate associations. Findings Bibliometric parameters such as type of journal, study design reported in the title, number of pages, external funding, industry sponsorship and number of citations are associated with reporting quality. Moreover, parameters such as type of journal, subject area and study design reported in the title are associated with methodological quality. Conclusions The bibliometric parameters of randomised controlled trials may be independent predictors for their reporting and methodological quality. Moreover, the reporting quality of randomised controlled trials is associated with their methodological quality and vice versa.
Jammer, Ib; Wickboldt, Nadine; Sander, Michael; Smith, Andrew; Schultz, Marcus J; Pelosi, Paolo; Leva, Brigitte; Rhodes, Andrew; Hoeft, Andreas; Walder, Bernhard; Chew, Michelle S; Pearse, Rupert M
2015-02-01
There is a need for large trials that test the clinical effectiveness of interventions in the field of perioperative medicine. Clinical outcome measures used in such trials must be robust, clearly defined and patient-relevant. Our objective was to develop standards for the use of clinical outcome measures to strengthen the methodological quality of perioperative medicine research. A literature search was conducted using PubMed and opinion leaders worldwide were invited to nominate papers that they believed the group should consider. The full texts of relevant articles were reviewed by the taskforce members and then discussed to reach a consensus on the required standards. The report was then circulated to opinion leaders for comment and review. This report describes definitions for 22 individual adverse events with a system of severity grading for each. In addition, four composite outcome measures were identified, which were designed to evaluate postoperative outcomes. The group also agreed on standards for four outcome measures for the evaluation of healthcare resource use and quality of life. Guidance for use of these outcome measures is provided, with particular emphasis on appropriate duration of follow-up. This report provides clearly defined and patient-relevant outcome measures for large clinical trials in perioperative medicine. These outcome measures may also be of use in clinical audit. This report is intended to complement and not replace other related work to improve assessment of clinical outcomes following specific surgical procedures.
DeBoer, D J; Hillier, A
2001-09-20
Serum-based in vitro "allergy tests" are commercially available to veterinarians, and are widely used in diagnostic evaluation of a canine atopic patient. Following initial clinical diagnosis, panels of allergen-specific IgE measurements may be performed in an attempt to identify to which allergens the atopic dog is hypersensitive. Methodology for these tests varies by laboratory; few critical studies have evaluated performance of these tests, and current inter-laboratory standardization and quality control measures are inadequate. Other areas where information is critically limited include the usefulness of these tests in diagnosis of food allergy, the effect of extrinsic factors such as season of the year on results, and the influence of corticosteroid treatment on test results. Allergen-specific IgE serological tests are never completely sensitive, nor completely specific. There is only partial correlation between the serum tests and intradermal testing; however, the significance of discrepant results is unknown and unstudied. Variation in test methodologies along with the absence of universal standardization and reporting procedures have created confusion, varying study results, and an inability to compare between studies performed by different investigators.
Cruz, Rebeca; Casal, Susana
2013-11-15
Vitamin E analysis in green vegetables is performed by an array of different methods, making it difficult to compare published data or choosing the adequate one for a particular sample. Aiming to achieve a consistent method with wide applicability, the current study reports the development and validation of a fast micro-method for quantification of vitamin E in green leafy vegetables. The methodology uses solid-liquid extraction based on the Folch method, with tocol as internal standard, and normal-phase HPLC with fluorescence detection. A large linear working range was confirmed, being highly reproducible, with inter-day precisions below 5% (RSD). Method sensitivity was established (below 0.02 μg/g fresh weight), and accuracy was assessed by recovery tests (>96%). The method was tested in different green leafy vegetables, evidencing diverse tocochromanol profiles, with variable ratios and amounts of α- and γ-tocopherol, and other minor compounds. The methodology is adequate for routine analyses, with a reduced chromatographic run (<7 min) and organic solvent consumption, and requires only standard chromatographic equipment available in most laboratories. Copyright © 2013 Elsevier Ltd. All rights reserved.
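The internal-standard quantification underlying the HPLC method above (tocol as internal standard) follows the usual ratio calculation; a sketch with illustrative peak areas, response factor, and sample mass (none of these numbers are the paper's validated parameters):

```python
def conc_by_internal_standard(area_analyte, area_is, mass_is_ug,
                              response_factor, sample_mass_g):
    """Quantify an analyte against an internal standard.

    amount_analyte = (A_analyte / A_IS) * mass_IS / RRF, where RRF is
    the relative response factor obtained from calibration. The result
    is expressed per gram of fresh sample weight. Names and numbers
    are illustrative.
    """
    amount_ug = (area_analyte / area_is) * mass_is_ug / response_factor
    return amount_ug / sample_mass_g  # ug/g fresh weight

# e.g. alpha-tocopherol peak twice the tocol peak, 1 ug tocol added,
# RRF of 1.0, 0.5 g of leaf sample -> 4.0 ug/g fresh weight
print(conc_by_internal_standard(2000, 1000, 1.0, 1.0, 0.5))
```

Using an internal standard this way corrects for losses during the Folch-type extraction and for injection-volume variability, which is part of why the reported recoveries exceed 96%.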
Macroeconomic effects on mortality revealed by panel analysis with nonlinear trends.
Ionides, Edward L; Wang, Zhen; Tapia Granados, José A
2013-10-03
Many investigations have used panel methods to study the relationships between fluctuations in economic activity and mortality. A broad consensus has emerged on the overall procyclical nature of mortality: perhaps counter-intuitively, mortality typically rises above its trend during expansions. This consensus has been tarnished by inconsistent reports on the specific age groups and mortality causes involved. We show that these inconsistencies result, in part, from the trend specifications used in previous panel models. Standard econometric panel analysis involves fitting regression models using ordinary least squares, employing standard errors which are robust to temporal autocorrelation. The model specifications include a fixed effect, and possibly a linear trend, for each time series in the panel. We propose alternative methodology based on nonlinear detrending. Applying our methodology on data for the 50 US states from 1980 to 2006, we obtain more precise and consistent results than previous studies. We find procyclical mortality in all age groups. We find clear procyclical mortality due to respiratory disease and traffic injuries. Predominantly procyclical cardiovascular disease mortality and countercyclical suicide are subject to substantial state-to-state variation. Neither cancer nor homicide have significant macroeconomic association.
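The point about trend specification can be illustrated with a toy single-series example: if mortality follows a curved secular trend plus a procyclical component, a linear detrend leaves trend residue that masks the cyclical association, while a more flexible trend recovers it. The quadratic used here is only a stand-in for the authors' nonlinear detrending, and all numbers are simulated:

```python
import numpy as np

def detrend(series, degree):
    """Remove a polynomial trend of the given degree (degree 1 mimics
    the standard per-state linear trend; a higher degree is a simple
    stand-in for a flexible nonlinear trend)."""
    t = np.arange(len(series))
    coeffs = np.polyfit(t, series, degree)
    return series - np.polyval(coeffs, t)

rng = np.random.default_rng(0)
t = np.arange(27)  # e.g. annual data, 1980-2006
# toy series: economic activity cycles; mortality has a curved secular
# trend plus a procyclical component tied to activity (all simulated)
activity = np.sin(t / 3.0) + 0.1 * rng.standard_normal(27)
mortality = 900.0 - 8.0 * t + 0.15 * t**2 + 5.0 * activity

r_linear = np.corrcoef(detrend(mortality, 1), detrend(activity, 1))[0, 1]
r_quad = np.corrcoef(detrend(mortality, 2), detrend(activity, 2))[0, 1]
print(r_linear < r_quad)  # True: the flexible trend isolates the cycle
```

In the panel setting this extends to one trend per state plus autocorrelation-robust standard errors; the sketch only shows why the choice of trend specification can flip or blur the estimated mortality-economy association.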
Abma, Femke I; van der Klink, Jac J L; Terwee, Caroline B; Amick, Benjamin C; Bültmann, Ute
2012-01-01
During the past decade, common mental disorders (CMD) have emerged as a major public and occupational health problem in many countries. Several instruments have been developed to measure the influence of health on functioning at work. To select appropriate instruments for use in occupational health practice and research, the measurement properties (eg, reliability, validity, responsiveness) must be evaluated. The objective of this study is to appraise critically and compare the measurement properties of self-reported health-related work-functioning instruments among workers with CMD. A systematic review was performed searching three electronic databases. Papers were included that: (i) mainly focused on the development and/or evaluation of the measurement properties of a self-reported health-related work-functioning instrument; (ii) were conducted in a CMD population; and (iii) were full-text original papers. Quality appraisal was performed using the consensus-based standards for the selection of health status measurement instruments (COSMIN) checklist. Five papers evaluating measurement properties of five self-reported health-related work-functioning instruments in CMD populations were included. There is little evidence available for the measurement properties of the identified instruments in this population, mainly due to low methodological quality of the included studies. The available evidence on measurement properties is based on studies of poor-to-fair methodological quality. Information on a number of measurement properties, such as measurement error, content validity, and cross-cultural validity is still lacking. Therefore, no evidence-based decisions and recommendations can be made for the use of health-related work functioning instruments. Studies of high methodological quality are needed to properly assess the existing instruments' measurement properties.
2012-01-01
Background Healthcare accreditation standards are advocated as an important means of improving clinical practice and organisational performance. Standard development agencies have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. The study’s purpose was to examine the empirical research that grounds the development methods and application of healthcare accreditation standards. Methods A multi-method strategy was employed over the period March 2010 to August 2011. Five academic health research databases (Medline, PsycINFO, Embase, Social work abstracts, and CINAHL) were interrogated, the websites of 36 agencies associated with the study topic were investigated, and a snowball search was undertaken. Search criteria included accreditation research studies, in English, addressing standards and their impact. Searching in stage 1 initially selected 9386 abstracts. In stage 2, this selection was refined against the inclusion criteria; empirical studies (n = 2111) were identified and refined to a selection of 140 papers with the exclusion of clinical or biomedical and commentary pieces. These were independently reviewed by two researchers and reduced to 13 articles that met the study criteria. Results The 13 articles were analysed according to four categories: overall findings; standards development; implementation issues; and impact of standards. Studies have only occurred in the acute care setting, predominantly in 2003 (n = 5) and 2009 (n = 4), and in the United States (n = 8). A multidisciplinary focus (n = 9) and mixed method approach (n = 11) are common characteristics.
Three interventional studies were identified, with the remaining 10 studies having research designs to investigate clinical or organisational impacts. No study directly examined standards development or other issues associated with their progression. Only one study noted implementation issues, identifying several enablers and barriers. Standards were reported to improve organisational efficiency and staff circumstances. However, the impact on clinical quality was mixed, with both improvements and a lack of measurable effects recorded. Conclusion Standards are ubiquitous within healthcare and are generally considered to be an important means by which to improve clinical practice and organisational performance. However, there is a lack of robust empirical evidence examining the development, writing, implementation and impacts of healthcare accreditation standards. PMID:22995152
Greenfield, David; Pawsey, Marjorie; Hinchcliff, Reece; Moldovan, Max; Braithwaite, Jeffrey
2012-09-20
Healthcare accreditation standards are advocated as an important means of improving clinical practice and organisational performance. Standard development agencies have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. The study's purpose was to examine the empirical research that grounds the development methods and application of healthcare accreditation standards. A multi-method strategy was employed over the period March 2010 to August 2011. Five academic health research databases (Medline, PsycINFO, Embase, Social work abstracts, and CINAHL) were interrogated, the websites of 36 agencies associated with the study topic were investigated, and a snowball search was undertaken. Search criteria included accreditation research studies, in English, addressing standards and their impact. Searching in stage 1 initially selected 9386 abstracts. In stage 2, this selection was refined against the inclusion criteria; empirical studies (n = 2111) were identified and refined to a selection of 140 papers with the exclusion of clinical or biomedical and commentary pieces. These were independently reviewed by two researchers and reduced to 13 articles that met the study criteria. The 13 articles were analysed according to four categories: overall findings; standards development; implementation issues; and impact of standards. Studies have only occurred in the acute care setting, predominantly in 2003 (n = 5) and 2009 (n = 4), and in the United States (n = 8). A multidisciplinary focus (n = 9) and mixed method approach (n = 11) are common characteristics. Three interventional studies were identified, with the remaining 10 studies having research designs to investigate clinical or organisational impacts.
No study directly examined standards development or other issues associated with their progression. Only one study noted implementation issues, identifying several enablers and barriers. Standards were reported to improve organisational efficiency and staff circumstances. However, the impact on clinical quality was mixed, with both improvements and a lack of measurable effects recorded. Standards are ubiquitous within healthcare and are generally considered to be an important means by which to improve clinical practice and organisational performance. However, there is a lack of robust empirical evidence examining the development, writing, implementation and impacts of healthcare accreditation standards.
Sarikoc, Gamze; Ozcan, Celale Tangul; Elcin, Melih
2017-04-01
The use of standardized patients is not very common in psychiatric nursing education and there has been no study conducted in Turkey. This study evaluated the impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students. This manuscript addressed the quantitative aspect of a doctoral thesis study in which both quantitative and qualitative methods were used. A pre-test and post-test were employed in the quantitative analysis in a randomized and controlled study design. The motivation scores, and interim and post-test scores for perceived learning were higher in the experimental group compared to pre-test scores and the scores of the control group. The students in the experimental group reported that they felt more competent about practical training in clinical psychiatry, as well as in performing interviews with patients having mental problems, and reported less anxiety about performing an interview when compared to students in the control group. Including standardized patient methodology in the nursing education curriculum to improve students' knowledge and skills is considered beneficial in the training of mental health nurses. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tan, Siok Swan; Bakker, Jan; Hoogendoorn, Marga E; Kapila, Atul; Martin, Joerg; Pezzi, Angelo; Pittoni, Giovanni; Spronk, Peter E; Welte, Robert; Hakkaart-van Roijen, Leona
2012-01-01
The objective of the present study was to measure and compare the direct costs of intensive care unit (ICU) days at seven ICU departments in Germany, Italy, the Netherlands, and the United Kingdom by means of a standardized costing methodology. A retrospective cost analysis of ICU patients was performed from the hospital's perspective. The standardized costing methodology was developed on the basis of the availability of data at the seven ICU departments. It entailed the application of the bottom-up approach for "hotel and nutrition" and the top-down approach for "diagnostics," "consumables," and "labor." Direct costs per ICU day ranged from €1168 to €2025. Even though the distribution of costs varied by cost component, labor was the most important cost driver at all departments. The costs for "labor" amounted to €1629 at department G but were fairly similar at the other departments (€711 ± 115). Direct costs of ICU days vary widely between the seven departments. Our standardized costing methodology could serve as a valuable instrument to compare actual cost differences, such as those resulting from differences in patient case-mix. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
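The mixed bottom-up/top-down costing described in the abstract above can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the authors' actual costing model: the function names, every cost figure, and the 6,000-day denominator are hypothetical; only the structure (pricing "hotel and nutrition" per counted item, while dividing aggregate annual costs for "labor", "diagnostics", and "consumables" over total ICU days) follows the abstract.

```python
def top_down(annual_cost, total_icu_days):
    """Top-down: spread an aggregate annual cost evenly over all ICU days."""
    return annual_cost / total_icu_days

def bottom_up(resource_use, unit_prices):
    """Bottom-up: price each counted resource item individually."""
    return sum(units * unit_prices[item] for item, units in resource_use.items())

# Hypothetical cost per ICU day for one department (all figures invented;
# the result happens to land within the reported €1168-€2025 range)
cost_per_day = (bottom_up({"hotel": 1, "nutrition": 3},
                          {"hotel": 180.0, "nutrition": 15.0})
                + top_down(8_100_000.0, 6_000)    # labor
                + top_down(1_200_000.0, 6_000)    # diagnostics
                + top_down(900_000.0, 6_000))     # consumables
```

Keeping the two approaches as separate functions mirrors the abstract's point that the split was chosen by data availability, not by preference.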
Report: new guidelines for characterization of municipal solid waste: the Portuguese case.
da Graça Madeira Martinho, Maria; Silveira, Ana Isabel; Fernandes Duarte Branco, Elsa Maria
2008-10-01
This report proposes a new set of guidelines for the characterization of municipal solid waste. It is based on an analysis of reference methodologies, used internationally, and a case study of Valorsul (a company that handles recovery and treatment of solid waste in the North Lisbon Metropolitan Area). In particular, the suggested guidelines present a new definition of the waste to be analysed, change the sampling unit and establish statistical standards for the results obtained. In these new guidelines, the sampling level is the waste collection vehicle and contamination and moisture are taken into consideration. Finally, focus is on the quality of the resulting data, which is essential for comparability of data between countries. These new guidelines may also be applicable outside Portugal because the methodology includes, besides municipal mixed waste, separately collected fractions of municipal waste. They are a response to the need for information concerning Portugal (e.g. Eurostat or OECD inquiries) and follow European Union municipal solid waste management policies (e.g. packaging waste recovery and recycling targets and the reduction of biodegradable waste going to landfill).
Harden, Angela; Garcia, Jo; Oliver, Sandy; Rees, Rebecca; Shepherd, Jonathan; Brunton, Ginny; Oakley, Ann
2004-09-01
Methods for systematic reviews are well developed for trials, but not for non-experimental or qualitative research. This paper describes the methods developed for reviewing research on people's perspectives and experiences ("views" studies) alongside trials within a series of reviews on young people's mental health, physical activity, and healthy eating. Reports of views studies were difficult to locate; could not easily be classified as "qualitative" or "quantitative"; and often failed to meet seven basic methodological reporting standards used in a newly developed quality assessment tool. Synthesising views studies required the adaptation of qualitative analysis techniques. The benefits of bringing together views studies in a systematic way included gaining a greater breadth of perspectives and a deeper understanding of public health issues from the point of view of those targeted by interventions. A systematic approach also aided reflection on study methods that may distort, misrepresent, or fail to pick up people's views. This methodology is likely to create greater opportunities for people's own perspectives and experiences to inform policies to promote their health.
Harden, A.; Garcia, J.; Oliver, S.; Rees, R.; Shepherd, J.; Brunton, G.; Oakley, A.
2004-01-01
Methods for systematic reviews are well developed for trials, but not for non-experimental or qualitative research. This paper describes the methods developed for reviewing research on people's perspectives and experiences ("views" studies) alongside trials within a series of reviews on young people's mental health, physical activity, and healthy eating. Reports of views studies were difficult to locate; could not easily be classified as "qualitative" or "quantitative"; and often failed to meet seven basic methodological reporting standards used in a newly developed quality assessment tool. Synthesising views studies required the adaptation of qualitative analysis techniques. The benefits of bringing together views studies in a systematic way included gaining a greater breadth of perspectives and a deeper understanding of public health issues from the point of view of those targeted by interventions. A systematic approach also aided reflection on study methods that may distort, misrepresent, or fail to pick up people's views. This methodology is likely to create greater opportunities for people's own perspectives and experiences to inform policies to promote their health. PMID:15310807
Rating methodological quality: toward improved assessment and investigation.
Moyer, Anne; Finney, John W
2005-01-01
Assessing methodological quality is considered essential in deciding what investigations to include in research syntheses and in detecting potential sources of bias in meta-analytic results. Quality assessment is also useful in characterizing the strengths and limitations of the research in an area of study. Although numerous instruments to measure research quality have been developed, they have lacked empirically supported components. In addition, different summary quality scales have yielded different findings when they were used to weight treatment effect estimates for the same body of research. Suggestions for developing improved quality instruments include: distinguishing distinct domains of quality, such as internal validity, external validity, the completeness of the study report, and adherence to ethical practices; focusing on individual aspects, rather than domains of quality; and focusing on empirically verified criteria. Other ways to facilitate the constructive use of quality assessment are to improve and standardize the reporting of research investigations, so that the quality of studies can be more equitably and thoroughly compared, and to identify optimal methods for incorporating study quality ratings into meta-analyses.
Methodological, technical, and ethical issues of a computerized data system.
Rice, C A; Godkin, M A; Catlin, R J
1980-06-01
This report examines some methodological, technical, and ethical issues which need to be addressed in designing and implementing a valid and reliable computerized clinical data base. The report focuses on the data collection system used by four residency based family health centers, affiliated with the University of Massachusetts Medical Center. It is suggested that data reliability and validity can be maximized by: (1) standardizing encounter forms at affiliated health centers to eliminate recording biases and ensure data comparability; (2) using forms with a diagnosis checklist to reduce coding errors and increase the number of diagnoses recorded per encounter; (3) developing uniform diagnostic criteria; (4) identifying sources of error, including discrepancies of clinical data as recorded in medical records, encounter forms, and the computer; and (5) improving provider cooperation in recording data by distributing data summaries which reinforce the data's applicability to service provision. Potential applications of the data for research purposes are restricted by personnel and computer costs, confidentiality considerations, programming related issues, and, most importantly, health center priorities, largely focused on patient care, not research.
Simpson, I.; Durodie, J.; Knott, S.; Shea, B.; Wilson, J.; Machka, K.
1998-01-01
Amoxicillin-clavulanate (Augmentin), as a combination of two active agents, poses extra challenges over single agents in establishing clinically relevant breakpoints for in vitro susceptibility tests. Hence, reported differences in amoxicillin-clavulanate percent susceptibilities among Escherichia coli isolates may reflect localized resistance problems and/or methodological differences in susceptibility testing and breakpoint criteria. The objectives of the present study were to determine the effects of (i) methodology, e.g., those of the National Committee for Clinical Laboratory Standards (NCCLS) and the Deutsche Industrie Norm-Medizinische Mikrobiologie (DIN), (ii) country of origin (Spain, France, and Germany), and (iii) site of infection (urinary tract, intra-abdominal sepsis, or other site[s]) upon the incidence of susceptibility to amoxicillin-clavulanate in 185 clinical isolates of E. coli. Cefuroxime and cefotaxime were included for comparison. The use of the NCCLS methodology resulted in a different distribution of amoxicillin-clavulanate MICs from that obtained with the DIN methodology, a difference highlighted by the finding of 10% more strains within the 8- to 32-μg/ml MIC range. This difference reflects the differing amounts of clavulanic acid present. NCCLS and DIN methodologies also produced different MIC distributions for cefotaxime but not for cefuroxime. Implementation of NCCLS and DIN breakpoints produced markedly different incidences of strains that were found to be susceptible, intermediate, or resistant to amoxicillin-clavulanate. A total of 86.5% of strains were found to be susceptible to amoxicillin-clavulanate by the NCCLS methodology, whereas only 43.8% were found to be susceptible by the DIN methodology. Similarly, 4.3% of the strains were found to be resistant by NCCLS guidelines compared to 21.1% by the DIN guidelines. The use of DIN breakpoints resulted in a fivefold-higher incidence of strains categorized as resistant to cefuroxime.
There were no marked differences due to country of origin upon the MIC distributions for amoxicillin-clavulanate, cefuroxime, or cefotaxime, as determined with the NCCLS guidelines. Isolates from urinary tract and intra-abdominal infections were generally more resistant to amoxicillin-clavulanate than were isolates from other sites of infection. PMID:9574706
Gabler, Nicole B; Duan, Naihua; Raneses, Eli; Suttner, Leah; Ciarametaro, Michael; Cooney, Elizabeth; Dubois, Robert W; Halpern, Scott D; Kravitz, Richard L
2016-07-16
When subgroup analyses are not correctly analyzed and reported, incorrect conclusions may be drawn, and inappropriate treatments provided. Despite the increased recognition of the importance of subgroup analysis, little information exists regarding the prevalence, appropriateness, and study characteristics that influence subgroup analysis. The objective of this study is to determine (1) if the use of subgroup analyses and multivariable risk indices has increased, (2) whether statistical methodology has improved over time, and (3) which study characteristics predict subgroup analysis. We randomly selected randomized controlled trials (RCTs) from five high-impact general medical journals during three time periods. Data from these articles were abstracted in duplicate using standard forms and a standard protocol. Subgroup analysis was defined as reporting any subgroup effect. Appropriate methods for subgroup analysis included a formal test for heterogeneity or interaction across treatment-by-covariate groups. We used logistic regression to determine the variables significantly associated with any subgroup analysis or, among RCTs reporting subgroup analyses, using appropriate methodology. The final sample of 416 articles reported 437 RCTs, of which 270 (62 %) reported subgroup analysis. Among these, 185 (69 %) used appropriate methods to conduct such analyses. Subgroup analysis was reported in 62, 55, and 67 % of the articles from 2007, 2010, and 2013, respectively. The percentage using appropriate methods decreased over the three time points from 77 % in 2007 to 63 % in 2013 (p < 0.05). Significant predictors of reporting subgroup analysis included industry funding (OR 1.94 (95 % CI 1.17, 3.21)), sample size (OR 1.98 per quintile (1.64, 2.40)), and a significant primary outcome (OR 0.55 (0.33, 0.92)). The use of appropriate methods to conduct subgroup analysis decreased by year (OR 0.88 (0.76, 1.00)) and was less common with industry funding (OR 0.35 (0.18, 0.70)).
Only 33 (18 %) of the RCTs examined subgroup effects using a multivariable risk index. While we found no significant increase in the reporting of subgroup analysis over time, our results show a significant decrease in the reporting of subgroup analyses using appropriate methods during recent years. Industry-sponsored trials may more commonly report subgroup analyses, but without utilizing appropriate methods. Suboptimal reporting of subgroup effects may impact optimal physician-patient decision-making.
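The "appropriate methods" criterion in the abstract above (a formal test for heterogeneity or interaction across treatment-by-covariate groups) can be illustrated with a minimal, self-contained sketch: a z-test comparing subgroup-specific log odds ratios. All event counts here are hypothetical, and this is one common textbook form of an interaction test, not the method of any specific trial in the review.

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio and its standard error for a 2x2 table
    (a, b = events / non-events on treatment; c, d = on control)."""
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return lor, se

def interaction_z(table1, table2):
    """z statistic testing whether the treatment effect (log OR)
    differs between two subgroups."""
    l1, s1 = log_or(*table1)
    l2, s2 = log_or(*table2)
    return (l1 - l2) / math.sqrt(s1**2 + s2**2)

# Hypothetical event counts for two subgroups of a single trial
z = interaction_z((30, 70, 50, 50), (40, 60, 45, 55))
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p
print(round(z, 2), round(p, 3))  # -1.56 0.119
```

A non-significant interaction p-value, as here, is exactly the situation in which reporting a subgroup effect without the formal test would count as inappropriate methodology in the study's terms.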
Bueno de Souza, Roberta Oliveira; Marcon, Liliane de Faria; Arruda, Alex Sandro Faria de; Pontes Junior, Francisco Luciano; Melo, Ruth Caldeira de
2018-06-01
The present meta-analysis aimed to examine evidence from randomized controlled trials to determine the effects of mat Pilates on measures of physical functional performance in the older population. A search was conducted in the MEDLINE/PubMed, Scopus, Scielo, and PEDro databases between February and March 2017. Only randomized controlled trials that were written in English, included subjects aged 60 yrs or older who used mat Pilates exercises, included a comparison (control) group, and reported performance-based measures of physical function (balance, flexibility, muscle strength, and cardiorespiratory fitness) were included. The methodological quality of the studies was analyzed according to the PEDro scale and the best-evidence synthesis. The meta-analysis was conducted with the Review Manager 5.3 software. The search retrieved 518 articles, nine of which fulfilled the inclusion criteria. High methodological quality was found in five of these studies. Meta-analysis indicated a large effect of mat Pilates on dynamic balance (standardized mean difference = 1.10, 95% confidence interval = 0.29-1.90), muscle strength (standardized mean difference = 1.13, 95% confidence interval = 0.30-1.96), flexibility (standardized mean difference = 1.22, 95% confidence interval = 0.39-2.04), and cardiorespiratory fitness (standardized mean difference = 1.48, 95% confidence interval = 0.42-2.54) of elderly subjects. There is evidence that mat Pilates improves dynamic balance, lower limb strength, hip and lower back flexibility, and cardiovascular endurance in elderly individuals. Furthermore, high-quality studies are necessary to clarify the effects of mat Pilates on other physical functional measurements among older adults.
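For readers unfamiliar with the standardized mean difference (SMD) estimates quoted in the abstract above, the following sketch shows how an SMD and an inverse-variance fixed-effect pooled estimate with a 95% CI are computed. The trial data are invented for illustration, and Review Manager's actual computation differs in detail (it applies a small-sample correction and, where appropriate, random-effects weighting).

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d with pooled SD)
    and its approximate sampling variance."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var

def pool_fixed(effects):
    """Inverse-variance fixed-effect pooled SMD with a 95% CI."""
    weights = [1 / v for _, v in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical balance scores (mean, SD, n per arm) from three invented trials
studies = [smd(48.0, 6.0, 30, 42.0, 6.5, 30),
           smd(50.0, 5.5, 25, 46.0, 6.0, 26),
           smd(47.5, 7.0, 40, 44.0, 7.2, 38)]
est, (lo, hi) = pool_fixed(studies)
```

Because each trial measures balance on its own scale, dividing by the pooled SD puts all trials on a common, unitless scale before pooling, which is why the abstract reports SMDs rather than raw mean differences.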
Whale, Katie; Fish, Daniel; Fayers, Peter; Cafaro, Valentina; Pusic, Andrea; Blazeby, Jane M.; Efficace, Fabio
2016-01-01
Purpose Randomised controlled trials (RCTs) are the most robust study design measuring outcomes of colorectal cancer (CRC) treatments, but to influence clinical practice trial design and reporting of patient-reported outcomes (PROs) must be of high quality. Objectives of this study were as follows: to examine the quality of PRO reporting in RCTs of CRC treatment; to assess the availability of robust data to inform clinical decision-making; and to investigate whether quality of reporting improved over time. Methods A systematic review from January 2004–February 2012 identified RCTs of CRC treatment describing PROs. Relevant abstracts were screened and manuscripts obtained. Methodological quality was assessed using International Society for Quality of Life Research—patient-reported outcome reporting standards. Changes in reporting quality over time were established by comparison with previous data, and risk of bias was assessed with the Cochrane risk of bias tool. Results Sixty-six RCTs were identified: seven studies (10 %) reported survival benefit favouring the experimental treatment, 35 trials (53 %) identified differences in PROs between treatment groups, and the clinical significance of these differences was discussed in 19 studies (29 %). The most commonly reported treatment type was chemotherapy (n = 45; 68 %). Improvements over time in key methodological issues including the documentation of missing data and the discussion of the clinical significance of PROs were found. Thirteen trials (20 %) had high-quality reporting. Conclusions Whilst improvements in PRO quality reporting over time were found, several recent studies still fail to robustly inform clinical practice. Quality of PRO reporting must continue to improve to maximise the clinical impact of PRO findings. PMID:25910987
Density matters: Review of approaches to setting organism-based ballast water discharge standards
Lee II; Frazier; Ruiz
2010-01-01
As part of their effort to develop national ballast water discharge standards under NPDES permitting, the Office of Water requested that WED scientists identify and review existing approaches to generating organism-based discharge standards for ballast water. Six potential approaches were identified and the utility and uncertainties of each approach were evaluated. During the process of reviewing the existing approaches, the WED scientists, in conjunction with scientists at the USGS and Smithsonian Institution, developed a new approach (per capita invasion probability or "PCIP") that addresses many of the limitations of the previous methodologies. The PCIP approach allows risk managers to generate quantitative discharge standards using historical invasion rates, ballast water discharge volumes, and ballast water organism concentrations. The statistical power of sampling ballast water, both for the validation of ballast water treatment systems and for ship-board compliance monitoring, is limited with the existing methods, though it should be possible to obtain sufficient samples during treatment validation. The report will go to a National Academy of Sciences expert panel that will use it in their evaluation of approaches to developing ballast water discharge standards for the Office of Water.
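The PCIP idea as described in the abstract above (a historical invasion rate spread over the estimated number of organisms discharged) can be sketched as follows. This is a minimal reading of the abstract under our own assumptions, not the report's actual formulation: the function names and every number are hypothetical.

```python
def per_capita_invasion_probability(invasions_per_year,
                                    discharge_m3_per_year,
                                    organisms_per_m3):
    """Historical invasions divided by the estimated number of
    organisms discharged per year (all inputs illustrative)."""
    return invasions_per_year / (discharge_m3_per_year * organisms_per_m3)

def concentration_standard(pcip, target_invasions_per_year,
                           discharge_m3_per_year):
    """Organism concentration (per m3) consistent with a target
    invasion rate, given the per capita invasion probability."""
    return target_invasions_per_year / (pcip * discharge_m3_per_year)

# Hypothetical numbers for a single region:
# 2 historical invasions/yr, 1e8 m3/yr discharged, 1e4 organisms/m3
pcip = per_capita_invasion_probability(2.0, 1e8, 1e4)
limit = concentration_standard(pcip, 0.1, 1e8)  # target: 0.1 invasions/yr
```

Inverting the calculation, as the second function does, is what turns an observed invasion history into a forward-looking organism-based discharge standard.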
Bellido-Pérez, Mercedes; Monforte-Royo, Cristina; Tomás-Sábado, Joaquín; Porta-Sales, Josep; Balaguer, Albert
2017-06-01
Patients with advanced conditions may present a wish to hasten death. Assessing this wish is complex due to the nature of the phenomenon and the difficulty of conceptualising it. To identify and analyse existing instruments for assessing the wish to hasten death and to rate their reported psychometric properties. Systematic review based on PRISMA guidelines. The COnsensus-based Standards for the selection of health Measurement INstruments checklist was used to evaluate the methodological quality of validation studies and the measurement properties of the instrument described. The CINAHL, PsycINFO, Pubmed and Web of Science databases were searched from inception to November 2015. A total of 50 articles involving assessment of the wish to hasten death were included. Eight concerned instrument validation and were evaluated using COnsensus-based Standards for the selection of health Measurement INstruments criteria. They reported data for between two and seven measurement properties, with ratings between fair and excellent. Of the seven instruments identified, the Desire for Death Rating Scale or the Schedule of Attitudes toward Hastened Death feature in 48 of the 50 articles. The Schedule of Attitudes toward Hastened Death is the most widely used and is the instrument whose psychometric properties have been most often analysed. Versions of the Schedule of Attitudes toward Hastened Death are available in five languages other than the original English. This systematic review has analysed existing instruments for assessing the wish to hasten death. It has also explored the methodological quality of studies that have examined the measurement properties of these instruments and offers ratings of the reported properties. These results will be useful to clinicians and researchers with an interest in a phenomenon of considerable relevance to advanced patients.
Bellido-Pérez, Mercedes; Monforte-Royo, Cristina; Tomás-Sábado, Joaquín; Porta-Sales, Josep; Balaguer, Albert
2016-01-01
Background: Patients with advanced conditions may present a wish to hasten death. Assessing this wish is complex due to the nature of the phenomenon and the difficulty of conceptualising it. Aim: To identify and analyse existing instruments for assessing the wish to hasten death and to rate their reported psychometric properties. Design: Systematic review based on PRISMA guidelines. The COnsensus-based Standards for the selection of health Measurement INstruments checklist was used to evaluate the methodological quality of validation studies and the measurement properties of the instrument described. Data sources: The CINAHL, PsycINFO, Pubmed and Web of Science databases were searched from inception to November 2015. Results: A total of 50 articles involving assessment of the wish to hasten death were included. Eight concerned instrument validation and were evaluated using COnsensus-based Standards for the selection of health Measurement INstruments criteria. They reported data for between two and seven measurement properties, with ratings between fair and excellent. Of the seven instruments identified, the Desire for Death Rating Scale or the Schedule of Attitudes toward Hastened Death feature in 48 of the 50 articles. The Schedule of Attitudes toward Hastened Death is the most widely used and is the instrument whose psychometric properties have been most often analysed. Versions of the Schedule of Attitudes toward Hastened Death are available in five languages other than the original English. Conclusion: This systematic review has analysed existing instruments for assessing the wish to hasten death. It has also explored the methodological quality of studies that have examined the measurement properties of these instruments and offers ratings of the reported properties. These results will be useful to clinicians and researchers with an interest in a phenomenon of considerable relevance to advanced patients. PMID:28124578
Hoskin, Jordan D; Miyatani, Masae; Craven, B Catharine
2017-03-30
Carotid intima-media thickness (cIMT) may be used increasingly as a cardiovascular disease (CVD) screening tool in individuals with spinal cord injury (SCI) as other routine invasive diagnostic tests are often unfeasible. However, variation in cIMT acquisition and analysis methods is an issue in the current published literature. The growth of the field is dependent on cIMT quality acquisition and analysis to ensure accurate reporting of CVD risk. The purpose of this study is to evaluate the quality of the reported methodology used to collect cIMT values in SCI. Data from 12 studies, which measured cIMT in individuals with SCI, were identified from the Medline, Embase and CINAHL databases. The quality of the reported methodologies was scored based on adherence to cIMT methodological guidelines abstracted from two consensus papers. Five studies were scored as 'moderate quality' in methodological reporting, having specified 9 to 11 of 15 quality reporting criteria. The remaining seven studies were scored as 'low quality', having reported fewer than 9 of 15 quality reporting criteria. No study had methodological reporting that was scored as 'high quality'. The overall reporting of quality methodology was poor in the published SCI literature. A greater adherence to current methodological guidelines is needed to advance the field of cIMT in SCI. Further research is necessary to refine cIMT acquisition and analysis guidelines to aid authors designing research and journals in screening manuscripts for publication.
Bartella, Lucia; Mazzotti, Fabio; Napoli, Anna; Sindona, Giovanni; Di Donna, Leonardo
2018-03-01
A rapid and reliable method to assay the total amount of tyrosol and hydroxytyrosol derivatives in extra virgin olive oil has been developed. The methodology intends to establish the nutritional quality of this edible oil, addressing recent international health claim legislation (European Commission Regulation No. 432/2012) and raising the classification of extra virgin olive oil to the status of a nutraceutical. The method is based on the use of high-performance liquid chromatography coupled with tandem mass spectrometry and labeled internal standards, preceded by a fast hydrolysis reaction step performed with the aid of microwaves under acid conditions. The overall process is particularly time saving, much shorter than any methodology previously reported. The developed approach combines rapidity with accuracy, with recovery values found near 100% on different fortified vegetable oils, while the RSD% values, calculated from repeatability and reproducibility experiments, are in all cases under 7%. Graphical abstract Schematic of the methodology applied to the determination of tyrosol and hydroxytyrosol ester conjugates.
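The labeled-internal-standard quantification underlying methods like the one above can be illustrated generically. This is a textbook isotope-dilution calculation, not the authors' validated procedure: the peak areas, the spiked amount, and the function name are all hypothetical.

```python
def quantify(area_analyte, area_istd, amount_istd_ng, response_factor=1.0):
    """Analyte amount from the analyte / labeled-internal-standard
    peak-area ratio, scaled by the spiked internal-standard amount.
    response_factor corrects for unequal instrument response (from
    calibration); 1.0 assumes identical response, as is approximately
    true for an isotopically labeled analogue."""
    return (area_analyte / area_istd) * amount_istd_ng / response_factor

# Hypothetical peak areas after hydrolysis: analyte area is twice the
# internal-standard area, with 50 ng of labeled standard spiked
amount_ng = quantify(2.4e6, 1.2e6, 50.0)  # 100.0 ng
```

Because the labeled standard experiences the same hydrolysis, extraction, and ionization losses as the analyte, the area ratio cancels those losses, which is what makes the shortened microwave workflow compatible with accurate quantification.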
Monitoring Detrusor Oxygenation and Hemodynamics Noninvasively during Dysfunctional Voiding
Macnab, Andrew J.; Stothers, Lynn S.; Shadgan, Babak
2012-01-01
The current literature indicates that lower urinary tract symptoms (LUTSs) related to benign prostatic hyperplasia (BPH) have a heterogeneous pathophysiology. Pressure flow studies (UDSs) remain the gold standard evaluation methodology for such patients. However, as the function of the detrusor muscle depends on its vasculature and perfusion, the underlying causes of LUTS likely include abnormalities of detrusor oxygenation and hemodynamics, and available treatment options include agents thought to act on the detrusor smooth muscle and/or vasculature. Hence, near infrared spectroscopy (NIRS), an established optical methodology for monitoring changes in tissue oxygenation and hemodynamics, has relevance as a means of expanding knowledge related to the pathophysiology of BPH and potential treatment options. This methodological report describes how to conduct simultaneous NIRS monitoring of detrusor oxygenation and hemodynamics during UDS, outlines the clinical implications and practical applications of NIRS, explains the principles of physiologic interpretation of NIRS voiding data, and proposes an exploratory hypothesis that the pathophysiological causes underlying LUTS include detrusor dysfunction due to an abnormal hemodynamic response or the onset of oxygen debt during voiding. PMID:23019422
Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P; Kumar, Ambuj
2012-06-01
To assess whether the reported methodological quality of randomized controlled trials (RCTs) reflects the actual methodological quality and to evaluate the association of effect size (ES) and sample size with methodological quality. Systematic review. This is a retrospective analysis of all consecutive phase III RCTs published by eight National Cancer Institute Cooperative Groups up to 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Four hundred twenty-nine RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of RCTs. The results showed no association between sample size and actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratio [RHR]: 0.94; 95% confidence interval [CI]: 0.88, 0.99) and 24% (RHR: 1.24; 95% CI: 1.05, 1.43), respectively. However, actual quality assessment showed no association between ES and methodological quality. The largest study to date shows that poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandor, Debra; Chung, Donald; Keyser, David
This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.
Methodological issues of genetic association studies.
Simundic, Ana-Maria
2010-12-01
Genetic association studies explore the association between genetic polymorphisms and a certain trait, disease or predisposition to disease. It has long been acknowledged that many genetic association studies fail to replicate their initial positive findings. This raises concern about the methodological quality of these reports. Case-control genetic association studies often suffer from various methodological flaws in study design and data analysis, and are often reported poorly. Flawed methodology and poor reporting leads to distorted results and incorrect conclusions. Many journals have adopted guidelines for reporting genetic association studies. In this review, some major methodological determinants of genetic association studies will be discussed.
[Nursing methodology applied in patients with pressure ulcers. Clinical report].
Galvez Romero, Carmen
2014-05-01
The application of functional patterns allows us to perform a systematic and deliberate nursing assessment, through which we obtain a large amount of relevant patient data in an organized way, making it easier to analyse. In our case, we used Marjory Gordon's functional health patterns and the NANDA (North American Nursing Diagnosis Association), NOC (Nursing Outcomes Classification) and NIC (Nursing Intervention Classification) taxonomies. The overall objective of this paper is to present the experience of implementing and developing nursing methodology in the care of patients with pressure ulcers. This article reports the case of a 52-year-old female who presented necrosis of the phalanges of the upper and lower limbs and underwent amputations after being hospitalized in an Intensive Care Unit. She was discharged with pressure ulcers on both heels. GENERAL ASSESSMENT: The nursing framework known as "Gordon's functional health patterns" was implemented and the affected patterns were identified. The Second Pattern (Nutritional-Metabolic) was taken as the reference, since it was the pattern that altered the rest. EVOLUTION OF THE PATIENT: The patient had a favourable evolution, with improvement in all the altered patterns. The infection symptoms disappeared and the pressure ulcers on both heels healed completely. The application of nursing methodology to the care of patients with pressure ulcers, using clinical practice guidelines, standardized procedures and assessment rating scales, improves the evaluation of results and the performance of nurses.
Payload training methodology study
NASA Technical Reports Server (NTRS)
1990-01-01
The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PTC) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept outlines the entire payload training program from initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.
Optimal assessment of parenting, or how I learned to stop worrying and love reporter disagreement.
Schofield, Thomas J; Parke, Ross D; Coltrane, Scott; Weaver, Jennifer M
2016-08-01
The purpose of this study was to examine differences and similarities across ratings of parenting by preadolescents, parents, and observers. Two hundred forty-one preadolescents rated their parents on warmth and harshness. Both mothers and fathers self-reported on these same dimensions, and observers rated each parent's warmth and harshness during a 10 min interaction task with the preadolescent. For the majority of outcomes assessed, the differences between preadolescent, parent, and observer ratings accounted for significant amounts of variance, beyond the levels accounted for by the average of their reports. A replication sample of 929 mother-child dyads provided a similar pattern of results. This methodology can help standardize the study of reporter differences, supports modeling of rater-specific variance as true score, and illustrates the benefits of collecting parenting data from multiple reporters. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Macy, Barry A.; Mirvis, Philip H.
1982-01-01
A standardized methodology for identifying, defining, and measuring work behavior and performance rather than production, and a methodology that estimates the costs and benefits of work innovation are presented for assessing organizational effectiveness and program costs versus benefits in organizational change programs. Factors in a cost-benefit…
Maintaining Equivalent Cut Scores for Small Sample Test Forms
ERIC Educational Resources Information Center
Dwyer, Andrew C.
2016-01-01
This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…
NASA Astrophysics Data System (ADS)
Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel
2013-09-01
Mexico is a country where experience in building software for satellite applications is just beginning. This is a delicate situation because in the near future we will need to develop software for SATEX-II (Mexican Experimental Satellite). SATEX-II is a project of SOMECyTA (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies, such as TSP (Team Software Process) and SCRUM, in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for SATEX-II, supported by the ESA PSS-05-0 Standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and how they could be used together with the ESA PSS-05-0 Standard. Our outcomes may, in general, be used by teams who need to build small satellites; in particular, they will be used when we build the on-board software applications for SATEX-II.
Ziomber, Agata; Surowka, Artur Dawid; Antkiewicz-Michaluk, Lucyna; Romanska, Irena; Wrobel, Pawel; Szczerbowska-Boruchowska, Magdalena
2018-03-01
Obesity is a chronic, multifactorial origin disease that has recently become one of the most frequent lifestyle disorders. Unfortunately, current obesity treatments seem to be ineffective. At present, transcranial direct current brain stimulation (tDCS) represents a promising novel treatment methodology that seems to be efficient, well-tolerated and safe for a patient. Unfortunately, the biochemical action of tDCS remains unknown, which prevents its widespread use in the clinical arena, although neurobiochemical changes in brain signaling and metal metabolism are frequently reported. Therefore, our research aimed at exploring the biochemical response to tDCS in situ, in the brain areas triggering feeding behavior in obese animals. The objective was to propose a novel neurochemical (serotoninergic and dopaminergic signaling) and trace metal analysis of Fe, Cu and Zn. In doing so, we used energy-dispersive X-ray fluorescence (EDXRF) and high-performance liquid chromatography (HPLC). Anodal-type stimulation (atDCS) of the right frontal cortex was utilized to down-regulate food intake and body weight gain in obese rats. EDXRF was coupled with the external standard method in order to quantify the chemical elements within appetite-triggering brain areas. Major dopamine metabolites were assessed in the brains, based on the HPLC assay utilizing the external standard assay. Our study confirms that elemental analysis by EDXRF and brain metabolite assay by HPLC can be considered as a useful tool for the in situ investigation of the interplay between neurochemical and Fe/Cu/Zn metabolism in the brain upon atDCS. With this methodology, an increase in both Cu and Zn in the satiety center of the stimulated group could be reported. In turn, the most significant neurochemical changes involved dopaminergic and serotoninergic signaling in the brain reward system.
Hrovatin, Karin; Kunej, Tanja
2018-01-01
Formerly, sex was determined by observation, which is not always feasible. Nowadays, genetic methods prevail due to their accuracy, simplicity, low cost, and time-efficiency. However, there is no comprehensive review enabling an overview and development of the field. The studies are heterogeneous, lacking a standardized reporting strategy. Therefore, our aim was to collect genetic sexing assays for mammals and assemble them in a catalogue with unified terminology. Publications were extracted from online databases using key words such as sexing and molecular. The collected data were supplemented with species and gene IDs and the type of sex-specific sequence variant (SSSV). We developed a catalogue and graphic presentation of diagnostic tests for molecular sex determination of mammals, based on 58 papers published from 2/1991 to 10/2016. The catalogue consists of five categories: species, genes, SSSVs, methods, and references. Based on the analysis of published literature, we propose minimal requirements for reporting, consisting of: species scientific name and ID, genetic sequence with name and ID, SSSV, methodology, genomic coordinates (e.g., restriction sites, SSSVs), amplification system, and description of detected amplicon and controls. The present study summarizes vast knowledge that has up to now been scattered across databases, representing the first step toward standardization regarding molecular sexing, enabling a better overview of existing tests and facilitating planned designs of novel tests. The project is ongoing; collecting additional publications, optimizing field development, and standardizing data presentation are needed.
Arefian, Habibollah; Vogel, Monique; Kwetkat, Anja; Hartmann, Michael
2016-01-01
This systematic review sought to assess the costs and benefits of interventions preventing hospital-acquired infections and to evaluate methodological and reporting quality. We systematically searched Medline via PubMed and the National Health Service Economic Evaluation Database from 2009 to 2014. We included quasi-experimental and randomized trials published in English or German evaluating the economic impact of interventions preventing the four most frequent hospital-acquired infections (urinary tract infections, surgical wound infections, pneumonia, and primary bloodstream infections). Characteristics and results of the included articles were extracted using a standardized data collection form. Study and reporting quality were evaluated using SIGN and CHEERS checklists. All costs were adjusted to 2013 US$. Savings-to-cost ratios and difference values with interquartile ranges (IQRs) per month were calculated, and the effects of study characteristics on the cost-benefit results were analyzed. Our search returned 2067 articles, of which 27 met the inclusion criteria. The median savings-to-cost ratio across all studies reporting both costs and savings values was US $7.0 (IQR 4.2-30.9), and the median net global saving was US $13,179 (IQR 5,106-65,850) per month. The studies' reporting quality was low. Only 14 articles reported more than half of CHEERS items appropriately. Similarly, an assessment of methodological quality found that only four studies (14.8%) were considered high quality. Prevention programs for hospital-acquired infections have very positive cost-benefit ratios. Improved reporting quality in health economics publications is required.
Craven, M R; Kia, L; O'Dwyer, L C; Stern, E; Taft, T H; Keefer, L
2018-03-01
Health care disparities affecting the care of multiple disease groups are of growing concern internationally. Research guidelines, governmental institutions, and scientific journals have attempted to minimize disparities through policies regarding the collection and reporting of racial/ethnic data. One area where shortcomings remain is in gastroesophageal reflux disease (GERD). This systematic review, which adheres to the PRISMA statement, focuses on characterizing existing methodological weaknesses in research focusing on studies regarding the assessment, prevalence, treatment, and outcomes of GERD patients. Search terms included GERD and typical symptoms of GERD in ethnic groups or minorities. We reviewed 62 articles. The majority of studies did not report the race/ethnicity of all participants, and among those who did, very few followed accepted guidelines. While there were diverse participants, there was also diversity in the manner in which groups were labeled, making comparisons difficult. There appeared to be a disparity with respect to countries reporting race/ethnicity, with certain countries more likely to report this variable. Samples overwhelmingly consisted of the study country's majority population. The majority of studies justified the use of race/ethnicity as a study variable and investigated conceptually related factors such as socioeconomic status and environment. Yet, many studies wrote as if race/ethnicity reflected biological differences. Despite recommendations, it appears that GERD researchers around the world struggle with the appropriate and standard way to include, collect, report, and discuss race/ethnicity. Recommendations on ways to address these issues are included with the goal of preventing and identifying health care disparities.
Sjögren, P; Ordell, S; Halling, A
2003-12-01
The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence was evaluated. Reporting of predetermined items relating to methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located; 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.
Brown, David; Cuccurullo, Sara; Lee, Joseph; Petagna, Ann; Strax, Thomas
2008-08-01
This project sought to create an educational module including evaluation methodology to instruct physical medicine and rehabilitation (PM&R) residents in electrodiagnostic evaluation of patients with neuromuscular problems, and to verify acquired competencies in those electrodiagnostic skills through objective evaluation methodology. Sixteen residents were trained by board-certified neuromuscular and electrodiagnostic medicine physicians through technical training, lectures, and review of self-assessment examination (SAE) concepts from the American Academy of Physical Medicine & Rehabilitation syllabus provided in the Archives of Physical Medicine and Rehabilitation. After delivery of the educational module, knowledge acquisition and skill attainment were measured in (1) clinical skill in diagnostic procedures via a procedure checklist, (2) diagnosis and ability to design a patient-care management plan via chart simulated recall (CSR) exams, (3) physician/patient interaction via patient surveys, (4) physician/staff interaction via 360-degree global ratings, and (5) ability to write a comprehensive patient-care report and to document a patient-care management plan in accordance with Medicare guidelines via written patient reports. Assessment tools developed for this program address the basic competencies outlined by the Accreditation Council for Graduate Medical Education (ACGME). To test the success of the standardized educational module, data were collected on an ongoing basis. Objective measures compared resident SAE scores in electrodiagnostics (EDX) before and after institution of the comprehensive EDX competency module in a PM&R residency program. Fifteen of 16 residents (94%) successfully demonstrated proficiency in every segment of the evaluation element of the educational module by the end of their PGY-4 electrodiagnostic rotation. The resident who did not initially pass underwent remedial coursework and passed on the second attempt. 
Furthermore, the residents' proficiency as demonstrated by the evaluation after implementation of the standardized educational module positively correlated with an increase in resident SAE scores in EDX compared with resident scores before implementation of the educational module. Resident proficiency in EDX medicine skills and knowledge was objectively verified after completion of the standardized educational module. Validation of the assessment tools is evidenced by collected data correlating with significantly improved SAE scores and American Association of Neuromuscular and Electrodiagnostic Medicine (AANEM) exam scores, as outlined in the results section. In addition, the clinical development tool (procedure checklist) was validated by residents being individually observed performing skills and deemed competent by an AANEM-certified physician. The standardized educational module and evaluation methodology provide a potential framework for the definition of baseline competency in the clinical skill area of EDX.
NASA Astrophysics Data System (ADS)
Sallaberry, Fabienne; Fernández-García, Aránzazu; Lüpfert, Eckhard; Morales, Angel; Vicente, Gema San; Sutter, Florian
2017-06-01
Precise knowledge of the optical properties of the components used in the solar field of concentrating solar thermal power plants is primordial to ensure their optimum power production. Those properties are measured and evaluated by different techniques and equipment, in laboratory conditions and/or in the field. Standards for such measurements and international consensus for the appropriate techniques are in preparation. The reference materials used as a standard for the calibration of the equipment are under discussion. This paper summarizes current testing methodologies and guidelines for the characterization of optical properties of solar mirrors and absorbers.
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
Measuring attitudes towards the dying process: A systematic review of tools.
Groebe, Bernadette; Strupp, Julia; Eisenmann, Yvonne; Schmidt, Holger; Schlomann, Anna; Rietz, Christian; Voltz, Raymond
2018-04-01
At the end of life, anxious attitudes concerning the dying process are common in patients in Palliative Care. Measurement tools can identify vulnerabilities, resources and the need for subsequent treatment to relieve suffering and support well-being. To systematically review available tools measuring attitudes towards dying, their operationalization, the method of measurement and the methodological quality including generalizability to different contexts. Systematic review according to the PRISMA Statement. Methodological quality of tools assessed by standardized review criteria. MEDLINE, PsycINFO, PsyndexTests and the Health and Psychosocial Instruments were searched from their inception to April 2017. A total of 94 identified studies reported the development and/or validation of 44 tools. Of these, 37 were questionnaires and 7 were alternative measurement methods (e.g. projective measures). In 34 of 37 questionnaires, the emotional evaluation (e.g. anxiety) towards dying is measured. Dying is operationalized in general items (n = 20), in several specific aspects of dying (n = 34) and as dying of others (n = 14). Methodological quality of tools was reported inconsistently. Nine tools reported good internal consistency. Of 37 tools, 4 were validated in a clinical sample (e.g. terminal cancer; Huntington disease), indicating questionable generalizability to clinical contexts for most tools. Many tools exist to measure attitudes towards the dying process using different endpoints. This overview can serve as decision framework on which tool to apply in which contexts. For clinical application, only few tools were available. Further validation of existing tools and potential alternative methods in various populations is needed.
Maru, Shoko; Byrnes, Joshua; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A
2015-01-01
Substantial variation in economic analyses of cardiovascular disease management programs hinders not only the proper assessment of cost-effectiveness but also the identification of heterogeneity of interest such as patient characteristics. The authors discuss the impact of reporting and methodological variation on the cost-effectiveness of cardiovascular disease management programs by introducing issues that could lead to different policy or clinical decisions, followed by the challenges associated with net intervention effects and generalizability. The authors conclude with practical suggestions to mitigate the identified issues. Improved transparency through standardized reporting practice is the first step to advance beyond one-off experiments (limited applicability outside the study itself). Transparent reporting is a prerequisite for rigorous cost-effectiveness analyses that provide unambiguous implications for practice: what type of program works for whom and how.
Pinnock, Hilary; Epiphaniou, Eleni; Sheikh, Aziz; Griffiths, Chris; Eldridge, Sandra; Craig, Peter; Taylor, Stephanie J C
2015-03-30
Dissemination and implementation of health care interventions are currently hampered by the variable quality of reporting of implementation research. Reporting of other study types has been improved by the introduction of reporting standards (e.g. CONSORT). We are therefore developing guidelines for reporting implementation studies (StaRI). Using established methodology for developing health research reporting guidelines, we systematically reviewed the literature to generate items for a checklist of reporting standards. We then recruited an international, multidisciplinary panel for an e-Delphi consensus-building exercise which comprised an initial open round to revise/suggest a list of potential items for scoring in the subsequent two scoring rounds (scale 1 to 9). Consensus was defined a priori as 80% agreement with the priority scores of 7, 8, or 9. We identified eight papers from the literature review from which we derived 36 potential items. We recruited 23 experts to the e-Delphi panel. Open round comments resulted in revisions, and 47 items went forward to the scoring rounds. Thirty-five items achieved consensus: 19 achieved 100% agreement. Prioritised items addressed the need to: provide an evidence-based justification for implementation; describe the setting, professional/service requirements, eligible population and intervention in detail; measure process and clinical outcomes at population level (using routine data); report impact on health care resources; describe local adaptations to the implementation strategy and describe barriers/facilitators. Over-arching themes from the free-text comments included balancing the need for detailed descriptions of interventions with publishing constraints, addressing the dual aims of reporting on the process of implementation and effectiveness of the intervention and monitoring fidelity to an intervention whilst encouraging adaptation to suit diverse local contexts. 
We have identified priority items for reporting implementation studies and key issues for further discussion. An international, multidisciplinary workshop, where participants will debate the issues raised, clarify specific items and develop StaRI standards that fit within the suite of EQUATOR reporting guidelines, is planned. The protocol is registered with Equator: http://www.equator-network.org/library/reporting-guidelines-under-development/#17.
Methodology or method? A critical review of qualitative case study reports.
Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia
2014-01-01
Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.
NASA Technical Reports Server (NTRS)
Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.
2006-01-01
Splitting, ultimate failure load and the damage path in center-notched composite specimens subjected to in-plane tension loading are predicted using progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used in determining intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through user-written subroutines "VUMAT" and "USDFLD" respectively. A 2-D finite element model is used for predicting the intra-laminar damages. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard codes show good agreement with experimental results. The importance of modeling delamination in progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.
Hauben, Manfred; Hung, Eric Y.
2016-01-01
Introduction: There is an interest in methodologies to expeditiously detect credible signals of drug-induced pancreatitis. An example is the reported signal of pancreatitis with rasburicase emerging from a study [the ‘index publication’ (IP)] combining quantitative signal detection findings from a spontaneous reporting system (SRS) and electronic health records (EHRs). The signal was reportedly supported by a clinical review with a case series manuscript in progress. The reported signal is noteworthy, being initially classified as a false-positive finding for the chosen reference standard, but reclassified as a ‘clinically supported’ signal. Objective: This paper has dual objectives: to revisit the signal of rasburicase and acute pancreatitis and extend the original analysis via reexamination of its findings, in light of more contemporary data; and to motivate discussions on key issues in signal detection and evaluation, including recent findings from a major international pharmacovigilance research initiative. Methodology: We used the same methodology as the IP, including the same disproportionality analysis software/dataset for calculating observed to expected reporting frequencies (O/Es), Medical Dictionary for Regulatory Activities Preferred Term, and O/E metric/threshold combination defining a signal of disproportionate reporting. Baseline analysis results prompted supplementary analyses using alternative analytical choices. We performed a comprehensive literature search to identify additional published case reports of rasburicase and pancreatitis. Results: We could not replicate positive findings (e.g. a signal or statistic of disproportionate reporting) from the SRS data using the same algorithm, software, dataset and vendor specified in the IP. The reporting association was statistically highlighted in default and supplemental analysis when more sensitive forms of disproportionality analysis were used. 
Two of three reports in the FAERS database were assessed as likely duplicate reports. We did not identify any additional reports in FAERS corresponding to the three cases identified in the IP using EHRs. We did not identify additional published reports of pancreatitis associated with rasburicase. Discussion: Our exercise stimulated interesting discussions of key points in signal detection and evaluation, including causality assessment, signal detection algorithm performance, pharmacovigilance terminology, duplicate reporting, mechanisms for communicating signals, the structure of the FAERS database, and recent results from a major international pharmacovigilance research initiative. PMID:27298720
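The disproportionality analysis referenced above compares observed with expected reporting frequencies in a spontaneous reporting system. As a minimal sketch (illustrative counts only, not the IP's actual FAERS data, and not necessarily the IP's chosen metric), one widely used signal-of-disproportionate-reporting statistic, the proportional reporting ratio (PRR), can be computed from a 2×2 contingency table:

```python
import math

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of report counts:
    a: drug of interest, event of interest   b: drug of interest, other events
    c: other drugs, event of interest        d: other drugs, other events
    Returns (PRR, lower bound of the 95% CI on the log scale)."""
    prr_val = (a / (a + b)) / (c / (c + d))
    # Standard approximation for the standard error of ln(PRR)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(prr_val) - 1.96 * se)
    return prr_val, lower

# Illustrative counts: 3 reports of the event with the drug of interest.
value, lower = prr(a=3, b=997, c=2000, d=997000)
print(f"PRR = {value:.2f} (95% CI lower bound {lower:.2f})")
```

A common screening criterion in the literature pairs a PRR threshold with a minimum case count (e.g. PRR ≥ 2 with at least 3 cases), which is why small changes in duplicate reports, as discussed in the abstract, can flip a "signal" on or off.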
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, K. B.; Peter, A. M.
2017-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, Kyle B.; Peter, Adrian M.
2016-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. Our research focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
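The general idea behind a symbolic Markov feature space can be loosely illustrated as follows: discretize a light curve into a small alphabet of symbols and use the estimated first-order Markov transition matrix as the feature vector for classification. This is a simplified sketch of the symbolic-Markov concept only, not the authors' exact SSMM pipeline (which additionally handles irregular sampling via slotting); the bin counts, alphabet size, and signal are invented for illustration.

```python
import numpy as np

def symbolize(values, n_symbols=4):
    # Map each observation to a symbol via quantile bins (a SAX-like discretization).
    z = (values - values.mean()) / values.std()
    edges = np.quantile(z, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(z, edges)  # integers in 0..n_symbols-1

def markov_features(symbols, n_symbols=4):
    # Estimate the first-order Markov transition matrix and flatten it
    # into a fixed-length feature vector.
    counts = np.ones((n_symbols, n_symbols))  # Laplace smoothing
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return probs.ravel()

# Synthetic "light curve": a noisy sinusoid standing in for a variable star.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
light_curve = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
features = markov_features(symbolize(light_curve))
print(features.shape)  # (16,)
```

Feature vectors of this form have a fixed length regardless of the number of observations, which is what makes them convenient inputs to a standard supervised classifier.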
Stoll, S; Roelcke, V; Raspe, H
2005-07-29
The article addresses the history of evidence-based medicine in Germany. Its aim was to reconstruct the standard of clinical-therapeutic investigation in Germany at the beginning of the 20th century. A historical investigation of five important German general medical journals from 1918 to 1932 gives an overview of the situation of clinical investigation. 268 clinical trials are identified and analysed with regard to their methodological design. Heterogeneous results are found: while a few examples of sophisticated methodology exist, the design of the majority of the studies is poor. A response to the situation described can be seen in Paul Martini's book "Methodology of Therapeutic Investigation", first published in 1932. Paul Martini's biography, his criticism of the situation of clinical-therapeutic investigation of his time, the major points of his methodology, and the reception of the book in Germany and abroad are described.
Generation of an annotated reference standard for vaccine adverse event reports.
Foster, Matthew; Pandey, Abhishek; Kreimeyer, Kory; Botsis, Taxiarchis
2018-07-05
As part of a collaborative project between the US Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention for the development of a web-based natural language processing (NLP) workbench, we created a corpus of 1000 Vaccine Adverse Event Reporting System (VAERS) reports annotated for 36,726 clinical features, 13,365 temporal features, and 22,395 clinical-temporal links. This paper describes the final corpus, as well as the methodology used to create it, so that clinical NLP researchers outside the FDA can evaluate the utility of the corpus to aid their own work. The creation of this standard went through four phases: pre-training, pre-production, production-clinical feature annotation, and production-temporal annotation. The pre-production phase used a double annotation followed by adjudication strategy to refine and finalize the annotation model, while the production phases followed a single annotation strategy to maximize the number of reports in the corpus. An analysis of 30 reports randomly selected as part of a quality control assessment yielded accuracies of 0.97, 0.96, and 0.83 for clinical features, temporal features, and clinical-temporal associations, respectively, which speaks to the quality of the corpus. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wehbe-Janek, Hania; Hochhalter, Angela K; Castilla, Theresa; Jo, Chanhee
2015-02-01
Patient engagement in health care is increasingly recognized as essential for promoting the health of individuals and populations. This study pilot tested the standardized clinician (SC) methodology, a novel adaptation of standardized patient methodology, for teaching patient engagement skills for the complex health care situation of transitioning from a hospital back to home. Sixty-seven participants at heightened risk for hospitalization were randomly assigned to either a simulation exposure-only or a full-intervention group. Both groups participated in simulation scenarios with "standardized clinicians" around tasks related to hospital discharge and follow-up. The full-intervention group was also debriefed after scenario sets and learned about tools for actively participating in hospital-to-home transitions. Measures included changes in observed behaviors at baseline and follow-up and an overall program evaluation. The full-intervention group showed increases in observed tool possession (P = 0.014) and in expression of their preferences and values (P = 0.043). The simulation exposure-only group showed improvement in worksheet scores (P = 0.002) and fewer engagement skills (P = 0.021). Both groups showed a decrease in telling an SC about their hospital admission (P < 0.05). Open-ended comments from the program evaluation were largely positive. Both groups benefited from exposure to the SC intervention. Program evaluation data suggest that simulation training is feasible and may provide a useful methodology for teaching patient skills for active engagement in health care. Future studies are warranted to determine whether this methodology can be used to assess overall patient engagement and whether new patient learning transfers to health care encounters.
Gu, Huidong; Wang, Jian; Aubry, Anne-Françoise; Jiang, Hao; Zeng, Jianing; Easter, John; Wang, Jun-sheng; Dockens, Randy; Bifano, Marc; Burrell, Richard; Arnold, Mark E
2012-06-05
A methodology for the accurate calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays and its application in supporting microdose absolute bioavailability studies are reported for the first time. For simplicity, this calculation methodology and the strategy to minimize the isotopic interference are demonstrated using a simple molecule entity, then applied to actual development drugs. The exact isotopic interferences calculated with this methodology were often much less than the traditionally used, overestimated isotopic interferences simply based on the molecular isotope abundance. One application of the methodology is the selection of a stable isotopically labeled internal standard (SIL-IS) for an LC-MS/MS bioanalytical assay. The second application is the selection of an SIL analogue for use in intravenous (i.v.) microdosing for the determination of absolute bioavailability. In the case of microdosing, the traditional approach of calculating isotopic interferences can result in selecting a labeling scheme that overlabels the i.v.-dosed drug or leads to incorrect conclusions on the feasibility of using an SIL drug and analysis by LC-MS/MS. The methodology presented here can guide the synthesis by accurately calculating the isotopic interferences when labeling at different positions, using different selective reaction monitoring (SRM) transitions or adding more labeling positions. This methodology has been successfully applied to the selection of the labeled i.v.-dosed drugs for use in two microdose absolute bioavailability studies, before initiating the chemical synthesis. With this methodology, significant time and cost saving can be achieved in supporting microdose absolute bioavailability studies with stable labeled drugs.
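The kind of calculation described above can be sketched in simplified form: convolve the per-element natural isotope distributions of a molecular formula to get the probability of each nominal mass shift, then read off the relative abundance at the M+n position where a stable isotopically labeled standard would appear. This is an illustration of the general principle only, not the authors' exact methodology; the abundance values are standard natural-abundance figures, the example formula (caffeine) and label position are invented, and fine mass differences, adducts, and fragmentation are ignored.

```python
import numpy as np

# Natural isotope distributions as {nominal mass shift in Da: probability},
# a simplified subset covering C, H, N, O only.
ISOTOPES = {
    "C": {0: 0.9893, 1: 0.0107},                 # 12C / 13C
    "H": {0: 0.999885, 1: 0.000115},             # 1H / 2H
    "N": {0: 0.99636, 1: 0.00364},               # 14N / 15N
    "O": {0: 0.99757, 1: 0.00038, 2: 0.00205},   # 16O / 17O / 18O
}

def isotope_pattern(formula, max_shift=6):
    # Convolve one distribution per atom to get P(total mass shift = n).
    pattern = np.zeros(max_shift + 1)
    pattern[0] = 1.0
    for element, count in formula.items():
        dist = np.zeros(max_shift + 1)
        for shift, p in ISOTOPES[element].items():
            dist[shift] = p
        for _ in range(count):
            pattern = np.convolve(pattern, dist)[: max_shift + 1]
    return pattern

# Example: caffeine, C8H10N4O2. If a hypothetical 13C3-labeled internal
# standard sat at M+3, the analyte's natural M+3 isotopologue would
# interfere with it at roughly this relative abundance:
pattern = isotope_pattern({"C": 8, "H": 10, "N": 4, "O": 2})
print(f"relative abundance at M+3: {pattern[3] / pattern[0]:.2e}")
```

Calculations of this shape show why adding more label positions (moving the standard to M+4 or M+5) rapidly reduces isotopic interference: the natural M+n abundance falls by orders of magnitude with each additional mass unit.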
Tuck, Melissa K; Chan, Daniel W; Chia, David; Godwin, Andrew K; Grizzle, William E; Krueger, Karl E; Rom, William; Sanda, Martin; Sorbara, Lynn; Stass, Sanford; Wang, Wendy; Brenner, Dean E
2009-01-01
Specimen collection is an integral component of clinical research. Specimens from subjects with various stages of cancers or other conditions, as well as those without disease, are critical tools in the hunt for biomarkers, predictors, or tests that will detect serious diseases earlier or more readily than currently possible. Analytic methodologies evolve quickly. Access to high-quality specimens, collected and handled in standardized ways that minimize potential bias or confounding factors, is key to the "bench to bedside" aim of translational research. It is essential that standard operating procedures, "the how" of creating the repositories, be defined prospectively when designing clinical trials. Small differences in the processing or handling of a specimen can have dramatic effects in analytical reliability and reproducibility, especially when multiplex methods are used. A representative working group, Standard Operating Procedures Internal Working Group (SOPIWG), comprised of members from across Early Detection Research Network (EDRN) was formed to develop standard operating procedures (SOPs) for various types of specimens collected and managed for our biomarker discovery and validation work. This report presents our consensus on SOPs for the collection, processing, handling, and storage of serum and plasma for biomarker discovery and validation.
The Body Composition Project: A Summary Report and Descriptive Data
1986-12-01
abundance of muscle as opposed to excess fat. This issue was addressed in a Department of Defense Directive … state in terms of an individual's relative body fat as estimated by the sum of four skinfolds. Shortly after implementation, the validity of the height-weight and body fat standards as well as the appropriateness of the skinfold methodology was questioned. A study was designed to create a data base
MOEMS optical delay line for optical coherence tomography
NASA Astrophysics Data System (ADS)
Choudhary, Om P.; Chouksey, S.; Sen, P. K.; Sen, P.; Solanki, J.; Andrews, J. T.
2014-09-01
Micro-Opto-Electro-Mechanical optical coherence tomography, a lab-on-chip for biomedical applications, is designed, studied, fabricated and characterized. Standard PolyMUMPS processes are adopted to fabricate the device. We report the utilization of an electro-optic modulator for a fast-scanning optical delay line for time-domain optical coherence tomography. Design optimizations are performed using Tanner EDA, while simulations are performed using COMSOL. The paper summarizes the various results and the fabrication methodology adopted. The success of the device promises a future hand-held or endoscopic optical coherence tomography for biomedical applications.
Fast-mode duplex qPCR for BCR-ABL1 molecular monitoring: innovation, automation, and harmonization.
Gerrard, Gareth; Mudge, Katherine; Foskett, Pierre; Stevens, David; Alikian, Mary; White, Helen E; Cross, Nicholas C P; Apperley, Jane; Foroni, Letizia
2012-07-01
Reverse transcription quantitative polymerase chain reaction (RT-qPCR) is currently the most sensitive tool available for the routine monitoring of disease level in patients undergoing treatment for BCR-ABL1-associated malignancies. Considerable effort has been invested at both the local and international levels to standardise the methodology and reporting criteria used to assess this critical metric. In an effort to accommodate the demands of increasing sample throughput and greater standardisation, we adapted the current best-practice guidelines to encompass automation platforms and improved multiplex RT-qPCR technology.
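Harmonized BCR-ABL1 reporting typically expresses the transcript ratio on the International Scale (IS) using a laboratory-specific conversion factor. A minimal sketch of that calculation follows; the copy numbers and conversion factor are hypothetical, not values from this study.

```python
def bcr_abl1_is(bcr_abl1_copies, control_copies, conversion_factor):
    # Percentage ratio of BCR-ABL1 to control-gene transcripts (e.g. ABL1),
    # converted to the International Scale via the lab-specific factor.
    ratio_pct = 100.0 * bcr_abl1_copies / control_copies
    return ratio_pct * conversion_factor

# Hypothetical run: 50 BCR-ABL1 copies against 100,000 control-gene copies,
# with a laboratory conversion factor of 0.8.
print(f"{bcr_abl1_is(50, 100_000, 0.8):.4f} %IS")  # 0.0400 %IS
```

Expressing results on a common scale like this is what allows molecular response levels to be compared between laboratories, which is the harmonization goal the abstract refers to.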
Outcome reporting following navigated high tibial osteotomy of the knee: a systematic review.
Yan, James; Musahl, Volker; Kay, Jeffrey; Khan, Moin; Simunovic, Nicole; Ayeni, Olufemi R
2016-11-01
This systematic review evaluates radiographic and clinical outcome reporting following navigated high tibial osteotomy (HTO). Conventional HTO was used as a control to compare outcomes and, furthermore, to investigate the quality of evidence in studies reporting outcomes for navigated HTO. It was hypothesized that navigated HTO would show superior clinical and radiographic outcomes compared to conventional HTO. Two independent reviewers searched the PubMed, Ovid (MEDLINE), EMBASE, and Cochrane databases for studies reporting outcomes following navigated HTO. Titles, abstracts, and full texts were screened in duplicate using a priori inclusion and exclusion criteria. Descriptive statistics were calculated using Minitab® statistical software. The Methodological Index for Nonrandomized Studies (MINORS) and Cochrane Risk of Bias scores were used to evaluate methodological quality. Thirty-four studies involving 2216 HTOs were analysed in this review: 1608 (72.6%) navigated HTOs and 608 (27.4%) conventional HTOs. The majority of studies were of level IV evidence (16). Clinical outcomes were reported as knee and function scores or range-of-motion comparisons. Postoperative clinical and functional scores were improved by navigated HTO, although it was not demonstrated whether the improvement was significant compared to conventional HTO. The most common clinical outcome score reported was the Lysholm score (6 studies), with postoperative scores of 87.8 (standard deviation 5.9) and 88.8 (standard deviation 5.9) for conventional and navigation-assisted HTO, respectively. Radiographic outcomes commonly reported were the weight-bearing mechanical axis, coronal plane angle, and posterior tibial slope angle in the sagittal plane. Studies have shown that HTO gives significant correction of mechanical alignment and that navigated HTO produces significantly less change in posterior tibial slope postoperatively compared to conventional HTO.
The mean MINORS score for the 17 non-comparative studies was 9/16, and 15/24 for the 14 non-randomized comparative studies. Navigated HTO results in improved mechanical axis alignment and demonstrates significantly better control over the change in tibial slope angle postoperatively compared to conventional methods; however, these improvements have not yet been reflected in clinical outcome scores. Overall, the studies report that HTO creates significantly improved knee scores and function compared to patients' preoperative ratings, regardless of technique. Future studies on HTO outcomes need to focus on consistency of outcome reporting. Level of evidence: IV.
Manterola, Carlos; Torres, Rodrigo; Burgos, Luis; Vial, Manuel; Pineda, Viviana
2006-07-01
Surgery is a curative treatment for gastric cancer (GC). As relapse is frequent, adjuvant therapies such as postoperative chemoradiotherapy have been tried. In Chile, some hospitals adopted Macdonald's study as a protocol for the treatment of GC. To determine the methodological quality and the internal and external validity of the Macdonald study, three instruments that assess methodological quality were applied. A critical appraisal was done, and the internal and external validity of the methodological quality was analyzed with two scales: MINCIR (Methodology and Research in Surgery), valid for therapy studies, and CONSORT (Consolidated Standards of Reporting Trials), valid for randomized controlled trials (RCT). Guides and scales were applied by 5 researchers with training in clinical epidemiology. The reader's guide verified that the Macdonald study was not directed to answer a clearly defined question. There was random assignment, but the method used is not described and the patients were not considered until the end of the study (36% of the group with surgery plus chemoradiotherapy did not complete treatment). The MINCIR scale confirmed a multicentric RCT, not blinded, with an unclear randomization sequence, erroneous sample size estimation, vague objectives and no exclusion criteria. The CONSORT system proved the lack of a working hypothesis and specific objectives, as well as an absence of exclusion criteria and of identification of the primary variable, an imprecise estimation of sample size, ambiguities in the randomization process, no blinding, an absence of statistical adjustment and the omission of a subgroup analysis. The instruments applied demonstrated methodological shortcomings that compromise the internal and external validity of the Macdonald study.
Atashi, Alireza; Verburg, Ilona W; Karim, Hesam; Miri, Mirmohammad; Abu-Hanna, Ameen; de Jonge, Evert; de Keizer, Nicolette F; Eslami, Saeid
2018-06-01
Intensive Care Unit (ICU) length of stay (LoS) prediction models are used to compare different institutions and surgeons on their performance, and are useful as an efficiency indicator for quality control. There is little consensus about which prediction methods are most suitable for predicting ICU length of stay. The aim of this study is to systematically review models for predicting ICU LoS after coronary artery bypass grafting (CABG) and to assess the reporting and methodological quality of these models in order to apply them for benchmarking. A general search was conducted in Medline and Embase up to 31-12-2016. Three authors classified the papers for inclusion by reading their title, abstract and full text. All original papers describing the development and/or validation of a prediction model for LoS in the ICU after CABG surgery were included. We used a checklist developed for critical appraisal and data extraction for systematic reviews of prediction modeling and extended it with the handling of specific patient subgroups. We also defined other items and scores to assess the methodological and reporting quality of the models. Of 5181 uniquely identified articles, fifteen studies were included, of which twelve concerned the development of new models and three the validation of existing models. All studies used linear or logistic regression as the method for model development, and reported various performance measures based on the difference between predicted and observed ICU LoS. Most used a prospective (46.6%) or retrospective (40%) study design. We found heterogeneity in patient inclusion/exclusion criteria, sample size, reported accuracy rates, and methods of candidate predictor selection. Most (60%) studies did not mention the handling of missing values, and none compared the model outcome measure of survivors with non-survivors. For model development and validation studies respectively, the maximum reporting (methodological) scores were 66/78 and 62/62 (14/22 and 12/22).
There are relatively few models for predicting ICU length of stay after CABG. Several aspects of methodological and reporting quality of studies in this field should be improved. There is a need for standardizing outcome and risk factor definitions in order to develop/validate a multi-institutional and international risk scoring system.
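As the review notes, all included studies used linear or logistic regression and reported performance based on the difference between predicted and observed LoS. A toy sketch of that workflow on synthetic data (the predictors, coefficients, and noise level here are invented for illustration, not taken from any reviewed model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Hypothetical preoperative predictors: age, ejection fraction, creatinine.
X = np.column_stack([
    rng.normal(65, 10, n),     # age (years)
    rng.normal(55, 8, n),      # LVEF (%)
    rng.normal(1.0, 0.3, n),   # creatinine (mg/dL)
])
# Synthetic "observed" ICU LoS in days.
los = 1.5 + 0.03 * X[:, 0] - 0.02 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 0.5, n)

# Fit ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, los, rcond=None)
pred = A @ coef

# Report performance as the difference between predicted and observed LoS.
mae = np.abs(pred - los).mean()
print(f"mean absolute error: {mae:.2f} days")
```

Note that evaluating a model on the same data it was fit to, as above, overstates performance; the external validation the review calls for would apply frozen coefficients to an independent cohort.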
Code of Federal Regulations, 2011 CFR
2011-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2010 CFR
2010-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2012 CFR
2012-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2013 CFR
2013-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2014 CFR
2014-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Methodological Gaps in Left Atrial Function Assessment by 2D Speckle Tracking Echocardiography
Rimbaş, Roxana Cristina; Dulgheru, Raluca Elena; Vinereanu, Dragoş
2015-01-01
The assessment of left atrial (LA) function is used in various cardiovascular diseases. The LA plays a complementary role in cardiac performance by modulating left ventricular (LV) function. Transthoracic two-dimensional (2D) phasic volumes and Doppler echocardiography can measure LA function non-invasively. However, evaluation of LA deformation derived from 2D speckle tracking echocardiography (STE) is a new, feasible and promising approach for the assessment of LA mechanics. These parameters are able to detect subclinical LA dysfunction in different pathological conditions. Normal ranges for LA deformation and cut-off values to diagnose LA dysfunction in different diseases have been reported, but data are still conflicting, probably because of methodological and technical issues. This review highlights the importance of a unique standardized technique to assess the LA phasic functions by STE, and discusses recent studies on the most important clinical applications of this technique. PMID:26761370
Telemedicine security: a systematic review.
Garg, Vaibhav; Brewer, Jeffrey
2011-05-01
Telemedicine is a technology-based alternative to traditional health care delivery. However, poor security measures in telemedicine services can have an adverse impact on the quality of care provided, regardless of the chronic condition being studied. We undertook a systematic review of 58 journal articles pertaining to telemedicine security. These articles were selected based on a keyword search on 14 relevant journals. The articles were coded to evaluate the methodology and to identify the key areas of research in security that are being reviewed. Seventy-six percent of the articles defined the security problem they were addressing, and only 47% formulated a research question pertaining to security. Sixty-one percent proposed a solution, and 20% of these tested the security solutions that they proposed. Prior research indicates inadequate reporting of methodology in telemedicine research. We found that to be true for security research as well. We also identified other issues such as using outdated security standards. © 2011 Diabetes Technology Society.
Childhood obesity in Asia: the value of accurate body composition methodology.
Hills, Andrew P; Mokhtar, Najat; Brownie, Sharon; Byrne, Nuala M
2014-01-01
Childhood obesity, a significant global public health problem, affects an increasing number of low- and middle-income countries, including in Asia. The obesity epidemic has been fuelled by the rapid nutrition and physical activity transition, with the availability of more energy-dense, nutrient-poor foods and the lifestyles of many children dominated by physical inactivity. During the growing years, the pace and quality of growth are best quantified by a combination of anthropometric and body composition measures. However, where normative data are available, they have typically been collected on Caucasian children. To better define and characterise overweight and obesity in Asian children, and to monitor nutrition and physical activity interventions, there is a need to increase the use of standardized anthropometric and body composition methodologies. The current paper reports on initiatives facilitated by the International Atomic Energy Agency (IAEA) and outlines future research needs for the prevention and management of childhood obesity in Asia.
Lupker, Stephen J.
2017-01-01
The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model. PMID:29244824
Supplement to a Methodology for Succession Planning for Technical Experts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirk, Bernadette Lugue; Cain, Ronald A.; Agreda, Carla L.
This report complements A Methodology for Succession Planning for Technical Experts (Ron Cain, Shaheen Dewji, Carla Agreda, Bernadette Kirk, July 2017), which describes a draft methodology for identifying and evaluating the loss of key technical skills at nuclear operations facilities. This report focuses on the methodology for identifying critical skills, which is tested through interviews with selected subject matter experts.
Model driven development of clinical information systems using openEHR.
Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, Jim
2011-01-01
openEHR and the recent international standard (ISO 13606) define a model-driven software development methodology for health information systems. However, there is little evidence in the literature describing implementations, especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives has been defined and presented which guides the automatic graphical user interface generator to render widgets properly. We also describe the development steps and important design decisions, from modelling to the final software product. This might provide guidance for other developers and form the evidence required for the adoption of these standards by vendors and national programs alike.
Esteves, Sandro C; Chan, Peter
2015-09-01
We systematically identified and reviewed the methods and consistency of recommendations of recently developed clinical practice guidelines (CPG) and best practice statements (BPS) on the evaluation of the infertile male. MEDLINE and related engines as well as guidelines' Web sites were searched for CPG and BPS written in English on the general evaluation of male infertility published between January 2008 and April 2015. Four guidelines were identified, all of which reported to have been recently updated. Systematic review was not consistently used in the BPS despite being reported in the CPG. Only one of them reported having a patient representative in its development team. The CPG issued by the European Association of Urology (EAU) graded some recommendations and related that to levels (but not quality) of evidence. Overall, the BPS issued respectively by the American Urological Association and American Society for Reproductive Medicine concurred with each other, but both differed from the EAU guidelines with regard to methods of collection, extraction and interpretation of data. None of the guidelines incorporated health economics. Important specific limitations of conventional semen analysis results were ignored by all guidelines. Besides variation in the methodological quality, implementation strategies were not reported in two out of four guidelines. While the various panels of experts who contributed to the development of the CPG and BPS reviewed should be commended on their tremendous efforts aiming to establish a clinical standard in both the evaluation and management of male infertility, we recognized inconsistencies in the methodology of their synthesis and in the contents of their final recommendations. These discrepancies pose a barrier in the general implementation of these guidelines and may limit their utility in standardizing clinical practice or improving health-related outcomes. 
Continuous efforts are needed to generate high-quality evidence to allow further development of these important guidelines for the evaluation and management of males suffering from infertility.
Henschke, Nicholas; Keuerleber, Julia; Ferreira, Manuela; Maher, Christopher G; Verhagen, Arianne P
2014-04-01
To provide an overview of reporting and methodological quality in diagnostic test accuracy (DTA) studies in the musculoskeletal field and evaluate the use of the QUality Assessment of Diagnostic Accuracy Studies (QUADAS) checklist. A literature review identified all systematic reviews that evaluated the accuracy of clinical tests to diagnose musculoskeletal conditions and used the QUADAS checklist. Two authors screened all identified reviews and extracted data on the target condition, index tests, reference standard, included studies, and QUADAS items. A descriptive analysis of the QUADAS checklist was performed, along with Rasch analysis to examine the construct validity and internal reliability. A total of 19 systematic reviews were included, which provided data on individual items of the QUADAS checklist for 392 DTA studies. In the musculoskeletal field, uninterpretable or intermediate test results are commonly not reported, with 175 (45%) studies scoring "no" to this item. The proportion of studies fulfilling certain items varied from 22% (item 11) to 91% (item 3). The interrater reliability of the QUADAS checklist was good and Rasch analysis showed excellent construct validity and internal consistency. This overview identified areas where the reporting and performance of diagnostic studies within the musculoskeletal field can be improved. Copyright © 2014 Elsevier Inc. All rights reserved.
Wang, Weihao; Xing, Zhihua
2014-01-01
Objective. Xingnaojing injection (XNJ) is a well-known traditional Chinese patent medicine (TCPM) for stroke. The aim of this study is to assess the efficacy of XNJ for stroke, including ischemic stroke, intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). Methods. An extensive search of eight databases was performed, covering publications up to November 2013. Randomized controlled trials (RCTs) on XNJ for the treatment of stroke were collected. Study selection, data extraction, quality assessment, and meta-analysis were conducted according to the Cochrane standards, and RevMan 5.0 was used for meta-analysis. Results. This review included 13 RCTs with a total of 1,514 subjects. The overall methodological quality was poor. The meta-analysis showed that XNJ combined with conventional treatment was more effective than conventional treatment alone in terms of total efficacy, improvement of neurological deficit, and reduction of TNF-α levels. Three trials reported adverse events; of these, one reported mild impairment of kidney and liver function, whereas the other two failed to specify the events. Conclusion. Despite the limitations of this review, we suggest that XNJ in combination with conventional medicines might be beneficial for the treatment of stroke. The available studies have various methodological problems; therefore, high-quality, large-scale RCTs are urgently needed. PMID:24707306
Sperber, A D; Gwee, K A; Hungin, A P; Corazziari, E; Fukudo, S; Gerson, C; Ghoshal, U C; Kang, J-Y; Levy, R L; Schmulson, M; Dumitrascu, D; Gerson, M-J; Chen, M; Myung, S-J; Quigley, E M M; Whorwell, P J; Zarzar, K; Whitehead, W E
2014-11-01
Cross-cultural, multinational research can advance the field of functional gastrointestinal disorders (FGIDs). Cross-cultural comparative research can make a significant contribution in areas such as epidemiology, genetics, psychosocial modulators, symptom reporting and interpretation, extra-intestinal co-morbidity, diagnosis and treatment, determinants of disease severity, health care utilisation, and health-related quality of life, all issues that can be affected by geographical region, culture, ethnicity and race. To identify methodological challenges for cross-cultural, multinational research, and suggest possible solutions. This report, which summarises the full report of a working team established by the Rome Foundation that is available on the Internet, reflects an effort by an international committee of FGID clinicians and researchers. It is based on comprehensive literature reviews and expert opinion. Cross-cultural, multinational research is important and feasible, but has barriers to successful implementation. This report contains recommendations for future research relating to study design, subject recruitment, availability of appropriate study instruments, translation and validation of study instruments, documenting confounders, statistical analyses and reporting of results. Advances in study design and methodology, as well as cross-cultural research competence, have not matched technological advancements. The development of multinational research networks and cross-cultural research collaboration is still in its early stages. This report is intended to be aspirational rather than prescriptive, so we present recommendations, not guidelines. We aim to raise awareness of these issues and to pose higher standards, but not to discourage investigators from doing what is feasible in any particular setting. © 2014 John Wiley & Sons Ltd.
A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)
2017-10-01
TECHNICAL REPORT 3079, October 2017. A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM). Executive summary: this report summarizes the methodology developed to improve radar threshold modeling...
No evidence of purported lunar effect on hospital admission rates or birth rates.
Margot, Jean-Luc
2015-01-01
Studies indicate that a fraction of nursing professionals believe in a "lunar effect"-a purported correlation between the phases of the Earth's moon and human affairs, such as birth rates, blood loss, or fertility. This article addresses some of the methodological errors and cognitive biases that can explain the human tendency to perceive a lunar effect where there is none. This article reviews basic standards of evidence and, using an example from the published literature, illustrates how disregarding these standards can lead to erroneous conclusions. Román, Soriano, Fuentes, Gálvez, and Fernández (2004) suggested that the number of hospital admissions related to gastrointestinal bleeding was somehow influenced by the phases of the Earth's moon. Specifically, the authors claimed that the rate of hospital admissions to their bleeding unit is higher during the full moon than at other times. Their report contains a number of methodological and statistical flaws that invalidate their conclusions. Reanalysis of their data with proper procedures shows no evidence that the full moon influences the rate of hospital admissions, a result that is consistent with numerous peer-reviewed studies and meta-analyses. A review of the literature shows that birth rates are also uncorrelated with lunar phases. Data collection and analysis shortcomings, as well as powerful cognitive biases, can lead to erroneous conclusions about the purported lunar effect on human affairs. Adherence to basic standards of evidence can help assess the validity of questionable beliefs.
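The kind of reanalysis described above can be sketched with a chi-square goodness-of-fit test. The admission counts below are synthetic illustration data, not the study's data; under the null hypothesis of no lunar effect, admissions are spread evenly across the four lunar phases.

```python
# Illustrative sketch: test whether admission counts differ across four lunar
# phases with a Pearson chi-square goodness-of-fit test. Counts are made up.

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

admissions = [52, 48, 50, 50]      # counts per lunar phase (synthetic example)
total = sum(admissions)
expected = [total / 4] * 4         # null hypothesis: no lunar effect

stat = chi_square_stat(admissions, expected)
CRITICAL_95_DF3 = 7.815            # chi-square critical value, df = 3, alpha = 0.05
lunar_effect = stat > CRITICAL_95_DF3
```

With these example counts the statistic is far below the critical value, so the null hypothesis of no lunar effect is not rejected, matching the pattern the reanalysis reports.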
Huettig, Fabian; Axmann, Detlef
2014-10-16
To identify standards for how entities of dental status are assessed and reported from full-arch radiographs of adults. A PubMed (Medline) search was performed in November 2011. Literature had to report at least one of four defined entities using radiographs: number of teeth or implants; caries, fillings or restorations; root-canal fillings and apical health; alveolar bone level. Cohorts included in the study had to be of adult age. Methods of radiographic assessment were noted and checked against the subsequent mode of report in text, tables or diagrams. For comparability, the encountered mode of report was operationalized to a logical expression. Thirty-seven out of 199 articles were evaluated via full-text review. Only one article reported all four entities. Eight articles reported at most three comparable entities. However, comparability is impeded by the varying use of absolute or relative frequencies, mean or median values, and grouping. Furthermore, the methods of assessment differed or were not described sufficiently; consequently, established sum scores turned out to be highly questionable as well. The amount of missing data within the studies remained unclear. Supernumerary and aplastic teeth, as well as the count of third molars, were likewise often left unmentioned. Data about dental findings from radiographs are therefore comparable only with serious limitations, if at all. A standardization of both assessing and reporting entities of dental status from radiographs is missing and should be established within a reporting guideline.
Refining a methodology for determining the economic impacts of transportation improvements.
DOT National Transportation Integrated Search
2012-07-01
Estimating the economic impact of transportation improvements has previously proven to be a difficult task. After an exhaustive literature review, it was clear that the transportation profession lacked standards and methodologies for determining econ...
Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient
NASA Astrophysics Data System (ADS)
Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.
2018-04-01
A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory research was carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.
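The combined-uncertainty calculation follows the standard GUM propagation rule: for a pure product/quotient measurement model, relative standard uncertainties add in quadrature. The model k = Q / (A · i) and all numeric values below are illustrative assumptions, not the authors' measurement model or data.

```python
# Sketch of combined standard uncertainty (GUM propagation) for a filtration
# coefficient modelled as k = Q / (A * i): flow rate / (area * hydraulic gradient).
# The model and all values are illustrative, not the paper's measurements.

import math

def combined_relative_uncertainty(*rel_uncertainties):
    """For a product/quotient model, relative uncertainties add in quadrature."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

Q, u_Q = 2.0e-6, 0.04e-6   # flow rate and its standard uncertainty, m^3/s
A, u_A = 1.0e-3, 0.01e-3   # cross-section area, m^2
i, u_i = 0.50, 0.01        # hydraulic gradient, dimensionless

k = Q / (A * i)                                            # filtration coefficient, m/s
u_rel = combined_relative_uncertainty(u_Q / Q, u_A / A, u_i / i)
u_k = k * u_rel            # combined standard uncertainty of k
U_k = 2 * u_k              # expanded uncertainty, coverage factor 2 (~95%)
```

The expanded uncertainty with coverage factor 2 is the usual way such a result would be reported.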
Laborde, Sylvain; Mosley, Emma; Thayer, Julian F.
2017-01-01
Psychophysiological research integrating heart rate variability (HRV) has increased during the last two decades, particularly given the fact that HRV is able to index cardiac vagal tone. Cardiac vagal tone, which represents the contribution of the parasympathetic nervous system to cardiac regulation, is acknowledged to be linked with many phenomena relevant for psychophysiological research, including self-regulation at the cognitive, emotional, social, and health levels. The ease of HRV collection and measurement, coupled with the fact that it is relatively affordable, non-invasive and pain free, makes it widely accessible to many researchers. This ease of access should not obscure the fact that HRV findings can be easily misinterpreted; however, this can be controlled to some extent through correct methodological processes. Standards of measurement were developed two decades ago by a Task Force within HRV research, and recent reviews have updated several aspects of the Task Force paper. Nevertheless, many methodological aspects of HRV in psychophysiological research must be considered in order to draw sound conclusions, and inconsistent handling of these aspects makes it difficult to interpret findings and to compare results across laboratories. These methodological issues have mainly been discussed in separate outlets, making them difficult to grasp as a whole, and this paper aims to address that gap. It provides psychophysiological researchers with recommendations and practical advice concerning experimental design, data analysis, and data reporting. This will ensure that researchers starting a project with HRV and cardiac vagal tone are well informed about methodological considerations, so that their findings contribute to knowledge advancement in their field. PMID:28265249
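Two widely used time-domain HRV measures, SDNN and RMSSD (the latter a common index of cardiac vagal tone), can be sketched as follows. The RR-interval series is a made-up example; a real analysis would also include artifact correction and the recording standards discussed above.

```python
# Minimal sketch of two time-domain HRV measures from RR intervals (ms).
# Illustrative only; not a clinically validated implementation.

import math

def sdnn(rr):
    """Sample standard deviation of all RR (NN) intervals."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive RR-interval differences,
    a common time-domain index of cardiac vagal tone."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

rr_ms = [800, 810, 790, 805, 795]  # synthetic RR-interval series in milliseconds
```

Frequency-domain measures (e.g., high-frequency power) require spectral estimation over longer recordings and are deliberately omitted from this sketch.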
Test-Retest Reliability of Pediatric Heart Rate Variability: A Meta-Analysis.
Weiner, Oren M; McGrath, Jennifer J
2017-01-01
Heart rate variability (HRV), an established index of autonomic cardiovascular modulation, is associated with health outcomes (e.g., obesity, diabetes) and mortality risk. Time- and frequency-domain HRV measures are commonly reported in longitudinal adult and pediatric studies of health. While test-retest reliability has been established among adults, less is known about the psychometric properties of HRV among infants, children, and adolescents. The objective was to conduct a meta-analysis of the test-retest reliability of time- and frequency-domain HRV measures from infancy to adolescence. Electronic searches (PubMed, PsycINFO; January 1970-December 2014) identified studies with nonclinical samples aged ≤ 18 years; ≥ 2 baseline HRV recordings separated by ≥ 1 day; and sufficient data for effect size computation. Forty-nine studies (N = 5,170) met inclusion criteria. Methodological variables coded included factors relevant to study protocol, sample characteristics, electrocardiogram (ECG) signal acquisition and preprocessing, and HRV analytical decisions. Fisher's Z was derived as the common effect size. Analyses were age-stratified (infant/toddler < 5 years, n = 3,329; child/adolescent 5-18 years, n = 1,841) due to marked methodological differences across the pediatric literature. Meta-analytic results revealed HRV demonstrated moderate reliability; child/adolescent studies (Z = 0.62, r = 0.55) had significantly higher reliability than infant/toddler studies (Z = 0.42, r = 0.40). Relative to other reported measures, HF exhibited the highest reliability among infant/toddler studies (Z = 0.42, r = 0.40), while rMSSD exhibited the highest reliability among child/adolescent studies (Z = 1.00, r = 0.76).
Moderator analyses indicated greater reliability with shorter test-retest interval length, reported exclusion criteria based on medical illness/condition, lower proportion of males, prerecording acclimatization period, and longer recording duration; differences were noted across age groups. HRV is reliable among pediatric samples. Reliability is sensitive to pertinent methodological decisions that require careful consideration by the researcher. Limited methodological reporting precluded several a priori moderator analyses. Suggestions for future research, including standards specified by Task Force Guidelines, are discussed.
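Fisher's Z, used above as the common effect size, is the variance-stabilizing transform of a correlation: Z = atanh(r), back-transformed with r = tanh(Z). A minimal sketch reproducing the reported pairings (e.g., Z = 0.62 corresponds to r ≈ 0.55):

```python
# Fisher Z transformation of a correlation coefficient and its inverse.
# The example value mirrors the reported child/adolescent estimate.

import math

def r_to_z(r):
    """Fisher's variance-stabilizing transformation of a correlation."""
    return math.atanh(r)

def z_to_r(z):
    """Back-transform a Fisher Z value to a correlation."""
    return math.tanh(z)

r_child = z_to_r(0.62)  # approx. 0.55, as reported for child/adolescent studies
```

Pooling is typically done on the Z scale (where the sampling variance is approximately 1/(n - 3)) and the pooled estimate is then back-transformed for reporting.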
Methodologic European external quality assurance for DNA sequencing: the EQUALseq program.
Ahmad-Nejad, Parviz; Dorn-Beineke, Alexandra; Pfeiffer, Ulrike; Brade, Joachim; Geilenkeuser, Wolf-Jochen; Ramsden, Simon; Pazzagli, Mario; Neumaier, Michael
2006-04-01
DNA sequencing is a key technique in molecular diagnostics, but to date no comprehensive methodologic external quality assessment (EQA) programs have been instituted. Between 2003 and 2005, the European Union funded, as specific support actions, the EQUAL initiative to develop methodologic EQA schemes for genotyping (EQUALqual), quantitative PCR (EQUALquant), and sequencing (EQUALseq). Here we report on the results of the EQUALseq program. The participating laboratories received a 4-sample set comprising 2 DNA plasmids, a PCR product, and a finished sequencing reaction to be analyzed. Data and information from detailed questionnaires were uploaded online and evaluated by use of a scoring system for technical skills and proficiency of data interpretation. Sixty laboratories from 21 European countries registered, and 43 participants (72%) returned data and samples. Capillary electrophoresis was the predominant platform (n = 39; 91%). The median contiguous correct sequence stretch was 527 nucleotides with considerable variation in quality of both primary data and data evaluation. The association between laboratory performance and the number of sequencing assays/year was statistically significant (P <0.05). Interestingly, more than 30% of participants neither added comments to their data nor made efforts to identify the gene sequences or mutational positions. Considerable variations exist even in a highly standardized methodology such as DNA sequencing. Methodologic EQAs are appropriate tools to uncover strengths and weaknesses in both technique and proficiency, and our results emphasize the need for mandatory EQAs. The results of EQUALseq should help improve the overall quality of molecular genetics findings obtained by DNA sequencing.
Methodological factors conducting research with incarcerated persons with diabetes.
Reagan, Louise; Shelton, Deborah
2016-02-01
The aim of this study was to describe methodological issues specific to conducting research with incarcerated vulnerable populations who have diabetes. Much has been written about the ethical and logistical challenges of conducting research with vulnerable incarcerated populations. However, conducting research with incarcerated persons with diabetes is associated with additional issues related to research design, measurement, sampling and recruitment, and data collection procedures. A cross-sectional study examining the relationships of diabetes knowledge, illness representation and self-care behaviors with glycemic control in 124 incarcerated persons was conducted and serves as the basis for describing methodological factors for the conduct of research with an incarcerated population with diabetes. Within this incarcerated population with diabetes, sampling bias due to gender inequity, recruitment of participants not using insulin, self-reported vision impairment, and a lack of standardized instruments especially for measuring diabetes self-care were methodological challenges. Clinical factors that serve as potential barriers for study conduct were identified as risk for hypoglycemia due to insulin timing and other activities. Conducting research with incarcerated persons diagnosed with diabetes requires attention to a set of methodological concerns above and beyond that of the ethical and legal regulations for protecting the rights of this vulnerable population. To increase opportunities for conducting rigorous as well as facility- and patient-friendly research, researchers need to blend their knowledge of diabetes with an understanding of prison rules and routines. Copyright © 2015 Elsevier Inc. All rights reserved.
76 FR 61287 - Request for Public Comment on the United States Standards for Barley
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-04
... barley marketing and define U.S. barley quality in the domestic and global marketplace. The standards define commonly used industry terms; contain basic principles governing the application of standards... standards using approved methodologies and can be applied at any point in the marketing chain. Furthermore...
NASA Astrophysics Data System (ADS)
Schum, Paul A.
If international report cards were issued today to all industrialized nations worldwide, the United States would receive a "C" at best in mathematics and science. This is not a temporary circumstance or a simple cause-and-effect problem that can easily be addressed. The disappointing truth is that this downward trend in mathematics and science mastery by American students has been occurring steadily for at least the last eight years of international testing, and that there are numerous and varied reasons for this reality. In response to this crisis, the National Science Teachers Association (NSTA), the American Association for the Advancement of Science (AAAS), and the National Research Council (NRC) have each proposed relatively consistent but individual sets of professional science teaching standards designed to improve science instruction in American schools. It is of great value to the scientific and educational community to know whether any or all of these standards lead to improved student performance. This study investigates the correlation between six specific teacher behaviors that are common to these national standards and student performance, identifying which behaviors, if any, result in improved performance as demonstrated on the Science Reasoning sub-test of the ACT Assessment. These standards focus classroom science teachers on professional development leading toward student mastery of scientific interpretation, concept development, and constructive relationship building. Because all individual teachers interpret roles, expectations, and guiding philosophies through different lenses, effective professional practice may reflect consistency in rationale and methodology yet is best evidenced by an examination of specific teaching techniques. In this study, these teaching techniques are evidenced by self-reported teacher awareness of and adherence to these consensual standards.
Assessment instruments vary widely, and the results of student performance often reflect the congruency of curricular methodology and explicit testing domains. Although the recent educational impetus for change is most notably governed numerically by test scores, the true goal of scientific literacy is the application of logic. Therefore, the ultimate thematic analysis in this study attempts to relate both educational theory and practice to positive change at the classroom level. The data gathered in this study are insufficient to establish a significant correlation between adherence to national science teaching standards and student performance on the ACT in Jefferson County, Kentucky, for either public or Catholic school students. However, with respect to mean student scores on the Science Reasoning sub-test of the ACT, there is statistically significant evidence of superior performance by Catholic school students compared with public school students in this region.
Godah, Mohammad W; Abdul Khalek, Rima A; Kilzar, Lama; Zeid, Hiba; Nahlawi, Acile; Lopes, Luciane Cruz; Darzi, Andrea J; Schünemann, Holger J; Akl, Elie A
2016-12-01
Low- and middle-income countries adapt World Health Organization (WHO) guidelines instead of developing them de novo for financial, epidemiologic, sociopolitical, cultural, organizational, and other reasons. To systematically evaluate reported processes used in the adaptation of WHO guidelines for human immunodeficiency virus (HIV) and tuberculosis (TB). We searched three online databases/repositories: the United States Agency for International Development (USAID) AIDS Support and Technical Resources - Sector One program (AIDSTAR-One) National Treatment Database; the AIDSspace Guideline Repository; and the WHO database of national HIV and TB guidelines. We assessed the rigor and quality of the reported adaptation methodology using the ADAPTE process as a benchmark. Of 170 eligible guidelines, only 32 (19%) provided documentation on the adaptation process. The median number of ADAPTE steps fulfilled by the eligible guidelines was 11.5 (interquartile range, 10-13.5) out of 23 steps. The median number of guidelines (out of 32) fulfilling each ADAPTE step was 18 (interquartile range, 5-27). Seventeen of 32 guidelines (53%) met all steps relevant to the setup phase, whereas none met all steps relevant to the adaptation phase. The number of well-documented adaptation methodologies in national HIV and/or TB guidelines is very low. There is a need for a standardized and systematic framework for guideline adaptation and for improved reporting of the processes used. Copyright © 2016 Elsevier Inc. All rights reserved.
The reliability of the Glasgow Coma Scale: a systematic review.
Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R
2016-01-01
The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.
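Cohen's kappa, the chance-corrected agreement statistic behind the ≥ 0.6 threshold cited above, can be sketched for two raters assigning GCS sum scores. The ratings below are synthetic examples, not study data.

```python
# Minimal sketch of Cohen's kappa for two raters over paired categorical ratings
# (here, GCS sum scores). Ratings are synthetic, not from the reviewed studies.

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = [15, 14, 15, 13, 15, 14, 12, 15]  # rater A's GCS sum scores (synthetic)
b = [15, 14, 15, 14, 15, 14, 12, 15]  # rater B's GCS sum scores (synthetic)
kappa = cohens_kappa(a, b)
```

For ordinal scores like the GCS, weighted kappa or an intraclass correlation coefficient is often preferred, since they credit near-agreement; plain kappa treats a one-point disagreement the same as a large one.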
Hall, Deborah A; Mehta, Rajnikant L; Fackrell, Kathryn
2018-03-08
The authors respond to a letter to the editor (Sabour, 2018) concerning the interpretation of validity in the context of evaluating treatment-related change in tinnitus loudness over time. The authors refer to several landmark methodological publications and an international standard concerning the validity of patient-reported outcome measurement instruments. The tinnitus loudness rating performed better against our reported acceptability criteria for (face and convergent) validity than did the tinnitus loudness matching test. It is important to distinguish between tests that evaluate the validity of measuring treatment-related change over time and tests that quantify the accuracy of diagnosing tinnitus as a case and non-case.
Methodology or method? A critical review of qualitative case study reports
Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia
2014-01-01
Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners. PMID:24809980
NASA Technical Reports Server (NTRS)
Broderick, Ron
1997-01-01
The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. 
The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network development. The changes were to include evaluation tools that can be applied to neural networks at each phase of the software engineering life cycle. The result was a formal evaluation approach to increase the product quality of systems that use neural networks for their implementation.
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
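The range-based SD approximation and the quartile-based mean formula mentioned above are commonly instantiated as sketched below. Note the exact constants are an assumption here (the widely cited range/4 rule for the SD and the (q1 + median + q3)/3 form for the mean); the review itself evaluates several variants, and the preferred formula can depend on sample size and skewness.

```python
def sd_from_range(minimum, maximum):
    """Practical SD approximation from the sample range.

    Uses the common rule of thumb SD ~ range / 4; other divisors
    (e.g. sample-size-dependent ones) appear in the literature.
    """
    return (maximum - minimum) / 4.0


def mean_from_quartiles(q1, median, q3):
    """Mean estimate from the median and quartiles: (q1 + median + q3) / 3."""
    return (q1 + median + q3) / 3.0
```

For example, a trial reporting scores from 0 to 8 would get an imputed SD of 2.0, and one reporting quartiles 2, 4, and 6 an imputed mean of 4.0; as the illustrative meta-analyses indicate, such imputation lets the trial contribute to the pooled estimate rather than being omitted.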
2009-11-24
...production on Air Bases • Field the Critical Asset Prioritization Methodology (CAPM) tool • Manage costs • Provide energy leadership throughout the Air... The CAPM tool will allow prioritization of Air... critical assets residing on military installations; its fielding and the adoption of financial standards will enable transparency across Air...
Standards to support information systems integration in anatomic pathology.
Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A
2009-11-01
Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). The objective was to define standards-based informatics transactions to integrate anatomic pathology information into the healthcare enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with a special interest in anatomic pathology, defined consensual technical solutions to provide end users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process, including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standards-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In later stages, the technical framework will be extended to manage whole-slide images and semantically rich structured reports in the diagnostic workflow, and to integrate systems used for patient care with those used for research activities (such as tissue bank databases or tissue microarrayers).
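To make the idea of a common specimen identification concrete, the sketch below builds a minimal HL7 v2.x SPM (specimen) segment. The field positions follow the published SPM layout (SPM-1 set ID, SPM-2 specimen ID, SPM-4 specimen type), but this is a hedged illustration only, not the IHE technical framework's actual transaction content; the example identifier, code, and the use of a SNOMED CT coding-system label are assumptions for demonstration.

```python
def build_spm_segment(set_id, specimen_id, type_code, type_text):
    """Build a minimal, illustrative HL7 v2.x SPM segment string.

    SPM-1: set ID, SPM-2: specimen ID, SPM-3: (parent IDs, left empty),
    SPM-4: specimen type as a coded element (code^text^coding system).
    This is a sketch, not the IHE anatomic pathology profile itself.
    """
    specimen_type = f"{type_code}^{type_text}^SCT"  # 'SCT' (SNOMED CT) assumed
    return "|".join(["SPM", str(set_id), specimen_id, "", specimen_type])


# Hypothetical example: a tissue specimen with a made-up local identifier.
segment = build_spm_segment(1, "S-001", "119376003", "Tissue specimen")
```

In the actual framework, the same specimen model is carried consistently in both the HL7 messaging and the DICOM image metadata, which is what lets the pathology workflow's 10 transactions refer to one specimen unambiguously.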