A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are often needed, and they usually involve the difficult task of establishing cut scores. In contrast with other Item Response Theory (IRT)-based standard setting methods, this study proposes a non-judgmental approach in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for the interpretation of IRT-based tests.
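The abstract does not spell out the ICC transformation itself, but the general mechanics of an IRT-based cut score can be sketched: given item parameters, an ability-scale cut point is mapped through the test characteristic curve (the sum of the ICCs) to an expected-score cut. The following is an illustrative sketch with hypothetical 3PL item parameters, not the authors' exact procedure.

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic ICC: probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Hypothetical item parameters (discrimination a, difficulty b, guessing c).
items = [(1.2, -0.5, 0.20), (0.9, 0.3, 0.25), (1.5, 1.1, 0.20)]

def expected_score(theta):
    """Test characteristic curve: expected raw score at ability theta."""
    return sum(icc_3pl(theta, a, b, c) for a, b, c in items)

# Map an ability-scale boundary (placeholder CEFR cut) to a raw cut score.
theta_cut = 0.5
print(f"Expected-score cut at theta={theta_cut}: {expected_score(theta_cut):.2f}")
```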
48 CFR 9904.406-61 - Interpretation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.406-61 Interpretation. (a) Questions have arisen as to... categories of costs that have been included in the past and may be considered in the future as restructuring... restructuring costs shall not exceed five years. The straight-line method of amortization should normally be...
Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent
2012-07-01
In recent years, owing to the presence of multidrug-resistant nosocomial bacteria, combination therapies are applied more frequently. There is thus a greater need to investigate the in vitro activity of drug combinations against multidrug-resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques for determining the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multidrug-resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in studies using the same drug combinations and the same types of bacteria. These differences in synergy rates might therefore be due to the different methods of interpreting synergy test results. In recent years, multidrug-resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multidrug-resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, namely imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multidrug-resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activities of the two combinations were tested in the dilution range of 4 x MIC to 0.03 x MIC in 96-well checkerboard plates. The results were evaluated separately using four different interpretation methods frequently preferred by researchers, with the aim of determining to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by the interpretation method. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). The highest rates of synergy were observed with both combinations by the method that used the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods of interpreting the results. There is thus a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies are required that investigate the clinical benefits of synergistic combinations and compare the consistency of the results with those of other standard combination tests, such as time-kill studies.
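The interpretation disputes described above revolve around the fractional inhibitory concentration index (FICI). As a minimal sketch, the snippet below computes the FICI for one checkerboard well and applies one common set of cut-offs; the MIC values and the cut-offs are illustrative assumptions, and, as the study shows, other interpretation rules can classify the same plate differently.

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a two-drug checkerboard well."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic_index):
    # Cut-offs follow one common convention; other interpretation methods
    # use different rules and can change the verdict for the same data.
    if fic_index <= 0.5:
        return "synergy"
    if fic_index > 4.0:
        return "antagonism"
    return "indifference"

# Hypothetical example: imipenem MIC falls from 32 to 4 mg/L in combination
# with ampicillin/sulbactam (MIC 64 -> 16 mg/L).
index = fici(32, 64, 4, 16)   # 4/32 + 16/64 = 0.375
print(index, interpret(index))  # -> 0.375 synergy
```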
The need for performance criteria in evaluating the durability of wood products
Stan Lebow; Bessie Woodward; Patricia Lebow; Carol Clausen
2010-01-01
Data generated from wood-product durability evaluations can be difficult to interpret. Standard methods used to evaluate the potential long-term durability of wood products often provide little guidance on interpretation of test results. Decisions on acceptable performance for standardization and code compliance are based on the judgment of reviewers or committees....
20 CFR 640.3 - Interpretation of Federal law requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...
20 CFR 640.3 - Interpretation of Federal law requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...
20 CFR 640.3 - Interpretation of Federal law requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...
20 CFR 640.3 - Interpretation of Federal law requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...
Informatics and Standards for Nanomedicine Technology
Thomas, Dennis G.; Klaessig, Fred; Harper, Stacey L.; Fritts, Martin; Hoover, Mark D.; Gaheen, Sharon; Stokes, Todd H.; Reznik-Zellen, Rebecca; Freund, Elaine T.; Klemm, Juli D.; Paik, David S.; Baker, Nathan A.
2011-01-01
There are several issues to be addressed concerning the management and effective use of information (or data) generated from nanotechnology studies in biomedical research and medicine. These data are large in volume, diverse in content, and beset with gaps and ambiguities in the description and characterization of nanomaterials. In this work, we have reviewed three areas of nanomedicine informatics: information resources; taxonomies, controlled vocabularies, and ontologies; and information standards. Informatics methods and standards in each of these areas are critical for enabling collaboration, data sharing, unambiguous representation and interpretation of data, semantic (meaningful) search and integration of data, and for ensuring data quality, reliability, and reproducibility. In particular, we consider four types of information standards in this review: standard characterization protocols, common terminology standards, minimum information standards, and standard data communication (exchange) formats. Currently, due to gaps and ambiguities in the data, it is also difficult to apply computational methods and machine learning techniques to analyze, interpret and recognize patterns in data that are high dimensional in nature, and to relate variations in nanomaterial properties to variations in their chemical composition, synthesis, characterization protocols, etc. Progress towards resolving the issues of information management in nanomedicine using the informatics methods and standards discussed in this review will be essential to the rapidly growing field of nanomedicine informatics. PMID:21721140
Rohling, Martin L; Williamson, David J; Miller, L Stephen; Adams, Russell L
2003-11-01
The aim of this project was to validate an alternative global measure of neurocognitive impairment (Rohling Interpretive Method, or RIM) that could be generated from data gathered from a flexible battery approach. A critical step in this process is to establish the utility of the technique against current standards in the field. In this paper, we compared results from the Rohling Interpretive Method to those obtained from the General Neuropsychological Deficit Scale (GNDS; Reitan & Wolfson, 1988) and the Halstead-Russell Average Impairment Rating (AIR; Russell, Neuringer & Goldstein, 1970) on a large previously published sample of patients assessed with the Halstead-Reitan Battery (HRB). Findings support the use of the Rohling Interpretive Method in producing summary statistics similar in diagnostic sensitivity and specificity to the traditional HRB indices.
NASA Astrophysics Data System (ADS)
André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.
Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of radiologists' interpretation of breast ultrasound. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.
Design, analysis, and interpretation of field quality-control data for water-sampling projects
Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.
2015-01-01
The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
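As a hedged illustration of the kind of interval construction the report describes, the following computes two-sided 95% confidence intervals on the mean and standard deviation of a small set of hypothetical quality-control replicates (standard t and chi-square intervals; the report's own formulas and data may differ in detail).

```python
import numpy as np
from scipy import stats

# Hypothetical replicate QC results (e.g., field-blank concentrations).
qc = np.array([0.12, 0.08, 0.15, 0.11, 0.09, 0.14])
n = qc.size
mean, sd = qc.mean(), qc.std(ddof=1)

# Two-sided 95% CI on the mean (Student's t distribution).
t = stats.t.ppf(0.975, n - 1)
ci_mean = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))

# Two-sided 95% CI on the standard deviation (chi-square distribution).
chi2_hi, chi2_lo = stats.chi2.ppf([0.975, 0.025], n - 1)
ci_sd = (sd * np.sqrt((n - 1) / chi2_hi), sd * np.sqrt((n - 1) / chi2_lo))

print(f"mean CI: {ci_mean}, sd CI: {ci_sd}")
```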
Does periodic lung screening of films meet standards?
Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye
2016-01-01
Objective: To determine whether workers' periodic chest x-ray screenings are performed in accordance with quality standards, which is the responsibility of physicians, and to evaluate differences in interpretation between physicians at different levels of training and the importance of standardized interpretation. Methods: Previously taken chest radiographs of 400 workers in a factory producing glass run channels were evaluated against technical and quality standards by three observers (pulmonologist, radiologist, pulmonologist assistant). There was perfect concordance between the radiologist and the pulmonologist for underpenetrated films, and perfect concordance between the pulmonologist and the pulmonologist assistant for overpenetrated films. Results: The pulmonologist (52%) rated the film dose as adequate more often than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%). The pulmonologist (81.7%) rated films as taken in the inspiratory phase less often than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%). The pulmonologist (53.5%) assessed patient positioning as symmetrical more often than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant (15.3%) reported parenchymal findings most frequently (radiologist, 2.2%; pulmonologist, 12.9%). Conclusion: Technical standards and exposure procedures need to be reorganized to improve the quality of chest radiographs. Reappraisal of all interpreters and continuous training of technicians are required. PMID:28083054
Matrix effect and recovery terminology issues in regulated drug bioanalysis.
Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard
2012-02-01
Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because it covers ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
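A minimal sketch of the two quantities discussed: the matrix factor as a ratio of analyte response in matrix to response in a neat solution, and its internal-standard-normalized variant. The function names and peak-area values below are illustrative assumptions, not taken from the guidance itself.

```python
def matrix_factor(response_in_matrix, response_in_neat_solution):
    """Ratio of analyte response in post-extraction spiked matrix to neat solution."""
    return response_in_matrix / response_in_neat_solution

def is_normalized_matrix_factor(mf_analyte, mf_internal_standard):
    """Internal-standard-normalized matrix factor."""
    return mf_analyte / mf_internal_standard

# Hypothetical peak areas: values near 1 suggest little ionization change.
mf_a = matrix_factor(9.2e5, 1.0e6)   # analyte
mf_is = matrix_factor(9.5e5, 1.0e6)  # internal standard
print(mf_a, is_normalized_matrix_factor(mf_a, mf_is))
```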
Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M.; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M.
2017-01-01
There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers’ interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-, di-, and monosaccharides and polyols [FODMAPs], high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff’s α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3–7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers. PMID:29113044
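Krippendorff's alpha, used here for the agreement estimates, can be computed with the third-party `krippendorff` Python package (an assumption for illustration; the authors' software is not stated). A toy sketch with made-up ordinal ratings:

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Rows = raters (providers), columns = units (journal x food-group ratings);
# np.nan marks missing ratings. Values are hypothetical likelihood ratings.
ratings = np.array([
    [1, 2, 3, np.nan, 4],
    [1, 3, 3, 2,      4],
    [2, 2, 4, 2,      np.nan],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```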
Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension
Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.
2009-01-01
Objective The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one method as compared with the other. Conclusion Depending on which version of the German Working Group’s reference standards is used for interpretation of ABPM data, the classification of an individual as having hypertension or normal blood pressure may vary. PMID:19433980
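The kappa agreement statistic reported above can be reproduced in outline with scikit-learn; the classifications below are invented for illustration, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications (1 = hypertensive, 0 = normotensive)
# of the same children under the original vs modified reference standards.
original = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
modified = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]

print(f"kappa = {cohen_kappa_score(original, modified):.2f}")
```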
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-19
... assist the office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic... considerations for standardization of image acquisition, image interpretation methods, and other procedures to help ensure imaging data quality. The draft guidance describes two categories of image acquisition and...
Untangling Autophagy Measurements: All Fluxed Up
Gottlieb, Roberta A.; Andres, Allen M.; Sin, Jon; Taylor, David
2015-01-01
Autophagy is an important physiological process in the heart, and alterations in autophagic activity can exacerbate or mitigate injury during various pathological processes. Methods to assess autophagy have changed rapidly as the field of research has expanded. As with any new field, methods and standards for data analysis and interpretation evolve as investigators acquire experience and insight. The purpose of this review is to summarize current methods to measure autophagy, selective mitochondrial autophagy (mitophagy), and autophagic flux. We examine several published studies where confusion arose in data interpretation, in order to illustrate the challenges. Finally, we discuss methods to assess autophagy in vivo and in patients. PMID:25634973
Electrocardiographic interpretation skills of cardiology residents: are they competent?
Sibbald, Matthew; Davies, Edward G; Dorian, Paul; Yu, Eric H C
2014-12-01
Achieving competency at electrocardiogram (ECG) interpretation among cardiology subspecialty residents has traditionally focused on interpreting a target number of ECGs during training. However, there is little evidence to support this approach. Further, there are no data documenting the competency of ECG interpretation skills among cardiology residents, who become de facto the gold standard in their practice communities. We tested 29 Cardiology residents from all 3 years in a large training program using a set of 20 ECGs collected from a community cardiology practice over a 1-month period. Residents interpreted half of the ECGs using a standard analytic framework, and half using their own approach. Residents were scored on the number of correct and incorrect diagnoses listed. Overall diagnostic accuracy was 58%. Of 6 potentially life-threatening diagnoses, residents missed 36% (123 of 348) including hyperkalemia (81%), long QT (52%), complete heart block (35%), and ventricular tachycardia (19%). Residents provided additional inappropriate diagnoses on 238 ECGs (41%). Diagnostic accuracy was similar between ECGs interpreted using an analytic framework vs ECGs interpreted without an analytic framework (59% vs 58%; F(1,1333) = 0.26; P = 0.61). Cardiology resident proficiency at ECG interpretation is suboptimal. Despite the use of an analytic framework, there remain significant deficiencies in ECG interpretation among Cardiology residents. A more systematic method of addressing these important learning gaps is urgently needed. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Patalino, Marianne
Problems in current course evaluation methods are discussed and an alternative method is described for the construction, analysis, and interpretation of a test to evaluate instructional programs. The method presented represents a different approach to the traditional overreliance on standardized achievement tests and the total scores they provide.…
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
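The distinction the commentary draws can be stated in two lines of code: the standard deviation describes the spread of individual measurements, while the standard error of the mean describes the precision of the estimated mean. A small sketch with hypothetical biomarker values:

```python
import numpy as np

x = np.array([4.1, 3.8, 5.0, 4.4, 4.7, 3.9])  # hypothetical biomarker measurements
sd = x.std(ddof=1)           # spread of individual measurements
sem = sd / np.sqrt(x.size)   # uncertainty in the estimate of the mean

# Use sd to describe how much single results vary (e.g., reference ranges);
# use sem to compare group means or to build confidence intervals on the mean.
print(f"sd = {sd:.3f}, sem = {sem:.3f}")
```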
Brownstein, Catherine A; Beggs, Alan H; Homer, Nils; Merriman, Barry; Yu, Timothy W; Flannery, Katherine C; DeChene, Elizabeth T; Towne, Meghan C; Savage, Sarah K; Price, Emily N; Holm, Ingrid A; Luquette, Lovelace J; Lyon, Elaine; Majzoub, Joseph; Neupert, Peter; McCallie, David; Szolovits, Peter; Willard, Huntington F; Mendelsohn, Nancy J; Temme, Renee; Finkel, Richard S; Yum, Sabrina W; Medne, Livija; Sunyaev, Shamil R; Adzhubey, Ivan; Cassa, Christopher A; de Bakker, Paul I W; Duzkale, Hatice; Dworzyński, Piotr; Fairbrother, William; Francioli, Laurent; Funke, Birgit H; Giovanni, Monica A; Handsaker, Robert E; Lage, Kasper; Lebo, Matthew S; Lek, Monkol; Leshchiner, Ignaty; MacArthur, Daniel G; McLaughlin, Heather M; Murray, Michael F; Pers, Tune H; Polak, Paz P; Raychaudhuri, Soumya; Rehm, Heidi L; Soemedi, Rachel; Stitziel, Nathan O; Vestecka, Sara; Supper, Jochen; Gugenmus, Claudia; Klocke, Bernward; Hahn, Alexander; Schubach, Max; Menzel, Mortiz; Biskup, Saskia; Freisinger, Peter; Deng, Mario; Braun, Martin; Perner, Sven; Smith, Richard J H; Andorf, Janeen L; Huang, Jian; Ryckman, Kelli; Sheffield, Val C; Stone, Edwin M; Bair, Thomas; Black-Ziegelbein, E Ann; Braun, Terry A; Darbro, Benjamin; DeLuca, Adam P; Kolbe, Diana L; Scheetz, Todd E; Shearer, Aiden E; Sompallae, Rama; Wang, Kai; Bassuk, Alexander G; Edens, Erik; Mathews, Katherine; Moore, Steven A; Shchelochkov, Oleg A; Trapane, Pamela; Bossler, Aaron; Campbell, Colleen A; Heusel, Jonathan W; Kwitek, Anne; Maga, Tara; Panzer, Karin; Wassink, Thomas; Van Daele, Douglas; Azaiez, Hela; Booth, Kevin; Meyer, Nic; Segal, Michael M; Williams, Marc S; Tromp, Gerard; White, Peter; Corsmeier, Donald; Fitzgerald-Butt, Sara; Herman, Gail; Lamb-Thrush, Devon; McBride, Kim L; Newsom, David; Pierson, Christopher R; Rakowsky, Alexander T; Maver, Aleš; Lovrečić, Luca; Palandačić, Anja; Peterlin, Borut; Torkamani, Ali; Wedell, Anna; Huss, Mikael; Alexeyenko, Andrey; Lindvall, Jessica M; Magnusson, Måns; Nilsson, Daniel; Stranneheim, Henrik; Taylan, Fulya; Gilissen, Christian; Hoischen, Alexander; van Bon, Bregje; Yntema, Helger; Nelen, Marcel; Zhang, Weidong; Sager, Jason; Zhang, Lu; Blair, Kathryn; Kural, Deniz; Cariaso, Michael; Lennon, Greg G; Javed, Asif; Agrawal, Saloni; Ng, Pauline C; Sandhu, Komal S; Krishna, Shuba; Veeramachaneni, Vamsi; Isakov, Ofer; Halperin, Eran; Friedman, Eitan; Shomron, Noam; Glusman, Gustavo; Roach, Jared C; Caballero, Juan; Cox, Hannah C; Mauldin, Denise; Ament, Seth A; Rowen, Lee; Richards, Daniel R; San Lucas, F Anthony; Gonzalez-Garay, Manuel L; Caskey, C Thomas; Bai, Yu; Huang, Ying; Fang, Fang; Zhang, Yan; Wang, Zhengyuan; Barrera, Jorge; Garcia-Lobo, Juan M; González-Lamuño, Domingo; Llorca, Javier; Rodriguez, Maria C; Varela, Ignacio; Reese, Martin G; De La Vega, Francisco M; Kiruluta, Edward; Cargill, Michele; Hart, Reece K; Sorenson, Jon M; Lyon, Gholson J; Stevenson, David A; Bray, Bruce E; Moore, Barry M; Eilbeck, Karen; Yandell, Mark; Zhao, Hongyu; Hou, Lin; Chen, Xiaowei; Yan, Xiting; Chen, Mengjie; Li, Cong; Yang, Can; Gunel, Murat; Li, Peining; Kong, Yong; Alexander, Austin C; Albertyn, Zayed I; Boycott, Kym M; Bulman, Dennis E; Gordon, Paul M K; Innes, A Micheil; Knoppers, Bartha M; Majewski, Jacek; Marshall, Christian R; Parboosingh, Jillian S; Sawyer, Sarah L; Samuels, Mark E; Schwartzentruber, Jeremy; Kohane, Isaac S; Margulies, David M
2014-03-25
There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity of the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.
Morris, Heather C; Monaco, Lisa A; Steele, Andrew; Wainwright, Norm
2010-10-01
Historically, colony-forming units as determined by plate cultures have been the standard unit for microbiological analysis of environmental samples, medical diagnostics, and products for human use. However, the time and materials required make plate cultures expensive and potentially hazardous in the closed environments of future NASA missions aboard the International Space Station and missions to other Solar System targets. The Limulus Amebocyte Lysate (LAL) assay is an established method for ensuring the sterility and cleanliness of samples in the meat-packing and pharmaceutical industries. Each of these industries has verified numerical requirements for the correct interpretation of results from this assay. The LAL assay is a rapid, point-of-use, verified assay that has already been approved by NASA Planetary Protection as an alternate, molecular method for the examination of outbound spacecraft. We hypothesize that standards for molecular techniques, similar to those used by the pharmaceutical and meat-packing industries, need to be set by space agencies to ensure accurate data interpretation and subsequent decision making. In support of this idea, we present research that has been conducted to relate the LAL assay to plate cultures, and we recommend values obtained from these investigations that could assist in interpretation and analysis of data obtained from the LAL assay.
Interpretation and classification of microvolt T wave alternans tests
NASA Technical Reports Server (NTRS)
Bloomfield, Daniel M.; Hohnloser, Stefan H.; Cohen, Richard J.
2002-01-01
Measurement of microvolt-level T wave alternans (TWA) during routine exercise stress testing now is possible as a result of sophisticated noise reduction techniques and analytic methods that have become commercially available. Even though this technology is new, the available data suggest that microvolt TWA is a potent predictor of arrhythmia risk in diverse disease states. As this technology becomes more widely available, physicians will be called upon to interpret microvolt TWA tracings. This review seeks to establish uniform standards for the clinical interpretation of microvolt TWA tracings.
Application of surface geophysics to ground-water investigations
Zohdy, Adel A.R.; Eaton, Gordon P.; Mabey, Don R.
1974-01-01
This manual reviews the standard methods of surface geophysics applicable to ground-water investigations. It covers electrical methods, seismic and gravity methods, and magnetic methods. The general physical principles underlying each method and its capabilities and limitations are described. Possibilities for non-uniqueness of interpretation of geophysical results are noted. Examples of actual use of the methods are given to illustrate applications and interpretation in selected geohydrologic environments. The objective of the manual is to provide the hydrogeologist with a sufficient understanding of the capabilities, limitations, and relative cost of geophysical methods to make sound decisions as to when their use is desirable. The manual also provides enough information for the hydrogeologist to work with a geophysicist in designing geophysical surveys that differentiate significant hydrogeologic changes.
Coordination and standardization of federal sedimentation activities
Glysson, G. Douglas; Gray, John R.
1997-01-01
... precipitation information critical to water resources management. Memorandum M-92-01 covers primarily freshwater bodies and includes activities, such as "development and distribution of consensus standards, field-data collection and laboratory analytical methods, data processing and interpretation, data-base management, quality control and quality assurance, and water-resources appraisals, assessments, and investigations." Research activities are not included.
Validating Automated Essay Scoring: A (Modest) Refinement of the "Gold Standard"
ERIC Educational Resources Information Center
Powers, Donald E.; Escoffery, David S.; Duchnowski, Matthew P.
2015-01-01
By far, the most frequently used method of validating (the interpretation and use of) automated essay scores has been to compare them with scores awarded by human raters. Although this practice is questionable, human-machine agreement is still often regarded as the "gold standard." Our objective was to refine this model and apply it to…
Standardization of Laboratory Methods for the PERCH Study
Karron, Ruth A.; Morpeth, Susan C.; Bhat, Niranjan; Levine, Orin S.; Baggett, Henry C.; Brooks, W. Abdullah; Feikin, Daniel R.; Hammitt, Laura L.; Howie, Stephen R. C.; Knoll, Maria Deloria; Kotloff, Karen L.; Madhi, Shabir A.; Scott, J. Anthony G.; Thea, Donald M.; Adrian, Peter V.; Ahmed, Dilruba; Alam, Muntasir; Anderson, Trevor P.; Antonio, Martin; Baillie, Vicky L.; Dione, Michel; Endtz, Hubert P.; Gitahi, Caroline; Karani, Angela; Kwenda, Geoffrey; Maiga, Abdoul Aziz; McClellan, Jessica; Mitchell, Joanne L.; Morailane, Palesa; Mugo, Daisy; Mwaba, John; Mwansa, James; Mwarumba, Salim; Nyongesa, Sammy; Panchalingam, Sandra; Rahman, Mustafizur; Sawatwong, Pongpun; Tamboura, Boubou; Toure, Aliou; Whistler, Toni; O’Brien, Katherine L.; Murdoch, David R.
2017-01-01
The Pneumonia Etiology Research for Child Health study was conducted across 7 diverse research sites and relied on standardized clinical and laboratory methods for the accurate and meaningful interpretation of pneumonia etiology data. Blood, respiratory specimens, and urine were collected from children aged 1–59 months hospitalized with severe or very severe pneumonia and community controls of the same age without severe pneumonia and were tested with an extensive array of laboratory diagnostic tests. A standardized testing algorithm and standard operating procedures were applied across all study sites. Site laboratories received uniform training, equipment, and reagents for core testing methods. Standardization was further assured by routine teleconferences, in-person meetings, site monitoring visits, and internal and external quality assurance testing. Targeted confirmatory testing and testing by specialized assays were done at a central reference laboratory. PMID:28575358
Grabenhenrich, L B; Reich, A; Bellach, J; Trendelenburg, V; Sprikkelman, A B; Roberts, G; Grimshaw, K E C; Sigurdardottir, S; Kowalski, M L; Papadopoulos, N G; Quirce, S; Dubakiene, R; Niggemann, B; Fernández-Rivas, M; Ballmer-Weber, B; van Ree, R; Schnadt, S; Mills, E N C; Keil, T; Beyer, K
2017-03-01
The conduct of oral food challenges as the preferred diagnostic standard for food allergy (FA) has been harmonized over recent years. However, documentation and interpretation of challenge results, particularly in research settings, are not sufficiently standardized to allow valid comparisons between studies. Our aim was to develop a diagnostic toolbox to capture and report clinical observations in double-blind placebo-controlled food challenges (DBPCFC). A group of experienced allergists, paediatricians, dieticians, epidemiologists and data managers developed generic case report forms and standard operating procedures for DBPCFCs and piloted them in three clinical centres. The follow-up of the EuroPrevall/iFAAM birth cohort and other iFAAM work packages applied these methods. A set of newly developed questionnaire and interview items captures the history of FA. Together with sensitization status, this forms the basis for the decision to perform a DBPCFC, following a standardized decision algorithm. A generic form, including details about severity and timing, captures signs and symptoms observed during or after the procedures. In contrast to the commonly used dichotomous outcome of FA vs no FA, the allergy status is interpreted in multiple categories to reflect the complexity of clinical decision-making. The proposed toolbox sets a standard for improved documentation and harmonized interpretation of DBPCFCs. Through detailed documentation and a common terminology for communicating outcomes, these tools aim to reduce the influence of the supervising physician's subjective judgment. All forms are publicly available for further evolution and free use in clinical and research settings. © 2016 The Authors. Allergy Published by John Wiley & Sons Ltd.
Moser, Rosemarie Scolaro; Schatz, Philip; Lichtenstein, Jonathan D
2015-01-01
Media coverage, litigation, and new legislation have resulted in a heightened awareness of the prevalence of sports concussion in both adult and youth athletes. Baseline and postconcussion testing is now commonly used for the assessment and management of sports-related concussion in schools and in youth sports leagues. With increased use of computerized neurocognitive sports concussion testing, there is a need for standards for proper administration and interpretation. To date, there has been a lack of standardized procedures by which assessments are administered. More specifically, individuals who are not properly trained often interpret test results, and their methods of interpretation vary considerably. The purpose of this article is to outline factors affecting the validity of test results, to provide examples of misuse and misinterpretation of test results, and to communicate the need to administer testing in the most effective and useful manner. An increase in the quality of test administration and application may serve to decrease the prevalence of invalid test results and increase the accuracy and utility of baseline test results if an athlete sustains a concussion. Standards for test use should model the American Psychological Association and Centers for Disease Control and Prevention guidelines, as well as the recent findings of the joint position paper on computerized neuropsychological assessment devices.
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
Dilute Russell Viper Venom Time analysis in a Haematology Laboratory: An audit.
Kruger, W; Meyer, P W A; Nel, J G
2018-04-17
To determine whether the current set of evaluation criteria used for dilute Russell Viper Venom Time (dRVVT) investigations in the routine laboratory meets expectations, and to identify possible shortcomings. All dRVVT assays requested from January 2015 to December 2015 were appraised in this cross-sectional study. The raw data panels were compared with the new reference interval, established in 2016, to determine the sequence of assays that should have been performed. The interpretive comments were audited, and false-negative reports identified. Interpretive comments according to three interpretation guidelines were compared. The reagent cost per assay was determined, and reagent cost wastage due to redundant tests was calculated. Only ~9% of dRVVT results authorized during 2015 had an interpretive comment included in the report, and ~15% of these results were false-negative interpretations. There was a statistically significant difference in interpretive comments between the three interpretation methods. Redundant mixing tests resulted in R 7477.91 (~11%) of reagent cost wastage in 2015. We demonstrated clear deficiencies in our own practice and established a standardized workflow that will potentially render our service more efficient and cost-effective, aiding clinicians in making improved treatment decisions and diagnoses. Furthermore, it is essential that standard operating procedures be kept up to date and executed by all staff in the laboratory. © 2018 John Wiley & Sons Ltd.
ICADx: interpretable computer aided diagnosis of breast masses
NASA Astrophysics Data System (ADS)
Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man
2018-02-01
In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability in classifying breast masses. Deep learning has recently been applied successfully to medical image analysis, including CADx. Existing deep learning based CADx approaches, however, are limited in their ability to explain the diagnostic decision. In real clinical practice, clinical decisions should be accompanied by a reasonable explanation, so current deep learning approaches to CADx are limited for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The framework is devised as a generative adversarial network, consisting of an interpretable diagnosis network and a synthetic lesion generative network, which learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that both networks improve. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the ICADx framework could provide interpretability of masses as well as mass classification, mainly because the method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the ICADx framework could be a promising approach to developing CADx systems.
Nanayakkara, Shane; Weiss, Heike; Bailey, Michael; van Lint, Allison; Cameron, Peter; Pilcher, David
2014-11-01
Time spent in the emergency department (ED) before admission to hospital is often considered an important key performance indicator (KPI). Throughout Australia and New Zealand, there is no standard definition of 'time of admission' for patients admitted through the ED. By using data submitted to the Australian and New Zealand Intensive Care Society Adult Patient Database, the aim was to determine the differing methods used to define hospital admission time and assess how these impact on the calculation of time spent in the ED before admission to an intensive care unit (ICU). Between March and December of 2010, 61 hospitals were contacted directly. Decision methods for determining time of admission to the ED were matched to 67,787 patient records. Univariate and multivariate analyses were conducted to assess the relationship between decision method and the reported time spent in the ED. Four mechanisms of recording time of admission were identified, with time of triage being the most common (28/61 hospitals). Reported median time spent in the ED varied from 2.5 (IQR 0.83-5.35) to 5.1 h (2.82-8.68), depending on the decision method. After adjusting for illness severity, hospital type and location, decision method remained a significant factor in determining measurement of ED length of stay. Different methods are used in Australia and New Zealand to define admission time to hospital. Professional bodies, hospitals and jurisdictions should ensure standardisation of definitions for appropriate interpretation of KPIs as well as for the interpretation of studies assessing the impact of admission time to ICU from the ED. WHAT IS KNOWN ABOUT THE TOPIC?: There are standards for the maximum time spent in the ED internationally, but these standards vary greatly across Australia. The definition of such a standard is critically important not only to patient care, but also in the assessment of hospital outcomes. Key performance indicators rely on quality data to improve decision-making. WHAT DOES THIS PAPER ADD?: This paper quantifies the variability of times measured and analyses why the variability exists. It also discusses the impact of this variability on assessment of outcomes and provides suggestions to improve standardisation. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS?: This paper provides a clearer view on standards regarding length of stay in the ICU, highlighting the importance of key performance indicators, as well as the quality of data that underlies them. This will lead to significant changes in the way we standardise and interpret data regarding length of stay.
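To illustrate how the choice of "time of admission" shifts the reported ED length of stay, the sketch below computes LOS for the same hypothetical patients under two different start events; the event names and timestamps are invented for illustration.

```python
import pandas as pd

ed = pd.DataFrame({
    "triage":       pd.to_datetime(["2010-03-01 08:00", "2010-03-01 09:30"]),
    "bed_request":  pd.to_datetime(["2010-03-01 10:15", "2010-03-01 12:00"]),
    "icu_arrival":  pd.to_datetime(["2010-03-01 13:40", "2010-03-01 16:05"]),
})

# The same patients yield different "ED length of stay" values depending on
# which event is taken as the admission time.
for start in ["triage", "bed_request"]:
    los_hours = (ed["icu_arrival"] - ed[start]).dt.total_seconds() / 3600
    print(f"start = {start}: median ED LOS = {los_hours.median():.2f} h")
```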
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...
A review of contemporary methods for the presentation of scientific uncertainty.
Makinson, K A; Hamby, D M; Edwards, J A
2012-12-01
Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.
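Several of the graphic displays listed above are one-liners in matplotlib; the following sketch draws a box-and-whisker plot, an empirical probability density, and an empirical cumulative distribution for the same simulated sample (illustrative only, not the review's examples).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=1000)  # simulated data

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
axes[0].boxplot(sample)                       # box-and-whisker summary
axes[0].set_title("Box plot")
axes[1].hist(sample, bins=40, density=True)   # empirical probability density
axes[1].set_title("PDF (histogram)")
axes[2].plot(np.sort(sample),
             np.arange(1, sample.size + 1) / sample.size)  # empirical CDF
axes[2].set_title("CDF")
plt.tight_layout()
plt.show()
```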
Standards for reporting fish toxicity tests
Cope, O.B.
1961-01-01
The growing impetus of studies on fish and pesticides focuses attention on the need for standardized reporting procedures. Good methods have been developed for laboratory and field procedures in testing programs and in statistical features of assay experiments; and improvements are being made on methods of collecting and preserving fish, invertebrates, and other materials exposed to economic poisons. On the other hand, the reporting of toxicity data in a complete manner has lagged behind, and today's literature is little improved over yesterday's with regard to completeness and susceptibility to interpretation.
USDA-ARS's Scientific Manuscript database
Reverse Transcription quantitative Polymerase Chain Reaction (qRT-PCR) is a popular method for measuring transcript abundance. The most commonly used method of interpretation is relative quantification and thus necessitates the use of normalization controls (i.e. reference genes) to standardize tran...
Lu, Michael T; Meyersohn, Nandini M; Mayrhofer, Thomas; Bittner, Daniel O; Emami, Hamed; Puchner, Stefan B; Foldyna, Borek; Mueller, Martin E; Hearne, Steven; Yang, Clifford; Achenbach, Stephan; Truong, Quynh A; Ghoshhajra, Brian B; Patel, Manesh R; Ferencik, Maros; Douglas, Pamela S; Hoffmann, Udo
2018-04-01
Purpose To assess concordance and relative prognostic utility between central core laboratory and local site interpretation for significant coronary artery disease (CAD) and cardiovascular events. Materials and Methods In the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) trial, readers at 193 North American sites interpreted coronary computed tomographic (CT) angiography as part of the clinical evaluation of stable chest pain. Readers at a central core laboratory also interpreted CT angiography blinded to clinical data, site interpretation, and outcomes. Significant CAD was defined as stenosis greater than or equal to 50%; cardiovascular events were defined as a composite of cardiovascular death or myocardial infarction. Results In 4347 patients (51.8% women; mean age ± standard deviation, 60.4 years ± 8.2), core laboratory and site interpretations were discordant in 16% (683 of 4347), most commonly because of a finding of significant CAD by site but not by core laboratory interpretation (80%, 544 of 683). Overall, core laboratory interpretation resulted in 41% fewer patients being reported as having significant CAD (14%, 595 of 4347 vs 23%, 1000 of 4347; P < .001). Over a median follow-up period of 25 months, 1.3% (57 of 4347) sustained myocardial infarction or cardiovascular death. The C statistic for future myocardial infarction or cardiovascular death was 0.61 (95% confidence interval [CI]: 0.54, 0.68) for the core laboratory and 0.63 (95% CI: 0.56, 0.70) for the sites. Conclusion Compared with interpretation by readers at 193 North American sites, standardized core laboratory interpretation classified 41% fewer patients as having significant CAD. © RSNA, 2017 Online supplemental material is available for this article. Clinical trial registration no. NCT01174550.
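The two headline quantities above, percent concordance between readers and the C statistic for events, can be sketched as follows with invented toy data; for a binary predictor, scikit-learn's roc_auc_score gives the C statistic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = MI or cardiovascular death during follow-up.
outcome  = np.array([0, 0, 1, 0, 1, 0, 0, 1])
core_lab = np.array([0, 1, 1, 0, 1, 0, 0, 0])  # significant CAD per core lab
site     = np.array([1, 1, 1, 0, 1, 0, 1, 0])  # significant CAD per site

concordance = (core_lab == site).mean()        # fraction of agreeing reads
print(f"concordance = {concordance:.2f}")
print(f"C (core lab) = {roc_auc_score(outcome, core_lab):.2f}")
print(f"C (site)     = {roc_auc_score(outcome, site):.2f}")
```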
Interpretation of IEEE-854 floating-point standard and definition in the HOL system
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
1995-01-01
The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided into two parts: the first presents the interpretation, and the second the description in HOL.
Metrics for Offline Evaluation of Prognostic Performance
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2010-01-01
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements of different applications, time scales, available information, and domain dynamics, to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
Nutrimetry: BMI assessment as a function of development.
Selem-Solís, Jorge Enrique; Alcocer-Gamboa, Alberto; Hattori-Hara, Mónica; Esteve-Lanao, Jonathan; Larumbe-Zabala, Eneko
2018-02-01
Adequate nutritional assessment is required to fight malnutrition (undernutrition and overfeeding) in children and adolescents. For this, joint interpretation of certain indicators (body mass index [BMI], height, weight, etc.) is recommended. This is done clinically, but not epidemiologically. The aim of this paper is to present "nutrimetry", a simple method that crosses anthropometric information, allowing for bivariate interpretation at both levels (clinical and epidemiological). Data from 41,001 children and adolescents aged 0-19 years, taken from Mexico's National Health and Nutrition Survey 2012, were analyzed. The data crossed were BMI-for-age z-scores (BAZ) and height-for-age z-scores (HAZ) according to the World Health Organization (WHO) standards. Conditional prevalences were calculated in a 3×3 grid and compared with expected values. This method identified subgroups in each BAZ category, showing the heterogeneity of the sample with regard to WHO standards for HAZ and nutritional status. According to the method, nutritional status patterns differed among Mexican states and age and sex groups. Nutrimetry is a helpful and accessible tool for use in epidemiology. It allows for detecting unexpected distributions of conditional prevalences, its graphical representation facilitates communication of results by geographic area, and the enriched interpretation of BAZ helps guide intervention actions according to their codes. Copyright © 2017 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
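A hedged sketch of the 3×3 "nutrimetry" grid: categorize HAZ and BAZ into three bands and tabulate conditional prevalences. The ±2 z cut-offs and simulated data below are illustrative assumptions; the paper follows WHO cut-offs, which differ by indicator and age.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"haz": rng.normal(size=500),   # simulated height-for-age z
                   "baz": rng.normal(size=500)})  # simulated BMI-for-age z

# Illustrative 3-category cuts at +/-2 z for both indicators.
cats = ["low", "normal", "high"]
df["haz_cat"] = pd.cut(df["haz"], [-np.inf, -2, 2, np.inf], labels=cats)
df["baz_cat"] = pd.cut(df["baz"], [-np.inf, -2, 2, np.inf], labels=cats)

# 3x3 grid of conditional prevalences: P(HAZ category | BAZ category).
grid = pd.crosstab(df["baz_cat"], df["haz_cat"], normalize="index")
print(grid.round(2))
```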
ERIC Educational Resources Information Center
Fillingim, Jennifer G.; Barlow, Angela T.
2010-01-01
Mathematics educators promote student engagement in the Process Standards and create problem-solving tasks and facilitate discussions to help their students develop strengths in explaining their methods, using and interpreting multiple representations, and making connections between topics. They are excited and encouraged when they see students…
REVIEW OF QUANTITATIVE STANDARDS AND GUIDELINES FOR FUNGI IN INDOOR AIR
Exposure to fungal aerosols clearly causes human disease. However, methods for assessing exposure remain poorly understood, and guidelines for interpreting data are often contradictory. The purposes of this paper are to review and compare existing guidelines for indoor airborne...
ERIC Educational Resources Information Center
Alase, Abayomi
2017-01-01
This interpretative phenomenological analysis (IPA) study investigated and interpreted the Common Core State Standards program (the phenomenon), which has been the dominant topic of discussion among educators across the country since the program's inauguration in the 2014/2015 school session. Common Core State Standards (CCSS) was a…
The anesthesia and brain monitor (ABM). Concept and performance.
Kay, B
1984-01-01
Three integral components of the ABM, the frontalis electromyogram (EMG), the processed unipolar electroencephalogram (EEG) and the neuromuscular transmission monitor (NMT) were compared with standard research methods, and their clinical utility indicated. The EMG was compared with the method of Dundee et al (2) for measuring the induction dose of thiopentone; the EEG was compared with the SLE Galileo E8-b and the NMT was compared with the Medelec MS6. In each case correlation of results was extremely high, and the ABM offered some advantages over the standard research methods. We conclude that each of the integral units of the ABM is simple to apply and interpret, yet as accurate as standard apparatus used for research. In addition the ABM offers excellent display and recording facilities and alarm systems.
A Proposed Interpretation of the ISO 10015 and Implications for HRD Theory and Research
ERIC Educational Resources Information Center
Jacobs, Ronald L.; Wang, Bryan
2007-01-01
While recent discussions of ISO 10015- Guidelines for Training have done much to promote the need for the standard, no interpretation of the standard has been presented that would guide its actual implementation. This paper proposes an interpretation of the ISO 10015 based on the specifications of the guideline and two other standards related to…
Harding, Keith; Benson, Erica E
2015-01-01
Standard operating procedures are a systematic way of making sure that biopreservation processes, tasks, protocols, and operations are correctly and consistently performed. They are the basic documents of biorepository quality management systems and are used in quality assurance, control, and improvement. Methodologies for constructing workflows and writing standard operating procedures and work instructions are described using a plant cryopreservation protocol as an example. This chapter is pertinent to other biopreservation sectors because how methods are written, interpreted, and implemented can affect the quality of storage outcomes.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...
Code of Federal Regulations, 2013 CFR
2013-04-01
... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...
Code of Federal Regulations, 2012 CFR
2012-04-01
... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...
Perry, Nicholas S; Baucom, Katherine J W; Bourne, Stacia; Butner, Jonathan; Crenshaw, Alexander O; Hogan, Jasara N; Imel, Zac E; Wiltshire, Travis J; Baucom, Brian R W
2017-08-01
Researchers commonly use repeated-measures actor-partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce an interdependent pattern that emerges over time. Additionally, positive within-person, or actor, effects are commonly misinterpreted as indicating growth from one time point to the next when they actually represent decline. We suggest that change-as-outcome RM-APIMs and vector field diagrams (VFDs) can be used to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid the interpretation of standard RM-APIMs, and provides a tutorial in making VFDs using multilevel modeling. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
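As an illustration of the recommended presentation, the sketch below draws a vector field diagram from change-as-outcome RM-APIM coefficients. The coefficient values are invented for demonstration; in practice they would come from a fitted multilevel model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical change-as-outcome RM-APIM coefficients (illustrative only):
# change in each partner's score as a function of both partners' prior scores.
a1, p12 = -0.30, 0.15   # partner 1: actor effect, partner effect
a2, p21 = -0.25, 0.20   # partner 2: partner effect, actor effect

x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
dx = a1 * x + p12 * y   # predicted change for partner 1
dy = p21 * x + a2 * y   # predicted change for partner 2

plt.quiver(x, y, dx, dy)
plt.xlabel("Partner 1 score (centered)")
plt.ylabel("Partner 2 score (centered)")
plt.title("Vector field diagram of a change-as-outcome RM-APIM")
plt.show()
```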
Orthoclinostatic test as one of the methods for evaluating the human functional state
NASA Technical Reports Server (NTRS)
Doskin, V. A.; Gissen, L. D.; Bomshteyn, O. Z.; Merkin, E. N.; Sarychev, S. B.
1980-01-01
The possible use of different methods to evaluate autonomic regulation in hygienic studies was examined. The simplest and most objective tests were selected. It is shown that the use of the optimized standards not only makes it possible to detect unfavorable shifts earlier, but also permits a quantitative characterization of the degree of impairment in the state of the organism. Precise interpretation of the observed shifts is possible. Results indicate that the standards can serve as one of the criteria for evaluating the state and can be widely used in hygienic practice.
Uncertainty loops in travel-time tomography from nonlinear wave physics.
Galetti, Erica; Curtis, Andrew; Meles, Giovanni Angelo; Baptie, Brian
2015-04-10
Estimating image uncertainty is fundamental to guiding the interpretation of geoscientific tomographic maps. We reveal novel uncertainty topologies (loops), which indicate that while the speeds of both low- and high-velocity anomalies may be well constrained, their locations tend to remain uncertain. The effect is widespread: loops dominate around a third of United Kingdom Love wave tomographic uncertainties, changing the nature of interpretation of the observed anomalies. Loops exist due to second- and higher-order aspects of wave physics; hence, although such structures must exist in many tomographic studies in the physical sciences and medicine, they are unobservable using standard linearized methods. Higher-order methods might fruitfully be adopted.
SSMA Science Reviewers' Forecasts for the Future of Science Education.
ERIC Educational Resources Information Center
Jinks, Jerry; Hoffer, Terry
1989-01-01
Described is a study which was conducted as an exploratory assessment of science reviewers' perceptions for the future of science education. Arrives at interpretations for identified categories of computers and high technology, science curriculum, teacher education, training, certification, standards, teaching methods, and materials. (RT)
Data Combination and Instrumental Variables in Linear Models
ERIC Educational Resources Information Center
Khawand, Christopher
2012-01-01
Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…
Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael
2017-01-01
Objective: This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. Materials and Methods: PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. Results: At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB’s interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. Discussion: An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. Conclusion: The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. PMID:27789569
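A programmatic query might look like the sketch below. The route and parameter names are assumptions for illustration only; the actual endpoints are documented by the PMKB API itself.

```python
import requests

# Sketch of a programmatic PMKB query; the endpoint path and parameters
# below are assumptions for illustration -- consult the live API docs.
BASE = "https://pmkb.weill.cornell.edu/api"           # assumed base path
resp = requests.get(f"{BASE}/interpretations",        # hypothetical route
                    params={"gene": "EGFR"}, timeout=30)
resp.raise_for_status()
for item in resp.json():                              # assumed JSON list
    print(item)
```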
Describing the epidemiology of rheumatic diseases: methodological aspects.
Guillemin, Francis
2012-03-01
Producing descriptive epidemiology data is essential to understanding the burden of rheumatic diseases (prevalence) and their dynamics in the population (incidence). No matter how simple such indicators may look, the correct collection of data and the appropriate interpretation of the results face several challenges: distinguishing indicators, facing the costs of obtaining data, using appropriate definitions, identifying optimal sources of data, choosing among many survey methods, dealing with the precision of estimates, and standardizing results. This study describes the underlying methodological difficulties to be overcome so as to make descriptive indicators reliable and interpretable.
48 CFR 9904.409-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.409-61 Section 9904.409-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.409-61 Interpretation. [Reserved] ...
48 CFR 9904.407-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.407-61 Section 9904.407-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.407-61 Interpretation. [Reserved] ...
48 CFR 9904.405-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.405-61 Section 9904.405-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.405-61 Interpretation. [Reserved] ...
48 CFR 9904.402-61 - Interpretation.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-61 Section 9904.402-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.402-61 Interpretation. (a) 9904.402, Cost Accounting...
48 CFR 9904.410-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.410-61 Section 9904.410-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.410-61 Interpretation. [Reserved] ...
48 CFR 9904.404-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.404-61 Section 9904.404-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.404-61 Interpretation. [Reserved] ...
48 CFR 9904.408-61 - Interpretation. [Reserved]
Code of Federal Regulations, 2010 CFR
2010-10-01
...] 9904.408-61 Section 9904.408-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.408-61 Interpretation. [Reserved] ...
Erdoğan, Zeynep; Abdülrezzak, Ümmühan; Silov, Güler; Özdal, Ayşegül; Turhal, Özgül
2014-01-01
Objective: The aim of this study was to investigate variability in the interpretation of parenchymal abnormalities and to assess differences in the interpretation of routine renal scintigraphic findings on posterior-view technetium-99m dimercaptosuccinic acid (pvDMSA) scans and parenchymal-phase technetium-99m mercaptoacetyltriglycine (ppMAG3) scans, using standard criteria to achieve standardization, semiquantitative evaluation, and more accurate correlation. Materials and Methods: Two experienced nuclear medicine physicians independently and retrospectively interpreted pvDMSA scans of 204 and ppMAG3 scans of 102 pediatric patients. Comparisons were made by visual inspection of pvDMSA and ppMAG3 scans using a grading system modified from Itoh et al., in which anatomical damage of the renal parenchyma was classified into six types (Grades 0-V). Agreement rates were calculated with Kendall correlation (tau-b) analysis. Results: Excellent agreement was found for DMSA grade readings (DMSA-GR) (tau-b = 0.827) and good agreement for MAG3 grade readings (MAG3-GR) (tau-b = 0.790) between the two observers. Most clear parenchymal lesions detected on pvDMSA and ppMAG3 scans were identified equally by both observers. Studies with negative or minimal lesions reduced correlation for both DMSA-GR and MAG3-GR. Conclusion: Our grading system can be used to standardize reports. We conclude that standardization of criteria and terminology in interpretations may yield higher interobserver consistency and improve the reproducibility and objectivity of renal scintigraphy reports. PMID:24761059
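For reference, Kendall's tau-b (the agreement statistic used above) can be computed directly with SciPy, which applies the tie-corrected tau-b variant by default; the grade vectors below are invented for illustration.

```python
from scipy.stats import kendalltau

# Interobserver agreement on ordinal grades (0-V mapped to 0-5);
# kendalltau handles tied grades via the tau-b correction.
reader1 = [0, 1, 1, 2, 3, 5, 4, 2, 0, 1]   # illustrative grade readings
reader2 = [0, 1, 2, 2, 3, 5, 4, 1, 0, 1]
tau, p = kendalltau(reader1, reader2)
print(f"tau-b = {tau:.3f} (p = {p:.4f})")
```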
Mobini, Sirous; Mackintosh, Bundy; Illingworth, Jo; Gega, Lina; Langdon, Peter; Hoppitt, Laura
2014-01-01
Background and objectives This study examines the effects of a single session of Cognitive Bias Modification to induce positive interpretative bias (CBM-I), using standard or explicit instructions, and an analogue of a computer-administered CBT (c-CBT) program on modifying cognitive biases and social anxiety. Methods A sample of 76 volunteers with social anxiety attended a research site. At both pre- and post-test, participants completed two computer-administered tests of interpretative and attentional biases and a self-report measure of social anxiety. Participants in the training conditions completed a single session of either standard or explicit CBM-I positive training and a c-CBT program. Participants in the Control (no training) condition completed a CBM-I neutral task that matched the active CBM-I intervention in format and duration but did not encourage positive disambiguation of socially ambiguous or threatening scenarios. Results Participants in both CBM-I programs (with either standard or explicit instructions) and the c-CBT condition exhibited more positive interpretations of ambiguous social scenarios at post-test and one-week follow-up compared with the Control condition. Moreover, the results showed that CBM-I and c-CBT, to some extent, changed negative attention biases in a positive direction. Furthermore, the results showed that both CBM-I training conditions and c-CBT reduced social anxiety symptoms at one-week follow-up. Limitations This study used a single session of CBM-I training; a multi-session intervention might result in more durable positive CBM-I changes. Conclusions A computerised single session of CBM-I and an analogue c-CBT program reduced negative interpretative biases and social anxiety. PMID:24412966
Video Measurements: Quantity or Quality
ERIC Educational Resources Information Center
Zajkov, Oliver; Mitrevski, Boce
2012-01-01
Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…
Washington English Language Proficiency Assessment (WELPA). Form C 2015. Interpretation Guide
ERIC Educational Resources Information Center
Washington Office of Superintendent of Public Instruction, 2015
2015-01-01
The "Washington English Language Proficiency Assessment" (WELPA) is a No Child Left Behind (NCLB)-compliant instrument that is used in Grades K-12 as a formal and standardized method of measuring language proficiency. The test results provide important information for classifying English Language Learners (ELLs) and subsequently for…
42 CFR 37.52 - Method of obtaining definitive interpretations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... other diseases must be demonstrated by those physicians who desire to be B Readers by taking and passing... specified by NIOSH. Each physician who desires to take the digital version of the examination will be provided a complete set of the current NIOSH-approved standard reference digital radiographs. Physicians...
ERIC Educational Resources Information Center
Qi, Jing
2015-01-01
Transnational education seeks equivalence in standards and/or relevance of outcomes through the transfer of Western theories, concepts and methods. Utilising a critique-interpretative approach, Jing Qi argues that equivalence/relevance-oriented approaches to transnational education assume the legitimacy of the global knowledge hierarchy.…
30 CFR 784.200 - Interpretive rules related to General Performance Standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... RECLAMATION AND OPERATION PLAN § 784.200 Interpretive rules related to General Performance Standards. The... ENFORCEMENT, DEPARTMENT OF THE INTERIOR SURFACE COAL MINING AND RECLAMATION OPERATIONS PERMITS AND COAL... Surface Mining Reclamation and Enforcement. (a) Interpretation of § 784.15: Reclamation plan: Postmining...
Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M
2017-08-01
Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five (18)F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
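The quantitative-read rule described above reduces to a small amount of code. The sketch below applies the study's stated threshold (at least 2 of 6 regions with SUV ratio > 1.1); the ROI names and values are placeholders.

```python
# "Elevated" if at least 2 of 6 regional SUV ratios exceed 1.1,
# per the rule stated in the abstract. ROI names are placeholders.
def quantitative_read(suvr_by_roi, threshold=1.1, min_regions=2):
    return sum(v > threshold for v in suvr_by_roi.values()) >= min_regions

scan = {f"roi_{i}": v for i, v in
        enumerate([1.05, 1.12, 1.08, 1.15, 0.98, 1.09])}
print("elevated" if quantitative_read(scan) else "nonelevated")
```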
Quintana, D S; Alvares, G A; Heathers, J A J
2016-01-01
The number of publications investigating heart rate variability (HRV) in psychiatry and the behavioral sciences has increased markedly in the last decade. In addition to the significant debates surrounding ideal methods to collect and interpret measures of HRV, standardized reporting of methodology in this field is lacking. Commonly cited recommendations were designed well before recent calls to improve research communication and reproducibility across disciplines. In an effort to standardize reporting, we propose the Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH), a checklist with four domains: participant selection, interbeat interval collection, data preparation and HRV calculation. This paper provides an overview of these four domains and why their standardized reporting is necessary to suitably evaluate HRV research in psychiatry and related disciplines. Adherence to these communication guidelines will help expedite the translation of HRV research into a potential psychiatric biomarker by improving interpretation, reproducibility and future meta-analyses. PMID:27163204
Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael
1999-01-01
Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
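In practice, harvesting acquisition-context attributes for indexing can start from the DICOM header itself. A minimal sketch using the pydicom library is shown below; the file name and the particular attributes selected are illustrative assumptions.

```python
import pydicom

# Read only the header (no pixel data) and pull a few acquisition-context
# attributes as indexing keys; the attribute selection is illustrative.
ds = pydicom.dcmread("example.dcm", stop_before_pixels=True)
index_record = {
    "modality": ds.get("Modality"),
    "body_part": ds.get("BodyPartExamined"),
    "study_desc": ds.get("StudyDescription"),
    "sop_uid": ds.get("SOPInstanceUID"),
}
print(index_record)
```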
48 CFR 9901.305 - Requirements for standards and interpretive rulings.
Code of Federal Regulations, 2014 CFR
2014-10-01
... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...
48 CFR 9901.305 - Requirements for standards and interpretive rulings.
Code of Federal Regulations, 2013 CFR
2013-10-01
... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...
48 CFR 9901.305 - Requirements for standards and interpretive rulings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...
48 CFR 9901.305 - Requirements for standards and interpretive rulings.
Code of Federal Regulations, 2012 CFR
2012-10-01
... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...
48 CFR 9901.305 - Requirements for standards and interpretive rulings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...
48 CFR 9904.403-61 - Interpretation.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-61 Section 9904.403-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.403-61 Interpretation. (a) Questions have arisen as to...
Standardizing Interpretive Training to Create a More Meaningful Visitor Experience
ERIC Educational Resources Information Center
Carr, Rob
2016-01-01
Implementing a standardized interpretive training and mentoring program across multiple departments has helped create a shared language that staff and volunteers use to collaborate on and evaluate interpretive programs and products. This has led to more efficient and effective training and measurable improvements in the quality of the visitor's…
Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A
2017-04-01
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
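A bare-bones version of the time-resolved decoding pipeline discussed above is sketched below with scikit-learn: a classifier is trained and cross-validated independently at each time point. The data shapes and the choice of logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Time-resolved decoding sketch: fit and cross-validate a classifier at
# each time point. Shapes are illustrative (trials x channels x times).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 32, 50))       # simulated MEG epochs
y = np.repeat([0, 1], 50)                # two stimulus classes
accuracy = [cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, :, t], y, cv=5).mean()
            for t in range(X.shape[2])]
print(np.round(accuracy[:5], 2))         # chance level ~0.5 for random data
```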
Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.
Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël
2016-09-01
A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine donor-recipient pairings cross-matched with the RapidVet-H gel tube test, and results were compared with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess the sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared with the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading; weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.
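For clarity, the relative specificity reported above is simply the proportion of reference-method-compatible pairings that the index test also calls compatible; a minimal sketch with invented counts follows.

```python
# Relative specificity of an index test against a reference method,
# computed from a 2x2 table; the counts below are invented examples.
def relative_specificity(both_neg, index_pos_ref_neg):
    ref_negatives = both_neg + index_pos_ref_neg
    return both_neg / ref_negatives

print(f"{relative_specificity(both_neg=98, index_pos_ref_neg=4):.1%}")
```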
Minamimoto, Ryogo; Fayad, Luis; Advani, Ranjana; Vose, Julie; Macapinlac, Homer; Meza, Jane; Hankins, Jordan; Mottaghy, Felix; Juweid, Malik
2016-01-01
Purpose To compare the performance characteristics of interim fluorine 18 (18F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) (after two cycles of chemotherapy) by using the most prominent standardized interpretive criteria (including International Harmonization Project [IHP] criteria, European Organization for Research and Treatment of Cancer [EORTC] criteria, and PET Response Criteria in Solid Tumors [PERCIST]) versus those of interim 18F fluorothymidine (FLT) PET/CT and simple visual interpretation. Materials and Methods This HIPAA-compliant prospective study was approved by the institutional review boards, and written informed consent was obtained. Patients with newly diagnosed diffuse large B-cell lymphoma (DLBCL) underwent both FLT and FDG PET/CT 18–24 days after two cycles of rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone or rituximab, etoposide, prednisone, vincristine, cyclophosphamide, and doxorubicin. For FDG PET/CT interpretation, IHP criteria, EORTC criteria, PERCIST, Deauville criteria, standardized uptake value, total lesion glycolysis, and metabolic tumor volume were used. FLT PET/CT images were interpreted with visual assessment by two reviewers in consensus. The interim (after cycle 2) FDG and FLT PET/CT studies were then compared with the end-of-treatment FDG PET/CT studies to determine which interim examination and/or criteria best predicted the result after six cycles of chemotherapy. Results From November 2011 to May 2014, there were 60 potential patients for inclusion, of whom 46 patients (24 men [mean age, 60.9 years ± 13.7; range, 28–78 years] and 22 women [mean age, 57.2 years ± 13.4; range, 25–76 years]) fulfilled the criteria. Thirty-four patients had complete response, and 12 had residual disease at the end of treatment. FLT PET/CT had a significantly higher positive predictive value (PPV) (91%) in predicting residual disease than did any FDG PET/CT interpretation method (42%–46%). No difference in negative predictive value (NPV) was found between FLT PET/CT (94%) and FDG PET/CT (82%–95%), regardless of the interpretive criteria used. FLT PET/CT showed statistically higher (P < .001–.008) or similar NPVs compared with FDG PET/CT. Conclusion Early interim FLT PET/CT had a significantly higher PPV than standardized FDG PET/CT–based interpretation for therapeutic response assessment in DLBCL. © RSNA, 2016 Online supplemental material is available for this article. PMID:26854705
48 CFR 9904.401-61 - Interpretation.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-61 Section 9904.401-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.401-61 Interpretation. (a) 9904.401, Cost Accounting... accounting practices used in accumulating and reporting costs.” (b) In estimating the cost of direct material...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-03
... 1250-ZA00 Interpretive Standards for Systemic Compensation Discrimination and Voluntary Guidelines for... Order 11246 with respect to Systemic Compensation Discrimination (Standards) and Voluntary Guidelines... to Systemic Compensation Discrimination (Voluntary Guidelines). OFCCP is proposing to rescind the...
Jets and Metastability in Quantum Mechanics and Quantum Field Theory
NASA Astrophysics Data System (ADS)
Farhi, David
I give a high-level overview of the state of particle physics in the introduction, accessible without any background in the field. I discuss improvements of theoretical and statistical methods used for collider physics. These include telescoping jets, a statistical method which was claimed to allow jet searches to increase their sensitivity by considering several interpretations of each event. We find that indeed multiple interpretations extend the power of searches, for both simple counting experiments and powerful multivariate fitting experiments, at least for h → bb̄ at the LHC. Then I propose a method for automation of background calculations using SCET by appropriating the technology of Monte Carlo generators such as MadGraph. In the third chapter I change gears and discuss the future of the universe. It has long been known that our pocket of the standard model is unstable; there is a lower-energy configuration in a remote part of the configuration space, to which our universe will, eventually, decay. While the timescales involved are on the order of 10^400 years (depending on how exactly one counts) and thus of no immediate worry, I discuss the shortcomings of the standard methods and propose a more physically motivated derivation for the decay rate. I then make various observations about the structure of decays in quantum field theory.
A public resource facilitating clinical use of genomes
Ball, Madeleine P.; Thakuria, Joseph V.; Zaranek, Alexander Wait; Clegg, Tom; Rosenbaum, Abraham M.; Wu, Xiaodi; Angrist, Misha; Bhak, Jong; Bobe, Jason; Callow, Matthew J.; Cano, Carlos; Chou, Michael F.; Chung, Wendy K.; Douglas, Shawn M.; Estep, Preston W.; Gore, Athurva; Hulick, Peter; Labarga, Alberto; Lee, Je-Hyuk; Lunshof, Jeantine E.; Kim, Byung Chul; Kim, Jong-Il; Li, Zhe; Murray, Michael F.; Nilsen, Geoffrey B.; Peters, Brock A.; Raman, Anugraha M.; Rienhoff, Hugh Y.; Robasky, Kimberly; Wheeler, Matthew T.; Vandewege, Ward; Vorhaus, Daniel B.; Yang, Joyce L.; Yang, Luhan; Aach, John; Ashley, Euan A.; Drmanac, Radoje; Kim, Seong-Jin; Li, Jin Billy; Peshkin, Leonid; Seidman, Christine E.; Seo, Jeong-Sun; Zhang, Kun; Rehm, Heidi L.; Church, George M.
2012-01-01
Rapid advances in DNA sequencing promise to enable new diagnostics and individualized therapies. Achieving personalized medicine, however, will require extensive research on highly reidentifiable, integrated datasets of genomic and health information. To assist with this, participants in the Personal Genome Project choose to forgo privacy via our institutional review board- approved “open consent” process. The contribution of public data and samples facilitates both scientific discovery and standardization of methods. We present our findings after enrollment of more than 1,800 participants, including whole-genome sequencing of 10 pilot participant genomes (the PGP-10). We introduce the Genome-Environment-Trait Evidence (GET-Evidence) system. This tool automatically processes genomes and prioritizes both published and novel variants for interpretation. In the process of reviewing the presumed healthy PGP-10 genomes, we find numerous literature references implying serious disease. Although it is sometimes impossible to rule out a late-onset effect, stringent evidence requirements can address the high rate of incidental findings. To that end we develop a peer production system for recording and organizing variant evaluations according to standard evidence guidelines, creating a public forum for reaching consensus on interpretation of clinically relevant variants. Genome analysis becomes a two-step process: using a prioritized list to record variant evaluations, then automatically sorting reviewed variants using these annotations. Genome data, health and trait information, participant samples, and variant interpretations are all shared in the public domain—we invite others to review our results using our participant samples and contribute to our interpretations. We offer our public resource and methods to further personalized medical research. PMID:22797899
Promoting clinical and laboratory interaction by harmonization.
Plebani, Mario; Panteghini, Mauro
2014-05-15
The lack of interchangeable results in current practice among clinical laboratories has prompted greater attention to standardization and harmonization projects. Although the focus has mainly been on the standardization and harmonization of measurement procedures and their results, the scope of harmonization goes beyond methods and analytical results: it includes all other aspects of laboratory testing, including terminology and units, report formats, reference limits and decision thresholds, as well as test profiles and criteria for the interpretation of results. In particular, as evidence collected in recent decades demonstrates that pre-pre- and post-post-analytical steps are more vulnerable to errors, harmonization initiatives should be undertaken to improve procedures and processes at the laboratory-clinical interface. Managing upstream demand, downstream interpretation of laboratory results, and subsequent appropriate action through close relationships between laboratorians and clinicians remains a crucial issue of the laboratory testing process. Therefore, initiatives to improve test demand management on the one hand, and to harmonize procedures that improve physicians' acknowledgment of laboratory data and their interpretation on the other, are needed in order to assure quality and safety in the total testing process. © 2013.
44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true Standard Flood Insurance...
44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false Standard Flood Insurance...
44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false Standard Flood Insurance...
44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2011-10-01 2011-10-01 false Standard Flood Insurance...
44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Standard Flood Insurance...
A method for the geometric and densitometric standardization of intraoral radiographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duckworth, J.E.; Judy, P.F.; Goodson, J.M.
1983-07-01
The interpretation of dental radiographs for the diagnosis of periodontal disease conditions poses several difficulties. These include the inability to adequately reproduce the projection geometry and optical density of the exposures. In order to improve the ability to extract accurate quantitative information from a radiographic survey of periodontal status, a method was developed which provided for consistent reproduction of both geometric and densitometric exposure parameters. This technique employed vertical bitewing projections in holders customized to individual segments of the dentition. A copper stepwedge was designed to provide densitometric standardization, and wire markers were included to permit measurement of angular variation. In a series of 53 paired radiographs, measurement of alveolar crest heights was found to be reproducible within approximately 0.1 mm. This method provided a full mouth radiographic survey using seven films, each complete with internal standards suitable for computer-based image processing.
Court Interpreters and Translators: Developing Ethical and Professional Standards.
ERIC Educational Resources Information Center
Funston, Richard
Changing needs in the courtroom have raised questions about the need for standards in court interpreter qualifications. In California, no formal training or familiarity with the legal system is required for certification, which is done entirely by language testing. The fact that often court interpreters are officers of the court may be…
Depreter, Barbara; Devreese, Katrien M J
2016-09-01
Lupus anticoagulant (LAC) testing includes a screening, mixing, and confirmation step. Although recently published guidelines on LAC testing are a useful step towards standardization, a lack of consensus remains on whether to express mixing tests as clotting time (CT) or as the index of circulating anticoagulant (ICA). The influence of anticoagulant therapy, e.g., vitamin K antagonists (VKA) or direct oral anticoagulants (DOAC), on both methods of interpretation remains to be investigated. The objective of this study was to contribute to a simplification and standardization of the three-step LAC interpretation at the level of the mixing test. Samples from 148 consecutive patients with a LAC request and a prolonged screening step, and 77 samples from patients not suspected of LAC treated with VKA (n=37) or DOAC (n=30), were retrospectively evaluated. An activated partial thromboplastin time (aPTT) and dilute Russell's viper venom time (dRVVT) were used for routine LAC testing. The supplemental anticoagulant samples were tested with dRVVT only. We focused on the interpretation differences for mixing tests expressed as CT or ICA and compared the final LAC conclusion within each distinct group of concordant and discordant mixing test results. Mixing test interpretation by CT resulted in 10 (dRVVT) and 16 (aPTT) more LAC-positive patients compared with interpretation by ICA. Isolated prolonged dRVVT screen mix ICA results were observed exclusively in samples from VKA-treated patients without suspicion of LAC. We recommend using CT with respect to the 99th-percentile cut-off for interpretation of mixing steps in order to reach the highest sensitivity and specificity in LAC detection.
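For readers unfamiliar with the two expressions, the ICA (Rosner index) for a 1:1 mixing study is conventionally computed as in the sketch below; the clotting times are invented values, and positivity cut-offs are assay- and laboratory-specific.

```python
# Index of circulating anticoagulant (Rosner index) for a 1:1 mixing
# study, expressed as a percentage. Cut-offs vary by assay and laboratory.
def rosner_ica(ct_mix, ct_normal_pool, ct_patient):
    return 100.0 * (ct_mix - ct_normal_pool) / ct_patient

# Illustrative clotting times in seconds (invented values):
print(f"ICA = {rosner_ica(52.0, 36.0, 80.0):.1f}")
```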
Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty
NASA Astrophysics Data System (ADS)
Brumble, K. C.
2012-12-01
What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.
Eddy Current, Magnetic Particle and Hardness Testing, Aviation Quality Control (Advanced): 9227.04.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
This unit of instruction includes the principles of eddy current, magnetic particle and hardness testing; standards used for analyzing test results; techniques of operating equipment; interpretation of indications; advantages and limitations of these methods of testing; care and calibration of equipment; and safety and work precautions. Motion…
Ultrasonic Testing, Aviation Quality Control (Advanced): 9227.03.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
This unit of instruction covers the theory of ultrasonic sound, methods of applying soundwaves to test specimens and interpreting results, calibrating the ultrasonic equipment, and the use of standards. Study periods, group discussions, and extensive use of textbooks and training manuals are to be used. These are listed along with references and…
Code of Federal Regulations, 2011 CFR
2011-04-01
... guidelines in any quality assurance review: (1) ASQC Q9000-1-1994 Quality Management and Quality Assurance... Systems—Model for Quality Assurance in Final Inspection and Test; (5) ASQC Q9004-1-1994 Quality Management... in interpreting testing standards, test methods, evaluating test reports and quality control programs...
Vascular plant and vertebrate inventories in Sonoran Desert National Parks
Cecilia A. Schmidt; Eric W. Albrecht; Brian F. Powell; William L. Halvorson
2005-01-01
Biological inventories are important for natural resource management and interpretation, and can form a foundation for long-term monitoring programs. We inventoried vascular plants and vertebrates in nine National Parks in southern Arizona and western New Mexico from 2000 to 2004 using repeatable designs, commonly accepted methods, and standardized protocols. At...
Neutral model analysis of landscape patterns from mathematical morphology
Kurt H. Riitters; Peter Vogt; Pierre Soille; Jacek Kozak; Christine Estreguil
2007-01-01
Mathematical morphology encompasses methods for characterizing land-cover patterns in ecological research and biodiversity assessments. This paper reports a neutral model analysis of patterns in the absence of a structuring ecological process, to help set standards for comparing and interpreting patterns identified by mathematical morphology on real land-cover maps. We...
Ducci, Daniela; de Melo, M Teresa Condesso; Preziosi, Elisabetta; Sellerino, Mariangela; Parrone, Daniele; Ribeiro, Luis
2016-11-01
The natural background level (NBL) concept is revisited and combined with the indicator kriging method to analyze the spatial distribution of groundwater quality within a groundwater body (GWB). The aim is to provide a methodology to easily identify areas with the same probability of exceeding a given threshold (which may be a groundwater quality criterion, standard, or recommended limit for selected properties and constituents). Three case studies with different hydrogeological settings, located in two countries (Portugal and Italy), are used to derive NBLs using the preselection method and to validate the proposed methodology, illustrating its main advantages over conventional statistical water quality analysis. Indicator kriging analysis was used to create probability maps of the three potential groundwater contaminants. The results clearly indicate the areas within a groundwater body that are potentially contaminated, because the concentrations exceed the drinking water standards or even the local NBL and cannot be justified by a geogenic origin. The combined methodology facilitates the management of groundwater quality because it allows for the spatial interpretation of NBL values. Copyright © 2016 Elsevier B.V. All rights reserved.
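As a sketch of the kriging step, the code below performs indicator kriging with the pykrige package: concentrations are transformed to a 0/1 exceedance indicator and kriged to yield an approximate exceedance-probability map. The coordinates, concentrations, and threshold are simulated, and pykrige is only one of several tools that could be used.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Indicator kriging sketch: krige the 0/1 transform of "concentration
# exceeds threshold" to map exceedance probability. Data are simulated.
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 10, 80), rng.uniform(0, 10, 80)
conc = rng.lognormal(mean=1.0, sigma=0.5, size=80)   # e.g. mg/L
indicator = (conc > 3.0).astype(float)               # threshold = NBL or standard

ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical")
gridx = gridy = np.linspace(0, 10, 50)
prob, var = ok.execute("grid", gridx, gridy)         # approx. in [0, 1]
print(prob.min(), prob.max())
```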
Subtype Diagnosis of Primary Aldosteronism: Is Adrenal Vein Sampling Always Necessary?
Buffolo, Fabrizio; Monticone, Silvia; Williams, Tracy A.; Rossato, Denis; Burrello, Jacopo; Tetti, Martina; Veglio, Franco; Mulatero, Paolo
2017-01-01
Aldosterone producing adenoma and bilateral adrenal hyperplasia are the two most common subtypes of primary aldosteronism (PA) that require targeted and distinct therapeutic approaches: unilateral adrenalectomy or lifelong medical therapy with mineralocorticoid receptor antagonists. According to the 2016 Endocrine Society Guideline, adrenal venous sampling (AVS) is the gold standard test to distinguish between unilateral and bilateral aldosterone overproduction and therefore, to safely refer patients with PA to surgery. Despite significant advances in the optimization of the AVS procedure and the interpretation of hormonal data, a standardized protocol across centers is still lacking. Alternative methods are sought to either localize an aldosterone producing adenoma or to predict the presence of unilateral disease and thereby substantially reduce the number of patients with PA who proceed to AVS. In this review, we summarize the recent advances in subtyping PA for the diagnosis of unilateral and bilateral disease. We focus on the developments in the AVS procedure, the interpretation criteria, and comparisons of the performance of AVS with the alternative methods that are currently available. PMID:28420172
Beekhuijzen, Manon; Schneider, Steffen; Barraclough, Narinder; Hallmark, Nina; Hoberman, Alan; Lordi, Sheri; Moxon, Mary; Perks, Deborah; Piersma, Aldert H; Makris, Susan L
2018-05-02
In recent years several OECD test guidelines have been updated and some will be updated shortly with the requirement to measure thyroid hormone levels in the blood of mammalian laboratory species. There is, however, an imperative need for clarification and guidance regarding the collection, assessment, and interpretation of thyroid hormone data for regulatory toxicology and risk assessment. Clarification and guidance is needed for 1) timing and methods of blood collection, 2) standardization and validation of the analytical methods, 3) triggers for additional measurements, 4) the need for T4 measurements in postnatal day (PND) 4 pups, and 5) the interpretation of changes in thyroid hormone levels regarding adversity. Discussions on these topics have already been initiated, and involve expert scientists from a number of international multisector organizations. This paper provides an overview of existing issues, current activities and recommendations for moving forward. Copyright © 2018 Elsevier Inc. All rights reserved.
Simulations for designing and interpreting intervention trials in infectious diseases.
Halloran, M Elizabeth; Auranen, Kari; Baird, Sarah; Basta, Nicole E; Bellan, Steven E; Brookmeyer, Ron; Cooper, Ben S; DeGruttola, Victor; Hughes, James P; Lessler, Justin; Lofgren, Eric T; Longini, Ira M; Onnela, Jukka-Pekka; Özler, Berk; Seage, George R; Smith, Thomas A; Vespignani, Alessandro; Vynnycky, Emilia; Lipsitch, Marc
2017-12-29
Interventions in infectious diseases can have both direct effects on individuals who receive the intervention as well as indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to adequately assess with standard study designs and analytical methods. Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.
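As a toy example of the simulation paradigm, the sketch below runs a deterministic SIR model at several vaccination coverages and reports the attack rate among the unvaccinated, making the indirect (herd) effect visible. All parameters are invented for illustration.

```python
# Toy SIR simulation comparing attack rates with and without vaccination;
# parameters are invented to illustrate direct-plus-indirect effects.
def attack_rate(coverage, beta=0.3, gamma=0.1, n=10000, days=365):
    s = n * (1 - coverage) - 1.0     # susceptibles (vaccinees fully protected)
    i, r = 1.0, n * coverage
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    unvaccinated = n * (1 - coverage)
    return (unvaccinated - s) / unvaccinated

for cov in (0.0, 0.3, 0.6):
    print(f"coverage {cov:.0%}: attack rate among unvaccinated "
          f"{attack_rate(cov):.1%}")
```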
Code of Federal Regulations, 2013 CFR
2013-07-01
... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...
Code of Federal Regulations, 2011 CFR
2011-07-01
... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...
Code of Federal Regulations, 2012 CFR
2012-07-01
... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...
Code of Federal Regulations, 2014 CFR
2014-07-01
... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...
How Engineering Standards Are Interpreted and Translated for Middle School
ERIC Educational Resources Information Center
Judson, Eugene; Ernzen, John; Krause, Stephen; Middleton, James A.; Culbertson, Robert J.
2016-01-01
In this exploratory study we examined the alignment of Next Generation Science Standards (NGSS) middle school engineering design standards with lesson ideas from middle school teachers, science education faculty, and engineering faculty (4-6 members per group). Respondents were prompted to provide plain language interpretations of two middle…
Validated method for quantification of genetically modified organisms in samples of maize flour.
Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann
2006-02-08
Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detection of 1% or 0.5% genetically modified ingredients in food products or raw materials thereof. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification offers easier sample preparation and lower detection limits. Of the different methods of DNA preparation, the CTAB method was chosen for its high flexibility in starting material and its generation of sufficient DNA of relevant quality. Previous RT-PCR data generated with the SYBR green detection method showed that the method is highly sensitive to sample matrices and genomic DNA content, influencing the interpretation of results. Therefore, this paper describes a real-time DNA quantification based on the TaqMan probe method, showing high accuracy and sensitivity, with detection limits lower than 18 copies per sample, applicable and comparable to highly purified plasmid standards as well as complex matrices of genomic DNA samples. The results were evaluated with ValiData for homogeneity of variance, linearity, accuracy of the standard curve, and standard deviation.
The curation of genetic variants: difficulties and possible solutions.
Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar
2012-12-01
The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold standard method for curation of variants, it can turn out to be time-consuming on a large scale, thus necessitating automation. Curation of variants described in the biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Though the current trend in paper writing on variants is inclined toward the standard nomenclature, such that variants can easily be retrieved, we have a massive store of variants in the literature that are present under non-standard names, and the online search engines that are predominantly used may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle the difficulties during the task are crucial. Only by effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants, with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods.
ERIC Educational Resources Information Center
St. Louis, Kenneth O.
2011-01-01
Purpose: The "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was developed to make available worldwide a standard measure of public attitudes toward stuttering that is practical, reliable, valid, and translatable. Mean data from past field studies as comparisons for interpretation of "POSHA-S" results are reported. Method: Means…
Scoring and setting pass/fail standards for an essay certification examination in nurse-midwifery.
Fullerton, J T; Greener, D L; Gross, L J
1992-03-01
Examination for certification or licensure of health professionals (credentialing) in the United States is almost exclusively of the multiple choice format. The certification examination for entry into the practice of the profession of nurse-midwifery has, however, used a modified essay format throughout its twenty-year history. The examination has recently undergone a revision in the method for score interpretation and for pass/fail decision-making. The revised method, described in this paper, has important implications for all health professional credentialing agencies which use modified essay, oral or practical methods of competency assessment. This paper describes criterion-referenced scoring, the process of constructing the essay items, the methods for assuring validity and reliability for the examination, and the manner of standard setting. In addition, two alternative methods for increasing the validity of the pass/fail decision are evaluated, and the rationale for decision-making about marginal candidates is described.
75 FR 23755 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-04
... securities filings: Docket Numbers: ES10-35-000. Applicants: American Transmission Company LLC, ATC... Reliability Corporation for Approval of Interpretation to Reliability Standard CIP-001--Cyber Security... Corporation for Approval of Interpretation to Reliability Standard...
NASA Astrophysics Data System (ADS)
Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe
2015-08-01
FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B for both imaging time points. Feature values depended on the intensity resolution, and of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
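The two binning schemes can be made concrete in a few lines. The sketch below is illustrative only, assuming NumPy; the default bin count D and bin width B are placeholders, not the values used in the study:

```python
import numpy as np

def discretize_rd(suv, d=64):
    # RD: a fixed number of bins D spanning this image's own SUV range,
    # so the intensity resolution (bin size) varies from image to image.
    edges = np.linspace(suv.min(), suv.max(), d + 1)
    return np.digitize(suv, edges[1:-1]) + 1   # bin labels 1..D

def discretize_rb(suv, b=0.5):
    # RB: a constant bin width B in SUV units, so bin labels remain
    # directly comparable across images and time points.
    return (np.floor(suv / b) + 1).astype(int)
```

Under RB, a given bin always corresponds to the same SUV interval in every scan, which is why the study finds it supports inter- and intra-patient comparison of feature values.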
Pagels, Patti; Kindratt, Tiffany; Arnold, Danielle; Brandt, Jeffrey; Woodfin, Grant; Gimpel, Nora
2015-01-01
Introduction. Future health care providers need to be trained in the knowledge and skills to effectively communicate with their patients with limited health literacy. The purpose of this study is to develop and evaluate a curriculum designed to increase residents' health literacy knowledge, improve their communication skills, and strengthen their ability to work with an interpreter. Materials and Methods. Family Medicine residents (N = 25) participated in a health literacy training which included didactic lectures and an objective structured clinical examination (OSCE). Community promotoras acted as standardized patients and evaluated the residents' ability to measure their patients' health literacy, communicate effectively using the teach-back and Ask Me 3 methods, and appropriately use an interpreter. Pre- and postknowledge, attitudes, and postdidactic feedback were obtained. We compared OSCE scores from the group that received training (didactic group) and previous graduates. Residents reported the skills they used in practice three months later. Results. Family Medicine residents showed an increase in health literacy knowledge (p = 0.001) and scored in the adequately to expertly performed range in the OSCE. Residents reported using the teach-back method (77.8%) and a translator more effectively (77.8%) three months later. Conclusions. Our innovative health literacy OSCE can be replicated for medical learners at all levels of training.
Proposed Standards for Medical Education Submissions to the Journal of General Internal Medicine
Bowen, Judith L.; Gerrity, Martha S.; Kalet, Adina L.; Kogan, Jennifer R.; Spickard, Anderson; Wayne, Diane B.
2008-01-01
To help authors design rigorous studies and prepare clear and informative manuscripts, improve the transparency of editorial decisions, and raise the bar on educational scholarship, the Deputy Editors of the Journal of General Internal Medicine articulate standards for medical education submissions to the Journal. General standards include: (1) quality questions, (2) quality methods to match the questions, (3) insightful interpretation of findings, (4) transparent, unbiased reporting, and (5) attention to human subjects’ protection and ethical research conduct. Additional standards for specific study types are described. We hope these proposed standards will generate discussion that will foster their continued evolution. PMID:18612716
Gassner, Christoph; Rainer, Esther; Pircher, Elfriede; Markut, Lydia; Körmöczi, Günther F.; Jungbauer, Christof; Wessin, Dietmar; Klinghofer, Roswitha; Schennach, Harald; Schwind, Peter; Schönitzer, Diether
2009-01-01
Summary Background Validations of routinely used serological typing methods require intense performance evaluations, typically including large numbers of samples, before routine application. However, such evaluations could be improved by considering information about the frequency of standard blood groups and their variants. Methods Using RHD and ABO population genetic data, a Caucasian-specific donor panel was compiled for a performance comparison of the three RhD and ABO serological typing methods MDmulticard (Medion Diagnostics), ID-System (DiaMed) and ScanGel (Bio-Rad). The final test panel included standard and variant RHD and ABO genotypes, e.g. RhD categories, partial and weak RhDs, RhD DELs, and ABO samples, mainly to interpret weak serological reactivity for blood group A specificity. All samples were from individuals recorded in our local DNA blood group typing database. Results For ‘standard’ blood groups, results of performance were clearly interpretable for all three serological methods compared. However, when focusing on specific variant phenotypes, pronounced differences in reaction strengths and specificities were observed between them. Conclusions A genetically and ethnically predefined donor test panel consisting of only 93 individual samples delivered highly significant results for serological performance comparisons. Such small panels offer representative power greater than that achievable through statistical chance and large sample numbers alone. PMID:21113264
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
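The flavor of such an estimator can be illustrated with a concordance-type statistic: the proportion of patient pairs that the diagnostic test orders the same way as the continuous gold standard. This Python sketch is an illustrative reading of the idea, not necessarily the exact estimator proposed in the paper:

```python
import numpy as np
from itertools import combinations

def concordance_accuracy(test, gold):
    """Proportion of patient pairs ordered the same way by the test
    and by a continuous-scale gold standard (ties on the test count
    as half-concordant; pairs tied on the gold standard are skipped)."""
    pairs, score = 0, 0.0
    for i, j in combinations(range(len(test)), 2):
        if gold[i] == gold[j]:
            continue  # no ordering information in this pair
        pairs += 1
        d_test = np.sign(test[i] - test[j])
        d_gold = np.sign(gold[i] - gold[j])
        if d_test == d_gold:
            score += 1.0
        elif d_test == 0:
            score += 0.5
    return score / pairs
```

Like the area under the ROC curve, this statistic equals 1.0 for a test that perfectly preserves the gold-standard ordering and about 0.5 for a test unrelated to it.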
NASA Technical Reports Server (NTRS)
Montegani, F. J.
1974-01-01
Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.
Deriving allowable properties of lumber : a practical guide for interpretation of ASTM standards
Alan Bendtsen; William L. Galligan
1978-01-01
The ASTM standards for establishing clear wood mechanical properties and for deriving structural grades and related allowable properties for visually graded lumber can be confusing and difficult for the uninitiated to interpret. This report provides a practical guide to using these standards for individuals not familiar with their application. Sample stress...
Multiple Testing of Gene Sets from Gene Ontology: Possibilities and Pitfalls.
Meijer, Rosa J; Goeman, Jelle J
2016-09-01
The use of multiple testing procedures in the context of gene-set testing is an important but relatively underexposed topic. If a multiple testing method is used, this is usually a standard familywise error rate (FWER) or false discovery rate (FDR) controlling procedure in which the logical relationships that exist between the different (self-contained) hypotheses are not taken into account. Taking those relationships into account, however, can lead to more powerful variants of existing multiple testing procedures and can make summarizing and interpreting the final results easier. We will show that, from the perspective of interpretation as well as from the perspective of power improvement, FWER controlling methods are more suitable than FDR controlling methods. As an example of a possible power improvement, we suggest a modified version of the popular method by Holm, which we also implemented in the R package cherry.
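For reference, the unmodified Holm step-down procedure that the proposed variant builds on can be sketched as follows; this does not reproduce the relationship-aware modification implemented in the cherry package:

```python
def holm(pvals, alpha=0.05):
    """Classic Holm step-down procedure controlling the familywise
    error rate: test p-values in ascending order against increasingly
    lenient thresholds and stop at the first non-rejection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # step-down: no further hypotheses can be rejected
    return rejected
```

Taking the logical (subset) relationships among Gene Ontology terms into account allows additional rejections at the same FWER level, which is the source of the power gain the authors describe.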
Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula
NASA Astrophysics Data System (ADS)
Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.
2012-12-01
Large spatial-spectral surveys are more and more common in astronomy. This calls for new methods to analyze such mega- to giga-pixel data-cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components. The original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
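The core of the pipeline (NMF to extract spectral end-members, then non-negative least squares to rebuild the coefficient maps) can be sketched in a few lines. This assumes scikit-learn and SciPy, and omits the noise-based selection of the number of components and the Monte-Carlo uncertainty estimation described in the abstract:

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

def decompose_cube(cube, n_endmembers):
    """cube: (n_pixels, n_channels) array of non-negative spectra.
    Returns the spectral end-members and per-pixel coefficient maps."""
    model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500)
    model.fit(cube)
    endmembers = model.components_                   # (r, n_channels)
    # Rebuild each pixel's coefficients under a non-negativity constraint
    coeffs = np.vstack([nnls(endmembers.T, spectrum)[0] for spectrum in cube])
    return endmembers, coeffs                        # coeffs: (n_pixels, r)
```

Reshaping each coefficient column back to the image grid yields one spatial map per spectral component, which is the representation interpreted in the paper.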
Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential
NASA Technical Reports Server (NTRS)
Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.
2015-01-01
An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. By utilizing this new method for leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation across data sets.
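Mass point analysis converts the pressure history of a fixed volume to gas mass via the ideal gas law and takes the regression slope as the leak rate. The sketch below is a minimal illustration under assumed conditions (dry air as the working gas, constant volume and temperature); the real-time control loop and uncertainty-based stopping logic of the enhanced method are not reproduced:

```python
import numpy as np

R_AIR = 287.05  # specific gas constant of dry air, J/(kg K) (assumed gas)

def mass_point_leak_rate(t, p, volume, temp_k):
    """t: time samples (s); p: absolute pressure samples (Pa).
    Returns the mass leak rate (kg/s) and the standard error of the
    regression slope as a simple uncertainty estimate."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    mass = p * volume / (R_AIR * temp_k)       # ideal gas: m = pV/(RT)
    slope, intercept = np.polyfit(t, mass, 1)  # kg/s
    resid = mass - (slope * t + intercept)
    se = np.sqrt(resid @ resid / (len(t) - 2) / np.sum((t - t.mean()) ** 2))
    return -slope, se                          # positive rate = gas leaking out
```

In the enhanced method, this computation runs continuously during the test so acquisition can stop as soon as the slope's uncertainty falls below the target.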
Morbach, Caroline; Gelbrich, Götz; Breunig, Margret; Tiffe, Theresa; Wagner, Martin; Heuschmann, Peter U; Störk, Stefan
2018-02-14
Variability related to image acquisition and interpretation is an important issue of echocardiography in clinical trials. Nevertheless, there is no broadly accepted standard method for quality assessment of echocardiography in clinical research reports. We present analyses based on the echocardiography quality-assurance program of the ongoing STAAB cohort study (characteristics and course of heart failure stages A-B and determinants of progression). In 43 healthy individuals (mean age 50 ± 14 years; 18 females), duplicate echocardiography scans were acquired and mutually interpreted by one of three trained sonographers and an EACVI certified physician, respectively. Acquisition (AcV), interpretation (InV), and inter-observer variability (IOV; i.e., variability between the acquisition-interpretation sequences of two different observers) were determined for selected M-mode, B-mode, and Doppler parameters. We calculated Bland-Altman upper 95% limits of absolute differences, implying that 95% of measurement differences were smaller than or equal to the given value: e.g. LV end-diastolic volume (mL): 25.0, 25.0, 27.9; septal e' velocity (cm/s): 3.03, 1.25, 3.58. Further, 90, 85, and 80% upper limits of absolute differences were determined for the respective parameters. Both acquisition and interpretation independently and sizably contributed to IOV. As such, separate assessment of AcV and InV is likely to aid in echocardiography training and quality-assurance. Our results further suggest routinely determining IOV in clinical trials as a comprehensive measure of imaging quality. The derived 95, 90, 85, and 80% upper limits of absolute differences are suggested as reproducibility targets for future studies, thus contributing to the international efforts of standardization in quality-assurance.
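Read empirically, an "upper X% limit of absolute differences" is the value below which X% of the absolute paired differences between two acquisition-interpretation sequences fall. The one-liner below makes that reading concrete; it is an assumption about the computation, since the study may instead use a parametric Bland-Altman variant:

```python
import numpy as np

def upper_limit_abs_diff(reading_a, reading_b, level=95):
    """Value below which `level`% of absolute paired differences fall,
    e.g. between two observers' measurements of the same parameter."""
    diffs = np.abs(np.asarray(reading_a) - np.asarray(reading_b))
    return np.percentile(diffs, level)
```

Applied to the duplicate scans, the same function would yield the 95, 90, 85, and 80% reproducibility targets the authors report.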
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
Lázzari, J O; Pereira, M; Antunes, C M; Guimarães, A; Moncayo, A; Chávez Domínguez, R; Hernández Pieretti, O; Macedo, V; Rassi, A; Maguire, J; Romero, A
1998-11-01
An electrocardiographic recording method with an associated reading guide, designed for epidemiological studies on Chagas' disease, was tested to assess its diagnostic reproducibility. Six cardiologists from five countries each read 100 electrocardiographic (ECG) tracings, including 30 from chronic chagasic patients, then reread them after an interval of 6 months. The readings were blind, with the tracings numbered randomly for the first reading and renumbered randomly for the second reading. The physicians, all experienced in interpreting ECGs from chagasic patients, followed printed instructions for reading the tracings. Reproducibility of the readings was evaluated using the kappa (kappa) index for concordance. The results showed a high degree of interobserver concordance with respect to the diagnosis of normal vs. abnormal tracings (kappa = 0.66; SE 0.02). While the interpretations of some categories of ECG abnormalities were highly reproducible, others, especially those having a low prevalence, showed lower levels of concordance. Intraobserver concordance was uniformly higher than interobserver concordance. The findings of this study justify specialists' use of the proposed recording and reading method for epidemiological studies on Chagas' disease, but warrant caution in the interpretation of some categories of electrocardiographic alterations.
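For two raters and a categorical reading (e.g. normal vs. abnormal), the kappa index compares observed agreement with the agreement expected by chance from the raters' marginal rates. A minimal sketch of Cohen's kappa; the study's six-reader design would apply pairwise or multi-rater extensions of this:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical readings (lists):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)
```

The low-prevalence categories the authors flag are exactly where kappa is least stable: when a category is rare, chance agreement is high and a few disagreements move kappa sharply.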
ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document is created independently from ASHRAE and SSPC 90.1 and is not sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers and implementers of “beyond code” energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers, developing tools for automated creation of the baseline model, can use this reference manual as a guideline for developing the rules for the baseline model.
Standardization of Clinical Assessment and Sample Collection Across All PERCH Study Sites
Prosperi, Christine; Baggett, Henry C.; Brooks, W. Abdullah; Deloria Knoll, Maria; Hammitt, Laura L.; Howie, Stephen R. C.; Kotloff, Karen L.; Levine, Orin S.; Madhi, Shabir A.; Murdoch, David R.; O’Brien, Katherine L.; Thea, Donald M.; Awori, Juliet O.; Bunthi, Charatdao; DeLuca, Andrea N.; Driscoll, Amanda J.; Ebruke, Bernard E.; Goswami, Doli; Hidgon, Melissa M.; Karron, Ruth A.; Kazungu, Sidi; Kourouma, Nana; Mackenzie, Grant; Moore, David P.; Mudau, Azwifari; Mwale, Magdalene; Nahar, Kamrun; Park, Daniel E.; Piralam, Barameht; Seidenberg, Phil; Sylla, Mamadou; Feikin, Daniel R.; Scott, J. Anthony G.; O’Brien, Katherine L.; Levine, Orin S.; Knoll, Maria Deloria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Fancourt, Nicholas; Fu, Wei; Hammitt, Laura L.; Higdon, Melissa M.; Kagucia, E. Wangeci; Karron, Ruth A.; Li, Mengying; Park, Daniel E.; Prosperi, Christine; Wu, Zhenke; Zeger, Scott L.; Watson, Nora L.; Crawley, Jane; Murdoch, David R.; Brooks, W. Abdullah; Endtz, Hubert P.; Zaman, Khalequ; Goswami, Doli; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; Howie, Stephen R. C.; Ebruke, Bernard E.; Antonio, Martin; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M.A.; Mackenzie, Grant; Scott, J. Anthony G.; Awori, Juliet O.; Morpeth, Susan C.; Kamau, Alice; Kazungu, Sidi; Kotloff, Karen L.; Tapia, Milagritos D.; Sow, Samba O.; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Madhi, Shabir A.; Moore, David P.; Adrian, Peter V.; Baillie, Vicky L.; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J.; Baggett, Henry C.; Thamthitiwat, Somsak; Maloney, Susan A.; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Thea, Donald M.; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey
2017-01-01
Abstract Background. Variable adherence to standardized case definitions, clinical procedures, specimen collection techniques, and laboratory methods has complicated the interpretation of previous multicenter pneumonia etiology studies. To circumvent these problems, a program of clinical standardization was embedded in the Pneumonia Etiology Research for Child Health (PERCH) study. Methods. Between March 2011 and August 2013, standardized training on the PERCH case definition, clinical procedures, and collection of laboratory specimens was delivered to 331 clinical staff at 9 study sites in 7 countries (The Gambia, Kenya, Mali, South Africa, Zambia, Thailand, and Bangladesh), through 32 on-site courses and a training website. Staff competency was assessed throughout 24 months of enrollment with multiple-choice question (MCQ) examinations, a video quiz, and checklist evaluations of practical skills. Results. MCQ evaluation was confined to 158 clinical staff members who enrolled PERCH cases and controls, with scores obtained for >86% of eligible staff at each time-point. Median scores after baseline training were ≥80%, and improved by 10 percentage points with refresher training, with no significant intersite differences. Percentage agreement with the clinical trainer on the presence or absence of clinical signs on video clips was high (≥89%), with interobserver concordance being substantial to high (AC1 statistic, 0.62–0.82) for 5 of 6 signs assessed. Staff attained median scores of >90% in checklist evaluations of practical skills. Conclusions. Satisfactory clinical standardization was achieved within and across all PERCH sites, providing reassurance that any etiological or clinical differences observed across the study sites are true differences, and not attributable to differences in application of the clinical case definition, interpretation of clinical signs, or in techniques used for clinical measurements or specimen collection. PMID:28575355
NASA Astrophysics Data System (ADS)
Le Maire, P.; Munschy, M.
2017-12-01
Interpretation of marine magnetic anomalies enables accurate global kinematic models. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, based on the fit between magnetic data and forward models. We propose to apply a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them: in the figure referenced by the abstract, the curves to the right (B) show the complex-plane view, the curves to the left (A) show the standard display of magnetic anomalies, and the model is displayed at the bottom (C). In the complex plane, the effect of the apparent inclination is to rotate the curves, while on the standard display the evolution of the shape of the anomaly is more complicated. This innovative method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma and the apparent inclination of the magnetization is computed.
Assessing the Age of an Asteroid's Surface with Data from the International Rosetta Mission
NASA Technical Reports Server (NTRS)
Lopez, Juan Carlos
2011-01-01
Rosetta is an international mission led by the European Space Agency (ESA) with key support and instrumentation from the National Aeronautics and Space Administration (NASA). Rosetta is currently on a ten-year mission to catch comet 67P/Churyumov-Gerasimenko (C-G); throughout its voyage, the spacecraft has performed flybys of two main belt asteroids (MBA): Steins and Lutetia. Data on the physical, chemical, and geological properties of these asteroids are currently being processed and analyzed. Accurate interpretation of such data is fundamental to the success of Rosetta's mission and overall objectives. Post-flyby data analyses strive to correlate the size, shape, volume, and rotational rate of Lutetia, in addition to interpreting its multi-color imaging, albedo, and spectral mapping. Although advancements in science have contributed to the examination of celestial bodies, methods to analyze asteroids remain largely empirical, not semi-empirical, nor ab initio. This study aims to interpret and document the scientific methods currently utilized in the characterization of asteroid (21) Lutetia in order to render these processes and methods accessible to the public. Examples include a standardized technique for assessing the age of an asteroid surface, complete with clickable reference maps, methodology for grouping surface characteristics together, and a standardized power law equation for the age. Other examples include determining the density of an object. Context for what both density and age mean is a by-product of this study. Results of the study will aid in the development of pedagogical material on asteroids for public use, and in the creation of an academic database for selected targets that might be used as a reference.
Mosser, Joy; Lee, Grace; Pootrakul, Llana; Harfmann, Katya; Fabbro, Stephanie; Faith, Esteban Fernandez; Carr, David; Plotner, Alisha; Zirwas, Matthew; Kaffenberger, Benjamin H.
2016-01-01
Background: In an effort to avoid numerous problems associated with narrative letters of recommendation, a dermatology standardized letter of recommendation was utilized in the 2014–2015 resident application cycle. Objective: A comparison of the standardized letter of recommendation and narrative letters of recommendation from a single institution and application cycle to determine if the standardized letter of recommendation met its original goals of efficiency, applicant stratification, and validity. Methods: Eight dermatologists assessed all standardized letters of recommendation/narrative letters of recommendation pairs received during the 2014–2015 application cycle. Five readers repeated the analysis two months later. Each letter of recommendation was evaluated based on a seven question survey. Letter analysis and survey completion for each letter was timed. Results: Compared to the narrative letters of recommendation, the standardized letter of recommendation is easier to interpret (p<0.0001), has less exaggeration of applicants’ positive traits (p<0.001), and has higher inter-rater and intrarater reliability for determining applicant traits including personality, reliability, work-ethic, and global score. Standardized letters of recommendation are also faster to interpret (p<0.0001) and provide more information about the writer’s background or writer-applicant relationship than narrative letters of recommendation (p<0.001). Limitations: This study was completed at a single institution. Conclusions: The standardized letter of recommendation appears to be meeting its initial goals of 1) efficiency, 2) applicant stratification, and 3) validity. (J Clin Aesthet Dermatol. 2016;9(9):36–2.) PMID:27878060
Lor, Maichou; Xiong, Phia; Schweia, Rebecca J.; Bowers, Barbara; Jacobs, Elizabeth A.
2015-01-01
Background Language barriers are a large and growing problem for patients in the U.S. and around the world. Interpreter services are a standard solution for addressing language barriers, and most research has focused on utilization of interpreter services and their effect on health outcomes for patients who do not speak the same language as their healthcare providers, including nurses. However, there is limited research on patients’ perceptions of these interpreter services. Objective To examine Hmong- and Spanish-speaking patients’ perceptions of interpreter service quality in the context of receiving cancer preventive services. Methods Twenty limited English proficient participants, Hmong (n=10) and Spanish-speaking (n=10), ranging in age from 33 to 75 years, were interviewed by two bilingual researchers in a Midwestern state. Interviews were audiotaped, transcribed verbatim, and translated into English. Analysis was done using conventional content analysis. Results The two groups shared perceptions about the quality of interpreter services as variable along three dimensions. Specifically, both groups evaluated the quality of interpreters based on the interpreters’ ability to provide: (a) literal interpretation, (b) cultural interpretation, and (c) emotional interpretation during the health care encounter. The groups differed, however, on how they described the consequences of poor interpretation quality. Hmong participants described how poor quality interpretation could lead to: (a) poor interpersonal relationships among patients, providers, and interpreters, (b) inability of patients to follow through with treatment plans, and (c) emotional distress for patients. Conclusions Our study highlights the fact that patients are discerning consumers of interpreter services and could be effective partners in efforts to reform and enhance interpreter services. PMID:25865517
Code of Federal Regulations, 2013 CFR
2013-07-01
... monitors utilize the same specific sampling and analysis method. Combined site data record is the data set... monitors are suitable monitors designated by a state or local agency in their annual network plan (and in... appendix. Seasonal sampling is the practice of collecting data at a reduced frequency during a season of...
Using Propensity Score Matching Methods to Improve Generalization from Randomized Experiments
ERIC Educational Resources Information Center
Tipton, Elizabeth
2011-01-01
The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…
Vascular disease, ESRD, and death: interpreting competing risk analyses.
Grams, Morgan E; Coresh, Josef; Segev, Dorry L; Kucirka, Lauren M; Tighiouart, Hocine; Sarnak, Mark J
2012-10-01
Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989-1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20-2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15-2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors.
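The contrast the study draws can be reproduced with a few lines of survival bookkeeping: one minus the Kaplan-Meier estimate censors the competing event and overstates absolute risk, while the cumulative incidence function accounts for it. A minimal sketch, assuming untied event times and events coded 0=censored, 1=event of interest, 2=competing event:

```python
import numpy as np

def cumulative_incidence(times, events, cause=1):
    """Aalen-Johansen-type cumulative incidence of `cause` in the
    presence of competing events. Each event of interest contributes
    its hazard weighted by the probability of being event-free just
    before it occurs."""
    order = np.argsort(times)
    e = np.asarray(events)[order]
    n = len(e)
    at_risk, event_free, cif = n, 1.0, 0.0
    for i in range(n):
        if e[i] == cause:
            cif += event_free / at_risk
        if e[i] != 0:                 # any event reduces event-free survival
            event_free *= 1.0 - 1.0 / at_risk
        at_risk -= 1                  # subject leaves the risk set either way
    return cif
```

Recoding the competing event as censored (code 2 treated as 0) recovers the 1 minus Kaplan-Meier figure, which is how the 66%/51% versus 54%/29% discrepancy in the abstract arises.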
Methods proposed to achieve air quality standards for mobile sources and technology surveillance.
Piver, W T
1975-01-01
The methods proposed to meet the 1975 Standards of the Clean Air Act for mobile sources are alternative antiknocks, exhaust emission control devices, and alternative engine designs. Technology surveillance analysis applied to this situation is an attempt to anticipate potential public and environmental health problems from these methods before they happen. Components of this analysis are exhaust emission characterization, environmental transport and transformation, levels of public and environmental exposure, and the influence of economics on the selection of alternative methods. The purpose of this presentation is to show trends that result from the interaction of these different components. These trends cannot be interpreted as explicit predictions of what will actually happen. Such an analysis is necessary so that public and environmental health officials have the opportunity to act on potential problems before they become manifest. PMID:50944
From SOPs to Reports to Evaluations: Learning and Memory ...
In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for learning and memory tests required by EPA and OECD DNT guidelines (chemicals and pesticides) and recommended for ICH prenatal/postnatal guidelines (pharmaceuticals). A well reasoned uniform approach is particularly important for variable endpoints and if non-standard tests are used. An understanding of the purpose behind the tests and expected outcomes is critical, and attention to elements of experimental design, conduct, and reporting can improve study design by the investigator as well as accuracy and consistency of interpretation by evaluators. This understanding also directs which information must be clearly described in study reports. While missing information may be available in standardized operating procedures (SOPs), if not clearly reflected in report submissions there may be questions and misunderstandings by evaluators which could impact risk assessments. A practical example will be presented to provide insights into important variables and reporting approaches. Cognitive functions most often tested in guidelines studies include associative, positional, sequential, and spatial learning and memory in weanling and adult animals. These complex behaviors tap different bra
Introduction to a special issue on concept mapping.
Trochim, William M; McLinden, Daniel
2017-02-01
Concept mapping was developed in the 1980s as a unique integration of qualitative (group process, brainstorming, unstructured sorting, interpretation) and quantitative (multidimensional scaling, hierarchical cluster analysis) methods designed to enable a group of people to articulate and depict graphically a coherent conceptual framework or model of any topic or issue of interest. This introduction provides the basic definition and description of the methodology for the newcomer and describes the steps typically followed in its most standard canonical form (preparation, generation, structuring, representation, interpretation and utilization). It also introduces this special issue, which reviews the history of the methodology, describes its use in a variety of contexts, shows the latest ways it can be integrated with other methodologies, considers methodological advances and developments, and sketches a vision of the future of the method's evolution.
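The quantitative core of the method (multidimensional scaling of the sorting data, then hierarchical clustering of the resulting points) can be sketched briefly. This assumes scikit-learn and an input matrix counting how often each pair of statements was sorted into the same pile; the function name and cluster count are illustrative:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

def concept_map(cosort_counts, n_sorters, n_clusters=6):
    """cosort_counts[i, j]: number of sorters who placed statements
    i and j in the same pile. Returns 2-D coordinates and a cluster
    label for each statement."""
    # Rarely co-sorted statements end up far apart
    dissimilarity = 1.0 - cosort_counts / n_sorters
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissimilarity)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(coords)
    return coords, labels
```

Interpretation then proceeds on the map: nearby points were sorted together often, and the clusters are candidate concepts for the group to name.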
[Method of recording impulses from an implanted cardiostimulator].
Vetkin, A N; Osipov, V P
1976-01-01
Analysis of pulses from an implanted cardiostimulator, recorded from the surface of the patient's body, is one method of judging whether the device is functioning properly. Because the recording electrodes may happen to lie along an equipotential line, an erroneous interpretation of the cardiostimulator's condition cannot be ruled out. It is therefore recommended that the pulses be recorded, and subsequently analyzed, in no fewer than 2 standard ECG leads from the limbs.
2015-10-01
capability to meet the task to the standard under the condition, nothing more or less, else the funding is wasted. Also, that funding for the...bin to segregate gaps qualitatively before the gap value model determined preference among gaps within the bins. Computation of a gap's...for communication, interpretation, or processing by humans or by automatic means (as it pertains to modeling and simulation). Delphi Method -- a
Interpreting international governance standards for health IT use within general medical practice.
Mahncke, Rachel J; Williams, Patricia A H
2014-01-01
General practices in Australia recognise the importance of comprehensive protective security measures. Some elements of information security governance are incorporated into recommended standards; however, the governance component of information security is still insufficiently addressed in practice. The International Organisation for Standardisation (ISO) released a new global standard in May 2013 entitled ISO/IEC 27014:2013 Information technology - Security techniques - Governance of information security. This standard, applicable to organisations of all sizes, offers a framework against which to assess and implement the governance components of information security. The standard demonstrates the relationship between governance and the management of information security, provides strategic principles and processes, and forms the basis for establishing a positive information security culture. An interpretive analysis of this standard for use in Australian general practice was performed. This work is unique, as such an interpretation for the Australian healthcare environment has not been undertaken before. It demonstrates an application of the standard at a strategic level to inform existing development of an information security governance framework.
Myths and Misconceptions in Fall Protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epp, R J
2006-02-23
Since 1973, when OSHA CFRs 1910 and 1926 began to influence the workplace, confusion about the interpretation of the standards has been a problem, and fall protection issues are among them. This confusion is verified by the issuance of 351 (as of 11/25/05) Standard Interpretations issued by OSHA in response to formally submitted questions asking for clarification. Over the years, many workers and too many ES&H Professionals have become 'self-interpreters', reaching conclusions that do not conform to either the Standards or the published Interpretations. One conclusion that has been reached by the author is that many ES&H Professionals are either not aware of, or do not pay attention to, the Standard Interpretations issued by OSHA, or the State OSHA interpretation mechanism, whoever has jurisdiction. If you fall in this category, you are doing your organization or clients a disservice and are not providing them with the best information available. Several myths and/or misconceptions have been promulgated to the point that they become accepted fact, until an incident occurs and OSHA becomes involved. For example, one very pervasive myth is that you are in compliance as long as you maintain a distance of 6 feet from the edge. No such carte blanche rule exists. In this presentation, this myth and several other common myths/misconceptions will be discussed. This presentation is focused only on Federal OSHA CFR1910 Subpart D--Walking-Working Surfaces, CFR1926 Subpart M--Fall Protection and the Fall Protection Standard Interpretation Letters. This presentation does not cover steel erection, aerial lifts and other fall protection issues. Your regulations will probably be different than those presented if you are operating under a State plan.
NASA Astrophysics Data System (ADS)
Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.
2012-07-01
Existing implementations of collaborative image interpretation have many limitations for very large satellite imagery, such as inefficient browsing and slow transmission. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, geometry, etc.) in the geo-browser. Existing KML elements can be used to describe simple interpretation results indicated by vector symbols. To enlarge its application, this article extends KML elements to describe some complex image processing operations, including band combination, grey transformation, and geometric correction. The improved KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops several collaboration-related services: a collaboration launch service, a perceiving service, and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perceiving service supports interpreters in sharing collaboration awareness. The communication service provides interpreters with written communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed in LIESMARS) is selected to perform experiments of collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, provides distributed users with quick browsing and transmission. Meanwhile, in the geo-browser, GIS data (for example DEM, DTM, thematic maps, etc.) can be integrated to assist in improving the accuracy of interpretation. Results show that the proposed method can support distributed collaborative interpretation of remote sensing images.
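To make the idea concrete, standard KML can already carry a point annotation, and custom data can ride along in ExtendedData. The sketch below, using only Python's standard library, is hypothetical: the Data element named operation and the encoding of the processing step are illustrative inventions, not the article's actual schema:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def annotation_placemark(name, lon, lat, operation):
    """Build a KML Placemark for an interpretation annotation and attach
    a (non-standard) image-processing operation via ExtendedData."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ext = ET.SubElement(pm, f"{{{KML_NS}}}ExtendedData")
    data = ET.SubElement(ext, f"{{{KML_NS}}}Data", name="operation")
    ET.SubElement(data, f"{{{KML_NS}}}value").text = operation
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

# Example: share a band-combination operation alongside the annotation
print(annotation_placemark("flood extent", 10.75, 59.91, "band_combination:4,3,2"))
```

Because the payload stays valid KML, a geo-browser that ignores unknown ExtendedData entries can still render the annotation, which is one reason KML suits this kind of extension.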
Mordini, Federico E; Haddad, Tariq; Hsu, Li-Yueh; Kellman, Peter; Lowrey, Tracy B; Aletras, Anthony H; Bandettini, W Patricia; Arai, Andrew E
2014-01-01
This study's primary objective was to determine the sensitivity, specificity, and accuracy of fully quantitative stress perfusion cardiac magnetic resonance (CMR) versus a reference standard of quantitative coronary angiography. We hypothesized that fully quantitative analysis of stress perfusion CMR would have high diagnostic accuracy for identifying significant coronary artery stenosis and exceed the accuracy of semiquantitative measures of perfusion and qualitative interpretation. Relatively few studies apply fully quantitative CMR perfusion measures to patients with coronary disease and comparisons to semiquantitative and qualitative methods are limited. Dual bolus dipyridamole stress perfusion CMR exams were performed in 67 patients with clinical indications for assessment of myocardial ischemia. Stress perfusion images alone were analyzed with a fully quantitative perfusion (QP) method and 3 semiquantitative methods including contrast enhancement ratio, upslope index, and upslope integral. Comprehensive exams (cine imaging, stress/rest perfusion, late gadolinium enhancement) were analyzed qualitatively with 2 methods including the Duke algorithm and standard clinical interpretation. A 70% or greater stenosis by quantitative coronary angiography was considered abnormal. The optimum diagnostic threshold for QP determined by receiver-operating characteristic curve occurred when endocardial flow decreased to <50% of mean epicardial flow, which yielded a sensitivity of 87% and specificity of 93%. The area under the curve for QP was 92%, which was superior to semiquantitative methods: contrast enhancement ratio: 78%; upslope index: 82%; and upslope integral: 75% (p = 0.011, p = 0.019, p = 0.004 vs. QP, respectively). Area under the curve for QP was also superior to qualitative methods: Duke algorithm: 70%; and clinical interpretation: 78% (p < 0.001 and p < 0.001 vs. QP, respectively). Fully quantitative stress perfusion CMR has high diagnostic accuracy for detecting obstructive coronary artery disease. QP outperforms semiquantitative measures of perfusion and qualitative methods that incorporate a combination of cine, perfusion, and late gadolinium enhancement imaging. These findings suggest a potential clinical role for quantitative stress perfusion CMR.
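The study's optimal QP threshold reduces to a simple sector-level rule. A sketch of that decision rule follows; the per-sector flow estimates are assumed to come from the fully quantitative perfusion analysis, and the function name is illustrative:

```python
import numpy as np

def qp_abnormal(endocardial_flow, epicardial_flow, threshold=0.5):
    """Flag sectors as abnormal when endocardial flow falls below
    `threshold` times the mean epicardial flow, the ROC-optimal
    cutoff reported in the study (sensitivity 87%, specificity 93%)."""
    return np.asarray(endocardial_flow) < threshold * np.mean(epicardial_flow)
```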
Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G
2015-10-01
To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.
Does periodic lung screening of films meet standards?
Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye
2016-01-01
To determine whether workers' periodic chest X-ray screenings are performed in accordance with quality standards, which is the responsibility of physicians, and to evaluate differences in interpretation among physicians at different levels of training and the importance of standardizing interpretation. Previously taken chest radiographs of 400 workers at a factory producing glass run channels were evaluated against technical and quality standards by three observers (a pulmonologist, a radiologist, and a pulmonologist assistant). There was perfect concordance between the radiologist and the pulmonologist for underpenetrated films, whereas there was perfect concordance between the pulmonologist and the pulmonologist assistant for overpenetrated films. The pulmonologist rated the exposure of the films as adequate more often (52%) than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%). The pulmonologist judged films to have been taken in the inspiratory phase less frequently (81.7%) than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%). The pulmonologist assessed patient positioning as symmetrical at a higher rate (53.5%) than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant reported parenchymal findings most often (15.3%; radiologist, 2.2%; pulmonologist, 12.9%). It is necessary to reorganize the technical standards and exposure procedures to improve the quality of chest radiographs. Periodic reappraisal of all interpreters and continuous training of technicians are required.
10 CFR 20.1006 - Interpretations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...
Web-based comparison of historical vs contemporary methods of fetal heart rate interpretation.
Epstein, Aaron J; Iriye, Brian K; Hancock, Lyle; Quilligan, Edward J; Rumney, Pamela J; Hancock, Judy; Ghamsary, Mark; Eakin, Cortney M; Smith, Cheryl; Wing, Deborah A
2016-10-01
Contemporary interpretation of fetal heart rate patterns is based largely on the tenets of Drs Quilligan and Hon. This method differs from an older method championed by Dr Caldeyro-Barcia in recording speed and in the classification of decelerations. The latter uses a paper speed of 1 cm/min and classifies decelerations referent to uterine contractions as type I or II dips, compared with the conventional classification as early, late, or variable at a paper speed of 3 cm/min. We hypothesized that the 3 cm/min speed may lead to over-analysis of fetal heart rate and that 1 cm/min may provide adequate information without compromising accuracy or efficiency. The purpose of this study was to compare the Hon-Quilligan method of fetal heart rate interpretation with the Caldeyro-Barcia method among groups of obstetrics care providers with the use of an online interactive testing tool. We deidentified 40 fetal heart rate tracings from the terminal 30 minutes before delivery. A website was created to view these tracings with the standard Hon-Quilligan method, and the same tracings were adjusted to the 1 cm/min monitoring speed for the Caldeyro-Barcia method. We invited 2-4 caregivers from each of the following groups to participate: maternal-fetal medicine experts, practicing maternal-fetal medicine specialists, maternal-fetal medicine fellows, obstetrics nurses, and certified nurse midwives. After completing an introductory tutorial and quiz, they were asked to interpret the fetal heart rate tracings (presented in scrambled order) and to manage and predict maternal and neonatal outcomes using both methods. Their results were compared with those of our expert, Dr Edward Quilligan, and among groups. Analysis was performed with the use of 3 measures: percent classification, Kappa, and adjusted Gwet-Kappa (P < .05 was considered significant). Overall, our results show moderate to almost perfect agreement with the expert, both between and within examiners (Gwet-Kappa 0.4-0.8). The agreement at each stratum of practitioner was generally highest for ascertainment of baseline and for management; the least agreement was for assessment of variability. We examined the agreement of fetal heart rate interpretation with a defined set of rules among a number of different obstetrics practitioners using 3 different statistical methods and found moderate-to-substantial agreement among the clinicians in matching the interpretation of the expert. This implies that the simpler Caldeyro-Barcia method may perform as well as the newer classification system. Copyright © 2016 Elsevier Inc. All rights reserved.
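For reference, the chance-corrected agreement statistics used above are variants of Cohen's kappa,

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) the agreement expected by chance; Gwet's variant replaces \(p_e\) with a more paradox-resistant chance-agreement estimate. (This is standard background, not a formula quoted from the study.)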
Dodd, Andrew; Osterhoff, Georg; Guy, Pierre; Lefaivre, Kelly A
2016-06-01
To report methods of measurement of radiographic displacement and radiographic outcomes in acetabular fractures described in the literature. A systematic review of the English literature was performed using EMBASE and Medline in August 2014. Inclusion criteria were studies of operatively treated acetabular fractures in adults with acute (<6 weeks) open reduction and internal fixation that reported radiographic outcomes. Exclusion criteria included case series with <10 patients, fractures managed >6 weeks from injury, acute total hip arthroplasty, periprosthetic fractures, time frame of radiographic outcomes not stated, missing radiographic outcome data, and non-English language articles. Basic information collected included journal, author, year published, number of fractures, and fracture types. Specific data collected included radiographic outcome data, method of measuring radiographic displacement, and methods of interpreting or categorizing radiographic outcomes. The number of reproducible radiographic measurement techniques (2/64) and previously described radiographic interpretation methods (4) were recorded. One radiographic reduction grading criterion (Matta) was used nearly universally in articles that used previously described criteria. Overall, 70% of articles using this criterion documented anatomic reductions. The current standard of measuring radiographic displacement in publications dealing with acetabulum fractures almost universally lacks basic description, making further scientific rigor, such as testing reproducibility, impossible. Further work is necessary to standardize radiographic measurement techniques, test their reproducibility, and qualify their validity or determine which measurements are important to clinical outcomes. Diagnostic Level IV. See Instructions for Authors for a complete description of levels of evidence.
Antinuclear antibody determination in a routine laboratory.
Feltkamp, T E
1996-01-01
Pitfalls in the method for demonstrating antinuclear antibodies (ANA) by the indirect immunofluorescence technique are described and the use of international standard preparations outlined. Determination of the optimal border dilution dividing positive from negative results is discussed. Each laboratory is a unique setting; it must define its own method, which should rarely be changed. One should not rely on copying methods from other laboratories or commercial firms, but the reproducibility of the nuclear substrate, the conjugate, and other variables should be controlled daily by the use of a control serum which has been related to the WHO standard preparation for ANA of the homogeneous type. Since many sera contain mixtures of different ANA, the results of routine tests are best expressed in titres or expressions of the intensity of fluorescence. The ANA test using the immunofluorescence technique should be used as a screening method for other tests allowing a more defined interpretation of the ANA. Each laboratory should individually determine the border between positive and negative results. Therefore about 200 sera from local healthy controls equally distributed over sex and age, and 100 sera from local patients with definite SLE should be tested. Since the local clinicians should become acquainted with this border it should rarely be changed. Finally each laboratory should participate regularly in national and international quality control rounds, where sera known to be difficult to interpret are tested. The judgment of the organisers of these rounds should stimulate improvements in the participating laboratories. PMID:8984936
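A minimal sketch of deriving such a local positive/negative border from healthy-control sera, in Python; the titre distribution and the 97.5th-percentile choice are illustrative assumptions, not the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reciprocal ANA titres for ~200 local healthy controls,
# balanced over sex and age as recommended above.
healthy_titres = rng.choice([10, 20, 40, 80, 160], size=200,
                            p=[0.55, 0.25, 0.12, 0.06, 0.02])

# Titres are doubling dilutions, so work on a log2 scale.
log_titres = np.log2(healthy_titres)

# One possible border: the 97.5th percentile of the healthy distribution.
border = 2 ** np.percentile(log_titres, 97.5)
print(f"candidate positive/negative border: titre > 1:{border:.0f}")
```

In practice the border would also be checked against sera from patients with definite SLE, as the abstract recommends, before being fixed for routine use.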
Mathematics for the Student Scientist
NASA Astrophysics Data System (ADS)
Lauten, A. Darien; Lauten, Gary N.
1998-03-01
The Earth Day: Forest Watch Program introduces elementary, middle, and secondary students to field, laboratory, and satellite-data analysis methods for assessing the health of Eastern White Pine (Pinus strobus). In this Student-Scientist Partnership program, mathematics, as envisioned in the NCTM Standards, arises naturally and provides opportunities for science-mathematics interdisciplinary student learning. School mathematics becomes the vehicle for students to quantify, represent, analyze, and interpret meaningful, real data.
Computing tools for implementing standards for single-case designs.
Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E
2015-11-01
In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.
A Community Database of Quartz Microstructures: Can we make measurements that constrain rheology?
NASA Astrophysics Data System (ADS)
Toy, Virginia; Peternell, Mark; Morales, Luiz; Kilian, Ruediger
2014-05-01
Rheology can be explored by performing deformation experiments, and by examining resultant microstructures and textures as links to naturally deformed rocks. Certain deformation processes are assumed to result in certain microstructures or textures, of which some might be uniquely indicative, while most cannot be unequivocally used to interpret the deformation mechanism and hence rheology. Despite our lack of a sufficient understanding of microstructure- and texture-forming processes, huge advances in texture measurement and the quantification of microstructural parameters have been made. Unfortunately, there are neither standard procedures nor a common consensus on the interpretation of many parameters (e.g. texture, grain size, shape preferred orientation). Textures (crystallographic preferred orientations) have been extensively linked to the interpretation of deformation mechanisms. For example, the strength of textures can be measured either from the orientation distribution function (e.g. the J-index (Bunge, 1983) or texture entropy (Hielscher et al., 2007)) or via the intensity of pole figures. However, there are various ways to identify a representative volume, to measure, to process the data, and to calculate an ODF and texture descriptors, which restricts their use as comparative and diagnostic measurements. Microstructural parameters such as grain size, grain shape descriptors and fabric descriptors are similarly used to deduce and quantify deformation mechanisms. However, there is very little consensus on how to measure and calculate some of these very important parameters, e.g. grain size, which makes comparison of a vast amount of valuable data in the literature very difficult. We propose establishing a community database of a standard set of such measurements, made using typical samples of different types of quartz rocks through standard methods of microstructural and texture quantification. We invite suggestions and discussion from the community about the worth of the proposed parameters and methodology, and about the usefulness of, and willingness to contribute to, a database freely accessible to the community. We further invite institutions to participate in a benchmark analysis of a set of 'standard' thin sections. Bunge, H.J. 1983, Texture Analysis in Materials Science: mathematical methods. Butterworth-Heinemann, 593pp. Hielscher, R., Schaeben, H., Chateigner, D., 2007, On the entropy to texture index relationship in quantitative texture analysis: Journal of Applied Crystallography 40, 371-375.
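For concreteness, the two texture-strength measures named above are conventionally defined from the normalized orientation distribution function (ODF) \(f(g)\) over orientation space \(G\):

\[
J = \int_G f(g)^2\,\mathrm{d}g, \qquad S = -\int_G f(g)\,\ln f(g)\,\mathrm{d}g,
\]

so that a uniform (random) texture gives \(J = 1\) and \(S = 0\), with \(J\) increasing and \(S\) decreasing as the texture sharpens (Bunge, 1983; Hielscher et al., 2007).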
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up local correlation methods. The first allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than highly optimized variants of the conventional algorithms.
Limitations on near-surface correction for multicomponent offset VSP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macbeth, C.; Li, X.Y.; Horne, S.
1994-12-31
Multicomponent data are degraded due to near-surface scattering and non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves. They confuse analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs, consider their inherent mathematical constraints and how they impact on subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15{degree}E in agreement with the layer stripping of Winterstein and Meadows (1991).
Visualization of postoperative anterior cruciate ligament reconstruction bone tunnels
2011-01-01
Background and purpose Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans, and a 3-dimensional (3D) virtual reality (VR) approach in visualizing and measuring ACL reconstruction bone tunnel placement. Methods 50 consecutive patients who underwent single-bundle ACL reconstructions were evaluated postoperatively by standard radiographs, CT scans, and 3D VR images. Tibial and femoral tunnel positions were measured by 2 observers using the traditional methods of Amis, Aglietti, Hoser, Stäubli, and the method of Benereau for the VR approach. Results The tunnel was visualized in 50–82% of the standard radiographs and in 100% of the CT scans and 3D VR images. Using the intraclass correlation coefficient (ICC), the inter- and intraobserver agreement was between 0.39 and 0.83 for the standard femoral and tibial radiographs. CT scans showed an ICC range of 0.49–0.76 for the inter- and intraobserver agreement. The agreement in 3D VR was almost perfect, with an ICC of 0.83 for the femur and 0.95 for the tibia. Interpretation CT scans and 3D VR images are more reliable in assessing postoperative bone tunnel placement following ACL reconstruction than standard radiographs. PMID:21999625
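The abstract does not state which ICC form was computed; here is a sketch of one common choice, ICC(2,1) (two-way random effects, absolute agreement, single measure), in Python with hypothetical tunnel-position measurements:

```python
import numpy as np

def icc2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X is an (n targets x k raters) matrix of measurements."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    col_means = X.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between targets
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    sse = ((X - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical tibial tunnel positions (% along a reference line)
# for 5 patients measured by 2 observers:
X = np.array([[34.0, 36.5], [41.2, 40.0], [28.9, 31.0],
              [45.5, 44.1], [38.0, 39.2]])
print(round(icc2_1(X), 2))
```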
Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W
1995-01-01
We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
Dong, Ren G; Sinsel, Erik W; Welcome, Daniel E; Warren, Christopher; Xu, Xueyan S; McDowell, Thomas W; Wu, John Z
2015-09-01
The hand coordinate systems for measuring vibration exposures and biodynamic responses have been standardized, but they are not actually used in many studies. This contradicts the purpose of the standardization. The objectives of this study were to identify the major sources of this problem, and to help define or identify better coordinate systems for the standardization. This study systematically reviewed the principles and definition methods, and evaluated typical hand coordinate systems. This study confirms that, as accelerometers remain the major technology for vibration measurement, it is reasonable to standardize two types of coordinate systems: a tool-based basicentric (BC) system and an anatomically based biodynamic (BD) system. However, these coordinate systems are not well defined in the current standard. Definition of the standard BC system is confusing, and it can be interpreted differently; as a result, it has been inconsistently applied in various standards and studies. The standard hand BD system is defined using the orientation of the third metacarpal bone. It is neither convenient nor defined based on important biological or biodynamic features. This explains why it is rarely used in practice. To resolve these inconsistencies and deficiencies, we proposed a revised method for defining the realistic handle BC system and an alternative method for defining the hand BD system. A fingertip-based BD system for measuring the principal grip force is also proposed based on an important feature of the grip force confirmed in this study.
An Approach to Addressing Selection Bias in Survival Analysis
Carlin, Caroline S.; Solid, Craig A.
2014-01-01
This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211
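A minimal sketch of the 2SRI comparator described above, assuming hypothetical column names and the statsmodels and lifelines packages (neither named in the paper); note that the paper bootstraps standard errors for 2SRI, which is omitted here:

```python
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# df columns (hypothetical): 'mature_graft' (0/1 treatment), 'instrument',
# 'age', 'time' (follow-up), 'event' (death indicator)
df = pd.read_csv("dialysis.csv")  # placeholder data source

# Stage 1: model treatment assignment; keep the residual.
X1 = sm.add_constant(df[["instrument", "age"]])
stage1 = sm.Logit(df["mature_graft"], X1).fit(disp=0)
df["resid"] = df["mature_graft"] - stage1.predict(X1)

# Stage 2: include the first-stage residual in the hazard model; the
# coefficient on 'mature_graft' is then adjusted for non-random assignment.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "mature_graft", "resid", "age"]],
        duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])
```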
Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.
Roewer, Lutz; Geppert, Maria
2012-01-01
Y-STR analysis is an invaluable tool for examining evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background is still the main field of application of forensic Y-STR haplotyping. In recent years, powerful technologies, including a 17-locus multiplex PCR assay, have been introduced in forensic laboratories. At the same time, statistical methods have been developed and adapted for the interpretation of a nonrecombining, linear marker such as the Y-chromosome, which shows a strongly clustered geographical distribution due to its linear inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
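For reference, the counting method mentioned above estimates the frequency of a haplotype observed \(x\) times in a database of \(N\) haplotypes as

\[
\hat{p} = \frac{x}{N},
\]

and for a haplotype not present in the database an upper 95% confidence bound is often reported instead,

\[
p_{95} = 1 - (0.05)^{1/N} \approx \frac{3}{N}.
\]

(This is standard forensic-genetics background; the abstract itself does not give formulas.)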
Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine
NASA Astrophysics Data System (ADS)
Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang
2012-02-01
The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.
O'Daniel, Julianne M; McLaughlin, Heather M; Amendola, Laura M; Bale, Sherri J; Berg, Jonathan S; Bick, David; Bowling, Kevin M; Chao, Elizabeth C; Chung, Wendy K; Conlin, Laura K; Cooper, Gregory M; Das, Soma; Deignan, Joshua L; Dorschner, Michael O; Evans, James P; Ghazani, Arezou A; Goddard, Katrina A; Gornick, Michele; Farwell Hagman, Kelly D; Hambuch, Tina; Hegde, Madhuri; Hindorff, Lucia A; Holm, Ingrid A; Jarvik, Gail P; Knight Johnson, Amy; Mighion, Lindsey; Morra, Massimo; Plon, Sharon E; Punj, Sumit; Richards, C Sue; Santani, Avni; Shirts, Brian H; Spinner, Nancy B; Tang, Sha; Weck, Karen E; Wolf, Susan M; Yang, Yaping; Rehm, Heidi L
2017-05-01
While the diagnostic success of genomic sequencing expands, the complexity of this testing should not be overlooked. Numerous laboratory processes are required to support the identification, interpretation, and reporting of clinically significant variants. This study aimed to examine the workflow and reporting procedures among US laboratories to highlight shared practices and identify areas in need of standardization. Surveys and follow-up interviews were conducted with laboratories offering exome and/or genome sequencing to support a research program or for routine clinical services. The 73-item survey elicited multiple choice and free-text responses that were later clarified with phone interviews. Twenty-one laboratories participated. Practices highly concordant across all groups included consent documentation, multiperson case review, and enabling patient opt-out of incidental or secondary findings analysis. Noted divergence included use of phenotypic data to inform case analysis and interpretation and reporting of case-specific quality metrics and methods. Few laboratory policies detailed procedures for data reanalysis, data sharing, or patient access to data. This study provides an overview of practices and policies of experienced exome and genome sequencing laboratories. The results enable broader consideration of which practices are becoming standard approaches, where divergence remains, and areas of development in best practice guidelines that may be helpful. Genet Med advance online publication 03 November 2016.
Hunt, Christopher H.; Wood, Christopher P.; Diehn, Felix E.; Eckel, Laurence J.; Schwartz, Kara M.; Erickson, Bradley J.
2014-01-01
OBJECTIVE The purpose of this article is to describe the trends of secondary interpretations, including the total volume and format of cases. MATERIALS AND METHODS This retrospective study involved all outside neuroradiology examinations submitted for secondary interpretation from November 2006 through December 2010. This practice utilizes consistent criteria and includes all images that cover the brain, neck, and spine. For each month, the total number of outside examinations and their format (i.e., hard-copy film, DICOM CD-ROM, or non-DICOM CD-ROM) were recorded. RESULTS There was no significant change in the volume of cases (1043 ± 131 cases/month; p = 0.46, two-sided Student t test). There was a significant decrease in the volume of hard-copy films submitted, with the mean number of examinations submitted per month on hard-copy film declining from 297 in 2007 to 57 in 2010 (p < 0.0001, Student t test). This decrease was mirrored by an increase in the mean number of cases submitted on CD-ROM (753 cases/month in 2007 and 1036 cases/month in 2010; p < 0.0001). Although most were submitted in DICOM format, there was almost a doubling of the volume of cases submitted on non-DICOM CD-ROM (mean number of non-DICOM CD-ROMs, nine cases/month in 2007 and 17 cases/month in 2010; p < 0.001). CONCLUSION There has been a significant decrease in the number of hard-copy films submitted for secondary interpretation. There has been almost a doubling of the volume of cases submitted in non-DICOM formats, which is unfortunate, given the many advantages of the internationally derived DICOM standard, including ease of archiving, standardized display, efficient review, improved interpretation, and quality of patient care. PMID:22451538
Criteria to Evaluate Interpretive Guides for Criterion-Referenced Tests
ERIC Educational Resources Information Center
Trapp, William J.
2007-01-01
This project provides a list of criteria for which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…
NASA Astrophysics Data System (ADS)
Matuk, Camillia Faye
Visual representations are central to expert scientific thinking. Meanwhile, novices tend toward narrative conceptions of scientific phenomena. Until recently, however, relationships between visual design, narrative thinking, and their impacts on learning science have only been theoretically pursued. This dissertation first synthesizes different disciplinary perspectives, then offers a mixed-methods investigation into interpretations of scientific representations. Finally, it considers design issues associated with narrative and visual imagery, and explores the possibilities of a pedagogical notation to scaffold the understanding of a standard scientific notation. Throughout, I distinguish two categories of visual media by their relation to narrative: Narrative visual media, which convey content via narrative structure, and Conceptual visual media, which convey states of relationships among objects. Given the role of narrative in framing conceptions of scientific phenomena and perceptions of its representations, I suggest that novices are especially prone to construe both kinds of media in narrative terms. To illustrate, I first describe how novices make meaning of the science conveyed in narrative visual media. Vignettes of an undergraduate student's interpretation of a cartoon about natural selection, and of four 13-year-olds' readings of a comic book about human papillomavirus infection, together demonstrate conditions under which designed visual narrative elements facilitate or hinder understanding. I next consider the interpretation of conceptual visual media with an example of an expert notation from evolutionary biology, the cladogram. By combining clinical interview methods with experimental design, I show how undergraduate students' narrative theories of evolution frame perceptions of the diagram (Study 1); I demonstrate the flexibility of symbolic meaning, both with the content assumed (Study 2A) and with alternate manners of presenting the diagram (Study 2B); finally, I show the effects of content assumptions on the diagrams students invent of phylogenetic data (Study 3A), and how first inventing a diagram influences later interpretations of the standard notation (Study 3B). Lastly, I describe the prototype design and pilot test of an interactive diagram to scaffold biology students' understanding of this expert scientific notation. Insights from this dissertation inform the design of more pedagogically useful representations that might support students' developing fluency with expert scientific representations.
Harmonisation of microbial sampling and testing methods for distillate fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, G.C.; Hill, E.C.
1995-05-01
Increased incidence of microbial infection in distillate fuels has led to a demand for organisations such as the Institute of Petroleum to propose standards for microbiological quality, based on numbers of viable microbial colony forming units. Variations in quality requirements, and in the spoilage significance of contaminating microbes, plus a tendency for temporal and spatial changes in the distribution of microbes, make such standards difficult to implement. The problem is compounded by a diversity in the procedures employed for sampling and testing for microbial contamination and in the interpretation of the data obtained. The following paper reviews these problems and describes the efforts of the Institute of Petroleum Microbiology Fuels Group to address these issues and in particular to bring about harmonisation of sampling and testing methods. The benefits and drawbacks of available test methods, both laboratory based and on-site, are discussed.
Convolutional neural networks for vibrational spectroscopic data analysis.
Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena
2017-02-15
In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
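A minimal sketch of a shallow 1-D CNN of the kind described, in PyTorch; the layer sizes, filter count, and names are illustrative assumptions, not the authors' published architecture:

```python
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    """Single-convolutional-layer CNN for vibrational spectra,
    input shaped (batch, 1, n_wavenumbers)."""
    def __init__(self, n_wavenumbers: int, n_classes: int):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=11, padding=5)  # 16 filters
        self.act = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool1d(32)
        self.fc = nn.Linear(16 * 32, n_classes)

    def forward(self, x):
        h = self.pool(self.act(self.conv(x)))
        return self.fc(h.flatten(1))

model = ShallowSpectralCNN(n_wavenumbers=1024, n_classes=3)
spectra = torch.randn(8, 1, 1024)   # hypothetical batch of raw (non-preprocessed) spectra
logits = model(spectra)
print(logits.shape)                 # torch.Size([8, 3])
```

Inspecting the learned convolutional filters, or gradient-based saliency over the input axis, is one way such a model can highlight important spectral regions.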
Progress in SPECT/CT imaging of prostate cancer.
Seo, Youngho; Franc, Benjamin L; Hawkins, Randall A; Wong, Kenneth H; Hasegawa, Bruce H
2006-08-01
Prostate cancer is the most common type of cancer (other than skin cancer) among men in the United States. Although prostate cancer is one of the few cancers that grow so slowly that it may never threaten the lives of some patients, it can be lethal once metastasized. Indium-111 capromab pendetide (ProstaScint, Cytogen Corporation, Princeton, NJ) imaging is indicated for staging and recurrence detection of the disease, and is particularly useful to determine whether or not the disease has spread to distant metastatic sites. However, the interpretation of 111In-capromab pendetide is challenging without correlated structural information mostly because the radiopharmaceutical demonstrates nonspecific uptake in the normal vasculature, bowel, bone marrow, and the prostate gland. We developed an improved method of imaging and localizing 111In-Capromab pendetide using a SPECT/CT imaging system. The specific goals included: i) development and application of a novel iterative SPECT reconstruction algorithm that utilizes a priori information from coregistered CT; and ii) assessment of clinical impact of adding SPECT/CT for prostate cancer imaging with capromab pendetide utilizing the standard and novel reconstruction techniques. Patient imaging studies with capromab pendetide were performed from 1999 to 2004 using two different SPECT/CT scanners, a prototype SPECT/CT system and a commercial SPECT/CT system (Discovery VH, GE Healthcare, Waukesha, WI). SPECT projection data from both systems were reconstructed using an experimental iterative algorithm that compensates for both photon attenuation and collimator blurring. In addition, the data obtained from the commercial system were reconstructed with attenuation correction using an OSEM reconstruction supplied by the camera manufacturer for routine clinical interpretation. For 12 sets of patient data, SPECT images reconstructed using the experimental algorithm were interpreted separately and compared with interpretation of images obtained using the standard reconstruction technique. The experimental reconstruction algorithm improved spatial resolution, reduced streak artifacts, and yielded a better correlation with anatomic details of CT in comparison to conventional reconstruction methods (e.g., filtered back-projection or OSEM with attenuation correction only). Images produced with the experimental algorithm produced a subjective improvement in the confidence of interpretation for 11 of 12 studies. There were also changes in interpretations for 4 of 12 studies although the changes were not sufficient to alter prognosis or the patient treatment plan.
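For context, iterative algorithms of the kind referenced (OSEM and relatives) repeatedly apply a multiplicative update; the underlying MLEM form, with system matrix entries \(a_{ij}\) (probability that an emission in voxel \(j\) is detected in projection bin \(i\)), measured counts \(y_i\), and current image estimate \(\lambda^{(n)}\), is

\[
\lambda_j^{(n+1)} = \frac{\lambda_j^{(n)}}{\sum_i a_{ij}} \sum_i a_{ij}\,\frac{y_i}{\sum_k a_{ik}\,\lambda_k^{(n)}}.
\]

OSEM accelerates this by cycling over subsets of projections, and attenuation correction and collimator-blur compensation enter through the modeling of \(a_{ij}\); the exact form of the experimental algorithm is not given in the abstract.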
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Gomez, Jose Alfonso
2016-04-01
Standardization is the process of developing common conventions or procedures to facilitate the communication, use, comparison and exchange of products or information among different parties. It has been a useful tool in fields from industry to statistics, for technical, economic and social reasons. In science, the need for standardization has been recognised in the definition of methods as well as in publication formats. With respect to gully erosion, a number of initiatives have been carried out to propose common methodologies, for instance, for gully delineation (Castillo et al., 2014) and geometrical measurements (Casalí et al., 2015). The main aims of this work are: 1) to examine previous proposals in the gully erosion literature implying standardization processes; 2) to contribute new approaches to improve the homogeneity of methodologies and the presentation of results for better communication among the gully erosion community. For this purpose, we evaluated the basic information provided on environmental factors, discussed the delineation and measurement procedures proposed in previous works and, finally, statistically analysed the severity of degradation levels derived from different indicators at the world scale. As a result, we present suggestions intended to serve as guidance for survey design as well as for the interpretation of vulnerability levels and degradation rates in future gully erosion studies. References Casalí, J., Giménez, R., and Campo-Bescós, M. A.: Gully geometry: what are we measuring?, SOIL, 1, 509-513, doi:10.5194/soil-1-509-2015, 2015. Castillo C., Taguas E. V., Zarco-Tejada P., James M. R., and Gómez J. A. (2014), The normalized topographic method: an automated procedure for gully mapping using GIS, Earth Surf. Process. Landforms, 39, 2002-2015, doi: 10.1002/esp.3595
A Fair and Balance Approach to the Mean
ERIC Educational Resources Information Center
Peters, Susan A.; Bennett, Victoria Miller; Young, Mandy; Watkins, Jonathan D.
2016-01-01
The mean can be interpreted as a fair-share value and as a balance point. Standards documents, including Common Core State Standards for Mathematics (CCSSM) (CCSSI 2010), suggest focusing on both interpretations. In this article, the authors propose a sequence of five activities to help students develop these understandings of the mean, and they…
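A worked illustration of the two interpretations (the numbers are ours, not the article's): for the data set {2, 3, 7}, fair sharing redistributes the total of 12 equally, giving a mean of 4, while as a balance point the deviations -2, -1, and +3 sum to zero. A one-line check in Python:

```python
data = [2, 3, 7]
mean = sum(data) / len(data)             # fair share: 12 split 3 ways -> 4.0
deviations = [x - mean for x in data]    # balance point: deviations cancel
print(mean, sum(deviations))             # 4.0 0.0
```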
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... standard genetic toxicology battery for prediction of potential human risks, and on interpreting results... followup testing and interpretation of positive results in vitro and in vivo in the standard genetic... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2008-D-0178...
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form; 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards; 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results; 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Fracture Toughness of Advanced Ceramics at Room Temperature
Quinn, George D.; Salem, Jonathan; Bar-on, Isa; Cho, Kyu; Foley, Michael; Fang, Ho
1992-01-01
This report presents the results obtained by the five U.S. participating laboratories in the Versailles Advanced Materials and Standards (VAMAS) round-robin for fracture toughness of advanced ceramics. Three test methods were used: indentation fracture, indentation strength, and single-edge pre-cracked beam. Two materials were tested: a gas-pressure sintered silicon nitride and a zirconia toughened alumina. Consistent results were obtained with the latter two test methods. Interpretation of fracture toughness in the zirconia alumina composite was complicated by R-curve and environmentally-assisted crack growth phenomena. PMID:28053447
A paperless autoimmunity laboratory: myth or reality?
Lutteri, Laurence; Dierge, Laurine; Pesser, Martine; Watrin, Pascale; Cavalier, Etienne
2016-08-01
Testing for antinuclear antibodies is the most frequently prescribed analysis for the diagnosis of rheumatic diseases. Indirect immunofluorescence remains the gold standard method for their detection despite the increasing use of alternative techniques. In order to standardize manual microscopy reading, automated acquisition and interpretation systems have emerged. This publication presents our method of interpretation and characterization of antinuclear antibodies based on a cascade of analyses, and shares our everyday experience with the G Sight from Menarini. The positive/negative discrimination on HEp-2000 cells is correct in 85% of cases. Most false-negative results involve nonspecific or low-titre patterns, but a few cases of low-titre SSA speckled patterns showed a probability index below 8. Regarding pattern recognition, some types and mixed patterns are not properly recognized. Concerning the probability index, correlated in some studies with the final titre, the weak fluorescence of certain patterns and the random presence of artifacts that distort the index have led us not to pursue it in our daily practice. In conclusion, automated reading systems facilitate the reporting of results and the traceability of patterns but still require the expertise of a laboratory technologist for positive/negative discrimination and for pattern recognition.
16 CFR 1201.40 - Interpretation concerning bathtub and shower doors and enclosures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Interpretation concerning bathtub and shower... Policy and Interpretation § 1201.40 Interpretation concerning bathtub and shower doors and enclosures. (a... and enclosures” and “shower door and enclosure” as they are used in the Standard in subpart A. The...
Castro, Denise A; Naqvi, Asad Ahmed; Vandenkerkhof, Elizabeth; Flavin, Michael P; Manson, David; Soboleski, Donald
2016-01-01
Variability in image interpretation has been attributed to differences in the interpreters' knowledge base, experience level, and access to the clinical scenario. The picture archiving and communication system (PACS) has allowed the user to manipulate the images while developing their impression of the radiograph. The aim of this study was to determine the agreement of chest radiograph (CXR) impressions among radiologists and neonatologists and help determine the effect of image manipulation with PACS on report impression. This prospective cohort study included 60 patients from the Neonatal Intensive Care Unit undergoing CXRs. Three radiologists and three neonatologists reviewed two consecutive frontal CXRs of each patient. Each physician was allowed manipulation of images as needed to provide a decision of "improved," "unchanged," or "disease progression" lung disease for each patient. Each physician repeated the process once more; this time, they were not allowed to individually manipulate the images, but an independent radiologist preset the image brightness and contrast to best optimize the CXR appearance. Percent agreement and opposing reporting views were calculated between all six physicians for each of the two methods (allowing and not allowing image manipulation). One hundred percent agreement in image impression between all six observers was only seen in 5% of cases when allowing image manipulation; 100% agreement was seen in 13% of the cases when there was no manipulation of the images. Agreement in CXR interpretation is poor; the ability to manipulate the images on PACS results in a decrease in agreement in the interpretation of these studies. New methods to standardize image appearance and allow improved comparison with previous studies should be sought to improve clinician agreement in interpretation consistency and advance patient care.
Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F
2011-01-01
Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1 = poor to 4 = excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91-99%) with positive and 95% (89-98%) with neutral oral contrast; specificity was 96% (92-98%) and 94% (90-97%). For each reader, no statistically significant difference was found between the two agents (sensitivities, p-values > 0.6; specificities, p-values > 0.08), in the area under the ROC curve (range 0.95-0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365
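A short sketch of the reader-performance metrics used above, in Python with hypothetical reader calls and scikit-learn for the AUC (the study's ROC analysis used the graded 1-4 confidence scores, not the binary call):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # proven appendicitis (1) or not (0)
calls = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # reader's binary call
conf = np.array([4, 3, 2, 1, 2, 3, 4, 1])    # reader's 1-4 confidence score

tp = np.sum((calls == 1) & (truth == 1))
tn = np.sum((calls == 0) & (truth == 0))
sensitivity = tp / truth.sum()
specificity = tn / (truth == 0).sum()

auc = roc_auc_score(truth, conf)  # ROC built from the confidence ratings
print(sensitivity, specificity, auc)
```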
Mahomed, Nasreen; Fancourt, Nicholas; de Campo, John; de Campo, Margaret; Akano, Aliu; Cherian, Thomas; Cohen, Olivia G; Greenberg, David; Lacey, Stephen; Kohli, Neera; Lederman, Henrique M; Madhi, Shabir A; Manduku, Veronica; McCollum, Eric D; Park, Kate; Ribo-Aristizabal, Jose Luis; Bar-Zeev, Naor; O'Brien, Katherine L; Mulholland, Kim
2017-10-01
Childhood pneumonia is among the leading infectious causes of mortality in children younger than 5 years of age globally. Streptococcus pneumoniae (pneumococcus) is the leading infectious cause of childhood bacterial pneumonia. The diagnosis of childhood pneumonia remains a critical epidemiological task for monitoring vaccine and treatment program effectiveness. The chest radiograph remains the most readily available and common imaging modality to assess childhood pneumonia. In 1997, the World Health Organization Radiology Working Group was established to provide a consensus method for the standardized definition for the interpretation of pediatric frontal chest radiographs, for use in bacterial vaccine efficacy trials in children. The definition was not designed for use in individual patient clinical management because of its emphasis on specificity at the expense of sensitivity. These definitions and endpoint conclusions were published in 2001 and an analysis of observer variation for these conclusions using a reference library of chest radiographs was published in 2005. In response to the technical needs identified through subsequent meetings, the World Health Organization Chest Radiography in Epidemiological Studies (CRES) project was initiated and is designed to be a continuation of the World Health Organization Radiology Working Group. The aims of the World Health Organization CRES project are to clarify the definitions used in the World Health Organization defined standardized interpretation of pediatric chest radiographs in bacterial vaccine impact and pneumonia epidemiological studies, reinforce the focus on reproducible chest radiograph readings, provide training and support with World Health Organization defined standardized interpretation of chest radiographs and develop guidelines and tools for investigators and site staff to assist in obtaining high-quality chest radiographs.
La Barbera, Luigi; Galbusera, Fabio; Wilke, Hans-Joachim; Villa, Tomaso
2016-09-01
To discuss whether the available standard methods for preclinical evaluation of posterior spine stabilization devices can represent basic everyday life activities and how to compare the results obtained with different procedures. A comparative finite element study compared ASTM F1717 and ISO 12189 standards to validated instrumented L2-L4 segments undergoing standing, upper body flexion and extension. The internal loads on the spinal rod and the maximum stress on the implant are analysed. ISO recommended anterior support stiffness and force allow for reproducing bending moments measured in vivo on an instrumented physiological segment during upper body flexion. Despite the significance of ASTM model from an engineering point of view, the overly conservative vertebrectomy model represents an unrealistic worst case scenario. A method is proposed to determine the load to apply on assemblies with different anterior support stiffnesses to guarantee a comparable bending moment and reproduce specific everyday life activities. The study increases our awareness on the use of the current standards to achieve meaningful results easy to compare and interpret.
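One way to formalize the load-matching method described above (our notation, under the simplifying assumption of linear behavior): if the finite element model gives \(m(k)\), the bending moment induced in the spinal rod per unit applied load for an assembly with anterior support stiffness \(k\), then the load to apply so that every assembly reproduces a target physiological moment \(M^{*}\) (e.g., the in vivo moment during upper body flexion) is

\[
F(k) = \frac{M^{*}}{m(k)}.
\]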
Rieger, M A; Lohmeyer, M; Nübling, M; Neuhaus, S; Diefenbach, H; Hofmann, F
2005-05-01
On the basis of EU directives 89/391/EEC (to encourage improvements in the safety and health of workers at work) and 2000/54/EC (on the protection of workers from risks related to exposure to biological agents at work), biological hazards at work have to be assessed and preventive measures have to be introduced in all member states of the EU. In Germany, national legislation (Biological Agents Ordinance - BioStoffV and Technical Rules on Biological Agents, TRBA) and recommendations of workers' compensation boards define standardized methods for the assessment of airborne mold, bacteria, and endotoxins. This article describes policies and practices in Germany for measurement of airborne bioaerosols and for interpretation of measurements relative to the standards. As an example, methods and results of measurements in agriculture are shown. The standardized measurement procedures proved suitable for use in livestock buildings. The results of the exploratory measurements in different livestock buildings confirmed the often high concentrations of airborne biological hazards in agriculture that are reported in the literature.
Chen, Chih-Hao; Hsu, Chueh-Lin; Huang, Shih-Hao; Chen, Shih-Yuan; Hung, Yi-Lin; Chen, Hsiao-Rong; Wu, Yu-Chung
2015-01-01
Although genome-wide expression analysis has become a routine tool for gaining insight into molecular mechanisms, extraction of information remains a major challenge. It has been unclear why standard statistical methods, such as the t-test and ANOVA, often lead to low levels of reproducibility, how likely applying fold-change cutoffs to enhance reproducibility is to miss key signals, and how adversely using such methods has affected data interpretations. We broadly examined expression data to investigate the reproducibility problem and discovered that molecular heterogeneity, a biological property of genetically different samples, has been improperly handled by the statistical methods. Here we give a mathematical description of the discovery and report the development of a statistical method, named HTA, for better handling molecular heterogeneity. We broadly demonstrate the improved sensitivity and specificity of HTA over the conventional methods and show that using fold-change cutoffs has lost much information. We illustrate the especial usefulness of HTA for heterogeneous diseases, by applying it to existing data sets of schizophrenia, bipolar disorder and Parkinson’s disease, and show it can abundantly and reproducibly uncover disease signatures not previously detectable. Based on 156 biological data sets, we estimate that the methodological issue has affected over 96% of expression studies and that HTA can profoundly correct 86% of the affected data interpretations. The methodological advancement can better facilitate systems understandings of biological processes, render biological inferences that are more reliable than they have hitherto been and engender translational medical applications, such as identifying diagnostic biomarkers and drug prediction, which are more robust. PMID:25793610
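As a concrete illustration of the conventional pipeline the abstract critiques (not of HTA itself, whose details are not given here), the following sketch runs per-gene Welch t-tests and applies a fold-change cutoff to simulated expression data; all names and numbers are illustrative.

```python
# Sketch of the conventional differential-expression screen the abstract
# critiques: per-gene Welch t-tests plus a fold-change cutoff.
# Illustrative only; the HTA method itself is not specified in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group = 1000, 8
control = rng.lognormal(mean=5.0, sigma=0.4, size=(n_genes, n_per_group))
treated = control * rng.lognormal(mean=0.0, sigma=0.3, size=(n_genes, n_per_group))

log_c, log_t = np.log2(control), np.log2(treated)
t_stat, p_val = stats.ttest_ind(log_t, log_c, axis=1, equal_var=False)
log_fc = log_t.mean(axis=1) - log_c.mean(axis=1)

# Typical (criticized) selection rule: p < 0.05 AND |log2 fold change| > 1.
hits = (p_val < 0.05) & (np.abs(log_fc) > 1.0)
print(f"{hits.sum()} genes pass the combined cutoff")
```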
Auvert, J-F; Chleir, F; Coppé, G; Hamel-Desnos, C; Moraglia, L; Pichot, O
2014-02-01
The quality standards of the French Society for Vascular Medicine for the ultrasound assessment of the superficial venous system of the lower limbs are based on the two following requirements: technical know-how (mastering the use of ultrasound devices and the method of examination) and medical know-how (the ability to adapt the methods and scope of the examination to its clinical indications and purpose, and to rationally analyze and interpret its results). The aims are to describe an optimal method of examination in relation to the clinical question and hypothesis; to achieve consistent practice, methods, glossary terminology and reporting; and to provide good-practice reference points and promote a high-quality process. The standards cover the three levels of examination, with their clinical indications and goals; the reference standard examination (level 2) and its variants according to clinical needs; the minimal content of the examination report, the letter to the referring physician (synthesis, conclusion and management suggestions) and iconography; a commented glossary (anatomy, hemodynamics, semiology); the technical basis; and ultrasound device settings. We discuss the use of Duplex ultrasound for the assessment of the superficial veins of the lower limbs in vascular medicine practice. Copyright © 2014. Published by Elsevier Masson SAS.
Haire, Bridget G; Folayan, Morenike Oluwatoyin; Brown, Brandon
2014-09-01
While international standards are important for conducting clinical research, they may require interpretation in particular contexts. Standard of care in HIV prevention research has become complicated: in addition to barrier protection, counselling, male circumcision and treatment of sexually transmissible infections, there are now two new biomedical prevention interventions, 'treatment-as-prevention' and pre-exposure prophylaxis. Proper standards of care must be considered with regard to both normative guidance and the circumstances of the particular stakeholders--the community, trial population, researchers and sponsors. In addition, the special circumstances of the lives of participants need to be acknowledged in designing trial protocols and study procedures. When researchers are faced with the dilemma of interpreting international ethics guidelines against the realities of participants' daily lives and practices, the decisions of the local ethics committee become crucial. The challenge then becomes how familiar ethics committee members in these local settings are with these guidelines, and how their interpretation and use in the local context ensure respect for persons and communities. It also includes justice and the fair selection of study participants without compromising data quality, and ensuring that the risks for study participants and their community do not outweigh the potential benefits.
ERIC Educational Resources Information Center
Butler, Michelle A.; Katayama, Andrew D.; Schindling, Casey; Dials, Katherine
2018-01-01
Although testing accommodations for standardized assessments are available for students with disabilities, interpretation remains challenging. The authors explored resilience to see if it could contribute to the interpretation of academic success for students who are deaf or hard of hearing or blind or have low vision. High school students (30…
Assessing Data Quality in Emergent Domains of Earth Sciences
NASA Astrophysics Data System (ADS)
Darch, P. T.; Borgman, C.
2016-12-01
As earth scientists seek to study known phenomena in new ways, and to study new phenomena, they often develop new technologies and new methods such as embedded network sensing, or reapply extant technologies, such as seafloor drilling. Emergent domains are often highly multidisciplinary as researchers from many backgrounds converge on new research questions. They may adapt existing methods, or develop methods de novo. As a result, emerging domains tend to be methodologically heterogeneous. As these domains mature, pressure to standardize methods increases. Standardization promotes trust, reliability, accuracy, and reproducibility, and simplifies data management. However, for standardization to occur, researchers must be able to assess which of the competing methods produces the highest quality data. The exploratory nature of emerging domains discourages standardization. Because competing methods originate in different disciplinary backgrounds, their scientific credibility is difficult to compare. Instead of direct comparison, researchers attempt to conduct meta-analyses. Scientists compare datasets produced by different methods to assess their consistency and efficiency. This paper presents findings from a long-term qualitative case study of research on the deep subseafloor biosphere, an emergent domain. A diverse community converged on the study of microbes in the seafloor and those microbes' interactions with the physical environments they inhabit. Data on this problem are scarce, leading to calls for standardization as a means to acquire and analyze greater volumes of data. Lacking consistent methods, scientists attempted to conduct meta-analyses to determine the most promising methods on which to standardize. Among the factors that inhibited meta-analyses were disparate approaches to metadata and to curating data. Datasets may be deposited in a variety of databases or kept on individual scientists' servers. Associated metadata may be inconsistent or hard to interpret. Incentive structures, including prospects for journal publication, often favor new data over reanalyzing extant datasets. Assessing data quality in emergent domains is extremely difficult and will require adaptations in infrastructure, culture, and incentives.
On the traceability of gaseous reference materials
NASA Astrophysics Data System (ADS)
Brown, Richard J. C.; Brewer, Paul J.; Harris, Peter M.; Davidson, Stuart; van der Veen, Adriaan M. H.; Ent, Hugo
2017-06-01
The complex and multi-parameter nature of chemical composition measurement means that establishing traceability is a challenging task. As a result, incorrect interpretations about the origin of the metrological traceability of chemical measurement results can occur. This discussion paper examines why this is the case by scrutinising the peculiarities of the gas metrology area. It considers in particular: primary methods, dissemination of metrological traceability, and the role of documentary standards and accreditation bodies in promulgating best practice. There is also a discussion of documentary standards relevant to the NMI and reference material producer community that need clarification, and of the impact that key stakeholders in the quality infrastructure can have on these issues.
Flight of Sharovipteryx mirabilis: the world's first delta-winged glider.
Dyke, G J; Nudds, R L; Rayner, J M V
2006-07-01
The 225 million-year-old reptile Sharovipteryx mirabilis was the world's first delta-winged glider; this remarkable animal had a flight surface composed entirely of a hind-limb membrane. We use standard delta-wing aerodynamics to reconstruct the flight of S. mirabilis, demonstrating that wing shape could have been controlled simply by protraction of the femora at the knees and by variation in the incidence of a small forelimb canard. Our method allows us to show how realistic estimates of glide performance can set limits on aerodynamic design in this small animal. Our novel interpretation of the bizarre flight mode of S. mirabilis is the first based directly on interpretation of the fossil itself and the first grounded in aerodynamics.
DOE interpretations Guide to OSH standards. Update to the Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-03-31
Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four-volume reference set that you presently have in your possession.
Standard methods for open hole tension testing of textile composites
NASA Technical Reports Server (NTRS)
Portanova, M. A.; Masters, J. E.
1995-01-01
Sizing effects have been investigated by comparing the open hole failure strengths of each of four different braided architectures as a function of specimen thickness, hole diameter, and the ratio of specimen width to hole diameter. The data used to make these comparisons were primarily generated by Boeing. Direct comparisons with Boeing's results were made with experiments conducted at West Virginia University whenever possible. Indirect comparisons were made with test results for other 2-D braids and 3-D weaves tested by Boeing and Lockheed. In general, failure strength was found to decrease with increasing plate thickness, increase with decreasing hole size, and decrease with decreasing width-to-diameter ratio. The interpretation of the sensitivity to each of these geometrical parameters was complicated by scatter in the test data. For open hole tension testing of textile composites, the use of standard testing practices employed by industry, such as ASTM D5766 (Standard Test Method for Open Hole Tensile Strength of Polymer Matrix Composite Laminates), should provide adequate results for material comparison studies.
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
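To make the two models concrete, the sketch below simply counts the residues assumed to bind the dye under M1 (Arg, Lys) and M2 (Arg, Lys, His); the peptide string and the helper function are hypothetical illustrations, and the study's actual normalization procedure is not reproduced.

```python
# Hedged sketch: count the residues assumed to bind Coomassie G-250 under
# the two models in the abstract (M1: Arg+Lys; M2: Arg+Lys+His).
# The mapping from counts to quantitation values is study-specific and
# not reproduced here.
def dye_binding_residues(sequence: str, model: str = "M2") -> int:
    residues = {"M1": "RK", "M2": "RKH"}[model]
    return sum(sequence.upper().count(aa) for aa in residues)

example_peptide = "KVFGRCELAAAMKRHGLDNYRGYSLGNWVCAAKFESNFNTQATNRNTDGSTDYGILQINSR"
print("M1 count:", dye_binding_residues(example_peptide, "M1"))
print("M2 count:", dye_binding_residues(example_peptide, "M2"))
```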
The Development of Quality Measures for the Performance and Interpretation of Esophageal Manometry
Yadlapati, Rena; Gawron, Andrew J.; Keswani, Rajesh N.; Bilimoria, Karl; Castell, Donald O.; Dunbar, Kerry B.; Gyawali, Chandra P.; Jobe, Blair A.; Katz, Philip O.; Katzka, David A.; Lacy, Brian E.; Massey, Benson T.; Richter, Joel E.; Schnoll-Sussman, Felice; Spechler, Stuart J.; Tatum, Roger; Vela, Marcelo F.; Pandolfino, John E.
2016-01-01
Background and Aims Esophageal manometry (EM) is the gold standard for the diagnosis of esophageal motility disorders. Variations in the performance and interpretation of EM result in discrepant diagnoses and unnecessary repeated procedures, and may negatively impact patient outcomes. A method to benchmark the procedural quality of EM is needed. The primary aim of this study was to develop quality measures for performing and interpreting EM. Methods The RAND/University of California, Los Angeles Appropriateness Methodology (RAM) was utilized. Fifteen experts in esophageal manometry were invited to be a part of the panel. Potential quality measures were identified through a literature search and interviews with experts. The expert panel ranked the proposed quality measures for appropriateness via a two-round process on the basis of RAM. Results Fourteen experts participated in all processes. A total of 29 measures were considered; 17 of these measures were ranked as appropriate and related to competency (2), pre-procedure (2), procedure (3) and interpretation (10). The latter 10 were integrated into a single composite measure. Thus, 8 final measures were determined to be appropriate quality measures for EM. Five strong recommendations were also endorsed by the experts, however they were not ranked as appropriate quality measures. Conclusions Eight formally validated quality measures for the performance and interpretation of EM were developed on the basis of RAM. These measures represent key aspects of a high-quality EM study and should be uniformly adopted. Evaluation of these measures in clinical practice is needed to assess their impact on outcomes. PMID:26499925
Analysis of Indonesian educational system standard with KSIM cross-impact method
NASA Astrophysics Data System (ADS)
Arridjal, F.; Aldila, D.; Bustamam, A.
2017-07-01
The results of the Programme for International Student Assessment (PISA) in 2012 show that Indonesia ranked 64th of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is included in the tenth category of countries with the lowest performance on the cognitive skills aspect, i.e. 37th of 40 countries. Competency is built from three aspects, one of which is the cognitive aspect. The low mapping results on the cognitive aspect reflect the low competency of graduates, the output of the Indonesian National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, connected directly with Indonesia's students. This research aims to model INES using the KSIM cross-impact method. Linear regression models of the EESS are constructed using national accreditation data for senior high schools in Indonesia. The results are then interpreted as impact values in the construction of the KSIM cross-impact model of INES. The construction is used to analyze the interaction of the EESS and to run numerical simulations of possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process and infrastructure standards. All public policy simulations were performed with two methods, i.e. a multiplier impact method and a constant intervention method. The numerical simulations show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option for maximizing the growth of the graduate competency standard.
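For readers unfamiliar with KSIM, the following minimal sketch implements the classic Kane-style cross-impact update rule, in which each standard is a bounded variable in (0, 1) raised to an exponent built from the signed impact matrix. The impact values and the two variables shown are invented placeholders, not the paper's calibrated model.

```python
# Minimal KSIM (Kane's cross-impact simulation) sketch, assuming the classic
# update rule x_i <- x_i ** p_i, with p_i built from the signed impact matrix.
# Variables stay bounded in (0, 1) by construction.
import numpy as np

def ksim_step(x, impacts, dt=0.1):
    # impacts[i, j] = signed impact of variable j on variable i
    neg = (np.abs(impacts) - impacts) @ x   # inhibiting contributions (numerator)
    pos = (np.abs(impacts) + impacts) @ x   # enhancing contributions (denominator)
    p = (1 + dt / 2 * neg) / (1 + dt / 2 * pos)
    return x ** p                            # p < 1 raises x, p > 1 lowers it

x = np.array([0.4, 0.3])                 # [content standard, graduate competency]
impacts = np.array([[0.0, 0.0],
                    [0.6, 0.0]])         # content standard boosts competency
for _ in range(50):
    x = ksim_step(x, impacts)
print(x)  # competency drifts upward while remaining in (0, 1)
```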
A Primer on Health Economic Evaluations in Thoracic Oncology.
Whittington, Melanie D; Atherly, Adam J; Bocsi, Gregary T; Camidge, D Ross
2016-08-01
There is growing interest for economic evaluation in oncology to illustrate the value of multiple new diagnostic and therapeutic interventions. As these analyses have started to move from specialist publications into mainstream medical literature, the wider medical audience consuming this information may need additional education to evaluate it appropriately. Here we review standard practices in economic evaluation, illustrating the different methods with thoracic oncology examples where possible. When interpreting and conducting health economic studies, it is important to appraise the method, perspective, time horizon, modeling technique, discount rate, and sensitivity analysis. Guidance on how to do this is provided. To provide a method to evaluate this literature, a literature search was conducted in spring 2015 to identify economic evaluations published in the Journal of Thoracic Oncology. Articles were reviewed for their study design, and areas for improvement were noted. Suggested improvements include using more rigorous sensitivity analyses, adopting a standard approach to reporting results, and conducting complete economic evaluations. Researchers should design high-quality studies to ensure the validity of the results, and consumers of this research should interpret these studies critically on the basis of a full understanding of the methodologies used before considering any of the conclusions. As advancements occur on both the research and consumer sides, this literature can be further developed to promote the best use of resources for this field. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
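As a minimal illustration of two of the elements listed above (the discount rate and a comparative result), the sketch below discounts hypothetical cost and QALY streams to present value and computes an incremental cost-effectiveness ratio; every number is invented.

```python
# Hedged illustration of discounting and an incremental cost-effectiveness
# ratio (ICER). All values are invented, not drawn from the reviewed studies.
def present_value(stream, rate=0.03):
    # discount a yearly stream back to year 0 at the given annual rate
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

new_costs, new_qalys = [60000, 15000, 15000], [0.8, 0.75, 0.7]
std_costs, std_qalys = [20000, 10000, 10000], [0.7, 0.6, 0.5]

icer = ((present_value(new_costs) - present_value(std_costs))
        / (present_value(new_qalys) - present_value(std_qalys)))
print(f"ICER: ${icer:,.0f} per QALY gained")
```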
NASA Astrophysics Data System (ADS)
Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth
2015-11-01
This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research project. Fifty-five students participated in a performance assessment interview at the beginning and end of a place-based investigation. Two forms of the assessment were created and counterbalanced within class at pre and post. In situ scoring was conducted such that responses were scored as correct versus incorrect during the assessment's administration. Criterion validity analysis demonstrated an age-level progression in student scores. Tests of discriminant validity showed that the instrument detected variability in interpretation skills across each of three graph types (line, bar, dot plot). Convergent validity was established by correlating in situ scores with those from the Graph Interpretation Scoring Rubric. Students' proficiency with interpreting different types of graphs matched expectations based on age and the standards-based progression of graphs across primary school grades. The assessment tasks were also effective at detecting pre-post gains in students' interpretation of line graphs and dot plots after the place-based project. The results of the case study are discussed in relation to the common challenges associated with performance assessment. Implications are presented in relation to the need for authentic and performance-based instructional and assessment tasks to respond to the Common Core State Standards and the Next Generation Science Standards.
Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I
2018-05-01
Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.
Development of WAIS-III General Ability Index Minus WMS-III memory discrepancy scores.
Lange, Rael T; Chelune, Gordon J; Tulsky, David S
2006-09-01
Analysis of the discrepancy between intellectual functioning and memory ability has received some support as a useful means for evaluating memory impairment. In recent additions to Wechsler scale interpretation, the WAIS-III General Ability Index (GAI) and the WMS-III Delayed Memory Index (DMI) were developed. The purpose of this investigation is to develop base rate data for GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores using data from the WAIS-III/WMS-III standardization sample (weighted N = 1,250). Base rate tables were developed using the predicted-difference method and two simple-difference methods (i.e., stratified and non-stratified). These tables provide valuable data for clinical reference purposes to determine the frequency of GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores in the WAIS-III/WMS-III standardization sample.
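A schematic of the two approaches named above, assuming index scores with mean 100 and SD 15: the simple difference subtracts the memory index from the GAI directly, while the predicted-difference method first predicts the memory index from the GAI via regression. The correlation used is a made-up placeholder; the actual base rates come from the standardization-sample tables.

```python
# Sketch of simple-difference vs predicted-difference discrepancy scores for
# index scores with mean 100, SD 15. The GAI-DMI correlation r is a
# placeholder, not the value from the WAIS-III/WMS-III sample.
def simple_difference(gai, dmi):
    return gai - dmi

def predicted_difference(gai, dmi, r=0.6):
    predicted_dmi = 100 + r * (gai - 100)   # regression-based prediction
    return predicted_dmi - dmi

gai, dmi = 112, 94
print("simple difference:   ", simple_difference(gai, dmi))
print("predicted difference:", round(predicted_difference(gai, dmi), 1))
```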
ERIC Educational Resources Information Center
Van den Berghe, Wouter
This report brings together European experience on the interpretation and implementation of ISO 9000 in education and training (ET) environments. Chapter 1 discusses the importance of quality concepts in ET and summarizes key concepts of total quality management (TQM) and its relevance for ET. Chapter 2 introduces the ISO 9000 standards. It…
ERIC Educational Resources Information Center
Salleh, Safrul Izani Mohd; Gardner, John C.; Sulong, Zunaidah; McGowan, Carl B., Jr.
2011-01-01
This study examines the differences in the interpretation of ten "in context" verbal probability expressions used in accounting standards between native Chinese speaking and native English speaking accounting students in United Kingdom universities. The study assesses the degree of grouping factors consensus on the numerical…
Niederstebruch, N; Sixt, D
2013-02-01
In the industrial world, the agar diffusion test is a standard procedure for the susceptibility testing of bacterial isolates. Beta-hemolytic Streptococcus spp. are tested with Müller-Hinton agar supplemented with 5% blood, a so-called blood agar. The results are interpreted using standardized tables, which only exist for this type of nutrient matrix. Because of a number of difficulties, with respect both to technical issues and to manual skills, blood agar is not a feasible option in many developing countries. Beta-hemolytic Streptococcus spp. also grow on Standard Nutrient Agar 1 (StNA1), which suggests using that type of nutrient medium for running agar diffusion tests. However, there are no standardized tables that can be used for interpreting the diameters of the zones of inhibition on StNA1. Using the existing standardized tables for blood agar to interpret cultures on StNA1 would be of great benefit in circumstances where blood agar is not available. With this in mind, we conducted comparative tests to evaluate the growth characteristics of beta-hemolytic Streptococcus spp. on StNA1 compared to Müller-Hinton agar supplemented with 5% sheep blood. In this study, we were able to show that beta-hemolytic Streptococcus spp. develop similar zones of inhibition on blood agar and on StNA1. Therefore, it is suggested that, for the interpretation of antibiograms of beta-hemolytic Streptococcus spp. performed on StNA1, the standard tables for blood agar can be used.
Tuzun, Erdem; Berrih-Aknin, Sonia; Brenner, Talma; Kusner, Linda L; Le Panse, Rozen; Yang, Huan; Tzartos, Socrates; Christadoss, Premkumar
2015-08-01
Myasthenia gravis (MG) is an autoimmune disorder characterized by generalized muscle weakness due to neuromuscular junction (NMJ) dysfunction brought by acetylcholine receptor (AChR) antibodies in most cases. Although steroids and other immunosuppressants are effectively used for treatment of MG, these medications often cause severe side effects and a complete remission cannot be obtained in many cases. For pre-clinical evaluation of more effective and less toxic treatment methods for MG, the experimental autoimmune myasthenia gravis (EAMG) induced by Torpedo AChR immunization has become one of the standard animal models. Although numerous compounds have been recently proposed for MG mostly by using the active immunization EAMG model, only a few have been proven to be effective in MG patients. The variability in the experimental design, immunization methods and outcome measurements of pre-clinical EAMG studies make it difficult to interpret the published reports and assess the potential for application to MG patients. In an effort to standardize the active immunization EAMG model, we propose standard procedures for animal care conditions, sampling and randomization of mice, experimental design and outcome measures. Utilization of these standard procedures might improve the power of pre-clinical EAMG experiments and increase the chances for identifying promising novel treatment methods that can be effectively translated into clinical trials for MG. Copyright © 2015 Elsevier Inc. All rights reserved.
Automated indirect immunofluorescence evaluation of antinuclear autoantibodies on HEp-2 cells.
Voigt, Jörn; Krause, Christopher; Rohwäder, Edda; Saschenbrecker, Sandra; Hahn, Melanie; Danckwardt, Maick; Feirer, Christian; Ens, Konstantin; Fechner, Kai; Barth, Erhardt; Martinetz, Thomas; Stöcker, Winfried
2012-01-01
Indirect immunofluorescence (IIF) on human epithelial (HEp-2) cells is considered the gold standard screening method for the detection of antinuclear autoantibodies (ANA). However, in terms of automation and standardization, it has not been able to keep pace with most other analytical techniques used in diagnostic laboratories. Although there are already some automation solutions for IIF incubation in the market, the automation of result evaluation is still in its infancy. Therefore, the EUROPattern Suite has been developed as a comprehensive automated processing and interpretation system for standardized and efficient ANA detection by HEp-2 cell-based IIF. In this study, the automated pattern recognition was compared to conventional visual interpretation in a total of 351 sera. In the discrimination of positive from negative samples, concordant results between visual and automated evaluation were obtained for 349 sera (99.4%, kappa = 0.984). The system missed none of the 272 antibody-positive samples and identified 77 out of 79 visually negative samples (analytical sensitivity/specificity: 100%/97.5%). Moreover, 94.0% of all main antibody patterns were recognized correctly by the software. Owing to its performance characteristics, EUROPattern enables fast, objective, and economic IIF ANA analysis and has the potential to reduce intra- and interlaboratory variability. PMID:23251220
Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary
2009-01-01
Background Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills, but published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle for EKG interpretation skills among medical students. Methods This is a reader-blinded crossover trial. Third-year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam, and two extra sessions with the teaching puzzle followed by a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first, followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. Results The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1, compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Conclusion Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxed, and warrant further investigation on a larger scale. PMID:19144134
2011-01-01
Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
Translating Radiometric Requirements for Satellite Sensors to Match International Standards.
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument.
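A minimal sketch of the combination step described above, assuming independent requirement components expressed as standard uncertainties: they are combined by root-sum-square (the propagation-of-uncertainties formula for independent inputs) and optionally expanded with a coverage factor. The component names and values are placeholders, not actual ABI/GOES-R requirement numbers.

```python
# Sketch: combine independent error-style requirement terms into a single
# combined standard uncertainty via root-sum-square, then expand with a
# k=2 coverage factor (~95% coverage). Component values are placeholders.
import math

components = {            # 1-sigma standard uncertainties, e.g. in percent
    "noise": 0.30,
    "calibration": 0.25,
    "stray_light": 0.10,
}
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
u_expanded = 2 * u_combined

print(f"combined: {u_combined:.3f}%, expanded (k=2): {u_expanded:.3f}%")
```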
Oster, Natalia V; Carney, Patricia A; Allison, Kimberly H; Weaver, Donald L; Reisch, Lisa M; Longton, Gary; Onega, Tracy; Pepe, Margaret; Geller, Berta M; Nelson, Heidi D; Ross, Tyler R; Tosteson, Aanna N A; Elmore, Joann G
2013-02-05
Diagnostic test sets are a valuable research tool that contributes importantly to the validity and reliability of studies that assess agreement in breast pathology. In order to fully understand the strengths and weaknesses of any agreement and reliability study, however, the methods should be fully reported. In this paper we provide a step-by-step description of the methods used to create four complex test sets for a study of diagnostic agreement among pathologists interpreting breast biopsy specimens. We use the newly developed Guidelines for Reporting Reliability and Agreement Studies (GRRAS) as a basis for reporting these methods. Breast tissue biopsies were selected from the National Cancer Institute-funded Breast Cancer Surveillance Consortium sites. We used random sampling stratified according to the woman's age (40-49 vs. ≥50), parenchymal breast density (low vs. high) and the interpretation of the original pathologist. A 3-member panel of expert breast pathologists first independently interpreted each case using five primary diagnostic categories (non-proliferative changes, proliferative changes without atypia, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma). When the experts did not unanimously agree on a case diagnosis, a modified Delphi method was used to determine the reference standard consensus diagnosis. The final test cases were stratified and randomly assigned into one of four unique test sets. We found the GRRAS recommendations very useful for reporting diagnostic test set development and recommend inclusion of two additional criteria: 1) characterizing the study population and 2) describing the methods for reference diagnosis, when applicable.
Inference of median difference based on the Box-Cox model in randomized clinical trials.
Maruo, K; Isogawa, N; Gosho, M
2015-05-10
In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
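The following sketch illustrates the core idea under stated assumptions: because the Box-Cox transform is monotone, a location estimate on the transformed scale can be mapped back through the inverse transform to a median on the original measurement scale. It is a schematic of the idea only, not the paper's covariance-adjusted procedure.

```python
# Schematic of inferring an original-scale median difference via a common
# Box-Cox transform; not the paper's full covariance-adjusted method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=1.0, sigma=0.6, size=100)   # skewed toy data
group_b = rng.lognormal(mean=1.3, sigma=0.6, size=100)

_, lam = stats.boxcox(np.concatenate([group_a, group_b]))  # common lambda

def inv_boxcox(z, lam):
    # inverse of (x**lam - 1) / lam; monotone, so medians map to medians
    return np.exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

med_a = inv_boxcox(np.mean(stats.boxcox(group_a, lmbda=lam)), lam)
med_b = inv_boxcox(np.mean(stats.boxcox(group_b, lmbda=lam)), lam)
print(f"estimated original-scale median difference: {med_b - med_a:.3f}")
```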
Comprehension and reproducibility of the Judet and Letournel classification
Polesello, Giancarlo Cavalli; Nunes, Marcus Aurelius Araujo; Azuaga, Thiago Leonardi; de Queiroz, Marcelo Cavalheiro; Honda, Emerson Kyoshi; Ono, Nelson Keiske
2012-01-01
Objective To evaluate the effectiveness of the method of radiographic interpretation of acetabular fractures, according to the classification of Judet and Letournel, used by a group of residents of Orthopedics at a university hospital. Methods We selected ten orthopedic residents, who were divided into two groups; one group received training in a methodology for the classification of acetabular fractures, which involves transposing the radiographic images to a graphic two-dimensional representation. We classified fifty cases of acetabular fracture on two separate occasions, and determined the intraobserver and interobserver agreement. Result The success rate was 16.2% (10-26%) for the trained group and 22.8% (10-36%) for the untrained group. The mean kappa coefficients for interobserver and intraobserver agreement in the trained group were 0.08 and 0.12, respectively, and for the untrained group, 0.14 and 0.29. Conclusion Training in the method of radiographic interpretation of acetabular fractures was not effective for assisting in the classification of acetabular fractures. Level of evidence I, Testing of previously developed diagnostic criteria on consecutive patients (with universally applied reference "gold" standard). PMID:24453583
Some comments on the substituted judgement standard.
Egonsson, Dan
2010-02-01
On a traditional interpretation of the substituted judgement standard (SJS) a person who makes treatment decisions on behalf of a non-competent patient (e.g. concerning euthanasia) ought to decide as the patient would have decided had she been competent. I propose an alternative interpretation of SJS in which the surrogate is required to infer what the patient actually thought about these end-of-life decisions. In clarifying SJS it is also important to differentiate the patient's consent and preference. If SJS is part of an autonomy ideal of the sort found in Kantian ethics, consent seems more important than preference. From a utilitarian perspective a preference-based reading of SJS seems natural. I argue that the justification of SJS within a utilitarian framework will boil down to the question whether a non-competent patient can be said to have any surviving preferences. If we give a virtue-ethical justification of SJS the relative importance of consent and preferences depends on which virtue one stresses--respect or care. I argue that SJS might be an independent normative method for extending the patient's autonomy, both from a Kantian and a virtue ethical perspective.
Bone development in laboratory mammals used in developmental toxicity studies.
DeSesso, John M; Scialli, Anthony R
2018-06-19
Evaluation of the skeleton in laboratory animals is a standard component of developmental toxicology testing. Standard methods of performing the evaluation have been established, and modification of the evaluation using imaging technologies is under development. The embryology of the rodent, rabbit, and primate skeleton has been characterized in detail and summarized herein. The rich literature on variations and malformations in skeletal development that can occur in the offspring of normal animals and animals exposed to test articles in toxicology studies is reviewed. These perturbations of skeletal development include ossification delays, alterations in number, shape, and size of ossification centers, and alterations in numbers of ribs and vertebrae. Because the skeleton is undergoing developmental changes at the time fetuses are evaluated in most study designs, transient delays in development can produce apparent findings of abnormal skeletal structure. The determination of whether a finding represents a permanent change in embryo development with adverse consequences for the organism is important in study interpretation. Knowledge of embryological processes and schedules can assist in interpretation of skeletal findings. © 2018 The Authors. Birth Defects Research Published by Wiley Periodicals, Inc.
Gupta, Malika; Cox, Amanda; Nowak-Węgrzyn, Anna; Wang, Julie
2018-02-01
Food allergy diagnosis remains challenging. Most standard methods are unable to differentiate sensitization from clinical allergy. Recognizing food allergy is of utmost importance to prevent life-threatening reactions. On the other hand, faulty interpretation of tests leads to overdiagnosis and unnecessary food avoidances. Highly predictive models have been established for major food allergens based on skin prick testing and food-specific immunoglobulin E but are lacking for most other foods. Although many newer diagnostic techniques are improving the accuracy of food allergy diagnostics, an oral food challenge remains the only definitive method of confirming a food allergy. Copyright © 2017 Elsevier Inc. All rights reserved.
Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S
2017-11-01
Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
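As one concrete example of the signal-conditioning choices the report urges investigators to document, the sketch below applies a zero-phase Butterworth band-pass filter to a toy trace; the cutoffs and sampling rate are arbitrary examples, not recommendations from the report.

```python
# Minimal signal-conditioning sketch: zero-phase band-pass filtering of a
# toy field recording. Parameters are arbitrary illustrations only.
import numpy as np
from scipy import signal

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)  # toy trace

b, a = signal.butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
filtered = signal.filtfilt(b, a, eeg)        # zero-phase, avoids distortion
print(filtered[:5])
```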
Design and initial characterization of the SC-200 proteomics standard mixture.
Bauman, Andrew; Higdon, Roger; Rapson, Sean; Loiue, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald; Kolker, Eugene
2011-01-01
High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn requires complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins is unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels.
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2017-04-01
The Soil Conservation Service Curve Number (SCS-CN) method, originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use maps is limited by the high cost of production, so classification of satellite images is an alternative way to obtain land use maps. In this study, the Normalized Difference Vegetation Index (NDVI) in the Chen-You-Lan Watershed was derived from dry- and wet-season Landsat imagery during 2003 - 2008. Land covers were interpreted from the mean value and standard deviation of NDVI and categorized into four groups, i.e. forest, grassland, agriculture and bare land. The runoff volumes of typhoon events during 2005 - 2009 were then estimated using the SCS-CN method and verified against measured runoff data. The result showed a model efficiency coefficient of 90.77%. Therefore, estimating runoff using a land cover map classified from satellite images is practicable.
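The SCS-CN calculation itself is compact; a sketch in SI units follows, using the standard initial-abstraction ratio of 0.2. The curve number shown is an arbitrary example, not one of the study's calibrated land-cover values.

```python
# SCS-CN direct runoff depth in millimetres. The CN value below is an
# arbitrary example, not a calibrated value from the study.
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = 0.2 * s                    # initial abstraction, standard ratio
    if p_mm <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=150.0, cn=75.0))   # direct runoff depth in mm
```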
Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof
2016-01-01
Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting.
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular constructional typology (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
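A minimal sketch of the hashing idea as described, under the assumption that a plan-view binary occupancy image is an adequate pixel representation: similar projections then yield identical hashes. The grid resolution and hash function are illustrative choices, not the paper's implementation.

```python
# Hedged sketch: project a point cloud to a 2-D binary occupancy image and
# hash it, so that identically shaped projections collide. Grid size and
# hash choice are illustrative assumptions.
import hashlib
import numpy as np

def binary_projection_hash(points: np.ndarray, grid: int = 32) -> str:
    xy = points[:, :2]                                   # plan-view projection
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=grid)
    binary = (h > 0).astype(np.uint8)                    # occupancy image
    return hashlib.sha1(binary.tobytes()).hexdigest()

cloud = np.random.default_rng(2).random((5000, 3))       # stand-in segment
print(binary_projection_hash(cloud))
```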
A Method for the Interpretation of Flow Cytometry Data Using Genetic Algorithms.
Angeletti, Cesar
2018-01-01
Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed through introduction of FITS metafiles from 4 normal and 4 acute myeloid leukemia in the artificial intelligence system. Two mathematical algorithms termed 018330 and 025886 were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, both algorithms combined showed high discriminatory power with a receiver operating characteristic (ROC) curve of 0.912. The present results suggest that machine learning systems hold a great promise in the interpretation of hematological flow cytometry data.
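The study's algorithms are not published in the abstract, so the sketch below shows only the generic genetic-algorithm loop (selection, crossover, mutation) on a toy fitness function as an orientation for readers; it is not the study's flow-cytometry discriminator.

```python
# Generic genetic-algorithm loop on a toy fitness function; a stand-in for
# the kind of search the study performs, not its actual discriminator.
import numpy as np

rng = np.random.default_rng(3)
target = rng.random(8)                        # hidden optimum (toy problem)

def fitness(v):
    return -np.sum((v - target) ** 2)         # higher is better

pop = rng.random((40, 8))
for generation in range(200):
    scores = np.array([fitness(v) for v in pop])
    parents = pop[np.argsort(scores)][-20:]   # truncation selection
    mates = parents[rng.integers(0, 20, 20)]  # random mating partners
    mask = rng.random((20, 8)) < 0.5          # uniform crossover
    children = np.where(mask, parents, mates)
    children += rng.normal(0, 0.02, children.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(v) for v in pop])]
print("remaining error:", float(np.sum((best - target) ** 2)))
```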
The arcsine is asinine: the analysis of proportions in ecology.
Warton, David I; Hui, Francis K C
2011-01-01
The arcsine square root transformation has long been standard procedure when analyzing proportional data in ecology, with applications in data sets containing binomial and non-binomial response variables. Here, we argue that the arcsine transform should not be used in either circumstance. For binomial data, logistic regression has greater interpretability and higher power than analyses of transformed data. However, it is important to check the data for additional unexplained variation, i.e., overdispersion, and to account for it via the inclusion of random effects in the model if found. For non-binomial data, the arcsine transform is undesirable on the grounds of interpretability, and because it can produce nonsensical predictions. The logit transformation is proposed as an alternative approach to address these issues. Examples are presented in both cases to illustrate these advantages, comparing various methods of analyzing proportions including untransformed, arcsine- and logit-transformed linear models and logistic regression (with or without random effects). Simulations demonstrate that logistic regression usually provides a gain in power over other methods.
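A compact sketch of the comparison the authors describe, run on simulated binomial data: an OLS fit to arcsine-square-root proportions, an OLS fit to empirical logits, and a binomial GLM (logistic regression). The simulation parameters are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40                                       # trials per observation
x = np.linspace(0, 1, 200)
p = 1 / (1 + np.exp(-(4 * x - 2)))           # true logistic relationship
successes = rng.binomial(n, p)
prop = successes / n

X = sm.add_constant(x)

# 1. Linear model on arcsine-square-root transformed proportions
arcsine_fit = sm.OLS(np.arcsin(np.sqrt(prop)), X).fit()

# 2. Linear model on empirical logits (0.5 correction avoids log(0))
elogit = np.log((successes + 0.5) / (n - successes + 0.5))
logit_fit = sm.OLS(elogit, X).fit()

# 3. Binomial GLM (logistic regression) on the raw counts
glm_fit = sm.GLM(np.column_stack([successes, n - successes]), X,
                 family=sm.families.Binomial()).fit()

for name, res in [("arcsine", arcsine_fit), ("logit", logit_fit), ("GLM", glm_fit)]:
    print(name, res.params[1], res.pvalues[1])
```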
Carrillo, Maria C; Blennow, Kaj; Soares, Holly; Lewczuk, Piotr; Mattsson, Niklas; Oberoi, Pankaj; Umek, Robert; Vandijck, Manu; Salamone, Salvatore; Bittner, Tobias; Shaw, Leslie M; Stephenson, Diane; Bain, Lisa; Zetterberg, Henrik
2013-03-01
Recognizing that international collaboration is critical for the acceleration of biomarker standardization efforts and the efficient development of improved diagnosis and therapy, the Alzheimer's Association created the Global Biomarkers Standardization Consortium (GBSC) in 2010. The consortium brings together representatives of academic centers, industry, and the regulatory community with the common goal of developing internationally accepted common reference standards and reference methods for the assessment of cerebrospinal fluid (CSF) amyloid β42 (Aβ42) and tau biomarkers. Such standards are essential to ensure that analytical measurements are reproducible and consistent across multiple laboratories and across multiple kit manufacturers. Analytical harmonization for CSF Aβ42 and tau will help reduce confusion in the AD community regarding the absolute values associated with the clinical interpretation of CSF biomarker results and enable worldwide comparison of CSF biomarker results across AD clinical studies. Copyright © 2013 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
28 CFR 904.2 - Interpretation of the criminal history record screening requirement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...
28 CFR 904.2 - Interpretation of the criminal history record screening requirement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...
28 CFR 904.2 - Interpretation of the criminal history record screening requirement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...
28 CFR 904.2 - Interpretation of the criminal history record screening requirement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...
28 CFR 904.2 - Interpretation of the criminal history record screening requirement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...
ERIC Educational Resources Information Center
Nimon, Kim; Henson, Robin K.; Gates, Michael S.
2010-01-01
In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…
Middle Grade Students' Interpretations of Contourmaps
ERIC Educational Resources Information Center
Carter, Glenda; Cook, Michelle; Park, John C.; Wiebe, Eric N.; Butler, Susan M.
2008-01-01
This study examined eighth graders' approach to three tasks implemented to assist students with learning to interpret contour maps. Students' approach to and interpretation of these three tasks were analyzed qualitatively. When students were rank ordered according to their scores on a standardized test of spatial ability, the Minnesota Paper Form…
Wolff, A P; Groen, G J; Crul, B J
2001-01-01
Selective spinal nerve infiltration blocks are used diagnostically in patients with chronic low back pain radiating into the leg. Generally, a segmental nerve block is considered successful if the pain is reduced substantially. Hypesthesia and elicited paresthesias coinciding with the presumed segmental level are used as controls. The interpretation depends on a standard dermatomal map. However, it is not clear whether this interpretation is reliable enough, because standard dermatomal maps do not show the overlap of neighboring dermatomes. The goal of the present study is to establish whether dissimilarities exist between areas of hypesthesia, spontaneous pain reported by the patient, pain reduction by local anesthetics, and paresthesias elicited by sensory electrostimulation. A secondary goal is to determine to what extent the interpretation is improved when the overlaps of neighboring dermatomes are taken into account. Patients suffering from chronic low back pain with pain radiating into the leg underwent lumbosacral segmental nerve root blocks at subsequent levels on separate days. Lidocaine (2%, 0.5 mL) mixed with radiopaque fluid (0.25 mL) was injected after verifying the target location using sensory and motor electrostimulation. Sensory changes (pinprick method), paresthesias (reported by the patient), and pain reduction (Numeric Rating Scale) were recorded. Hypesthesia and paresthesias were registered in a standard dermatomal map and in an adapted map which included overlap of neighboring dermatomes. The relationships between spinal level of injection, extent of hypesthesia, location of paresthesias, and corresponding dermatome were assessed quantitatively. Comparison of the results between both dermatomal maps was done by paired t-tests. After inclusion, data were processed for 40 segmental nerve blocks (L2-S1) performed in 29 patients. Pain reduction was achieved in 43%. Hypesthetic areas showed a large variability in size and location, and also in comparison to paresthesias. The mean hypesthetic area amounted to 2.7 +/- 1.4 (+/- SD; range, 0 to 6; standard map) and 3.6 +/- 1.8 (0 to 6; adapted map; P < .001) dermatomes. Hypesthesia in the corresponding dermatome was found in 80% (standard map) and 88% of the cases (adapted map, not significant). Paresthesias occurring in the corresponding dermatome were found in 80% (standard map) compared with 98% (adapted map, P < .001). In 85% (standard map) and 88% (adapted map), spontaneous pain was present in the dermatome corresponding to the level of local anesthetic injection. In 55% (standard map) versus 75% (adapted map, P < .005), a combination of spontaneous pain, hypesthesia, and paresthesias was found in the corresponding dermatome. Hypesthetic areas determined after lumbosacral segmental nerve blocks show a large variability in size and location compared with elicited paresthesias. Confirmation of an adequately performed segmental nerve block, determined by coexistence of hypesthesia, elicited paresthesias, and pain in the presumed dermatome, is more reliable when the overlap of neighboring dermatomes is taken into account.
CAD system for automatic analysis of CT perfusion maps
NASA Astrophysics Data System (ADS)
Hachaj, T.; Ogiela, M. R.
2011-03-01
In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps, cerebral blood flow (CBF), and cerebral blood volume (CBV). These methods perform both quantitative analysis [detection, measurement, and description with a brain anatomy atlas (AA) of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation (decision about the type of lesion: ischemic/hemorrhagic, and whether the brain tissue is at risk of infarction or not) of visualized symptoms is done by so-called cognitive inference processes allowing for reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET framework installed.
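A toy sketch of the quantitative step, detecting left/right perfusion asymmetries by mirroring a CBF map across the midline; the threshold and lesion values are hypothetical, and the atlas-based description and cognitive inference layers are beyond this sketch.

```python
import numpy as np

def perfusion_asymmetry(cbf, rel_thresh=0.3):
    """Flag pixels whose CBF is lower than the mirrored (contralateral)
    value by more than rel_thresh: a crude left/right asymmetry map."""
    mirrored = cbf[:, ::-1]                  # reflect across the midline
    denom = (cbf + mirrored) / 2 + 1e-9
    rel_diff = (cbf - mirrored) / denom
    return rel_diff < -rel_thresh            # hypoperfused vs contralateral

# Hypothetical 128x128 CBF map with a low-flow lesion on one side
cbf = np.full((128, 128), 50.0)
cbf[40:60, 20:40] = 15.0                     # ischemic region
lesion_mask = perfusion_asymmetry(cbf)
print(lesion_mask.sum(), "asymmetric pixels flagged")
```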
E.H. Helmer; T.A. Kennaway; D.H. Pedreros; M.L. Clark; H. Marcano-Vega; L.L. Tieszen; S.R. Schill; C.M.S. Carrington
2008-01-01
Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius...
Competency Assessment in Senior Emergency Medicine Residents for Core Ultrasound Skills.
Schmidt, Jessica N; Kendall, John; Smalley, Courtney
2015-11-01
Quality resident education in point-of-care ultrasound (POC US) is becoming increasingly important in emergency medicine (EM); however, the best methods to evaluate competency in graduating residents have not been established. We sought to design and implement a rigorous assessment of image acquisition and interpretation in POC US in a cohort of graduating residents at our institution. We evaluated nine senior residents in both image acquisition and image interpretation for five core US skills (focused assessment with sonography for trauma (FAST), aorta, echocardiogram (ECHO), pelvic, central line placement). Image acquisition was measured using an observed clinical skills exam (OSCE)-style directed assessment with a standardized patient model. Image interpretation was measured with a multiple-choice exam including normal and pathologic images. Residents performed well on image acquisition, with an average score of 85.7% for core skills and 74% when advanced skills (ovaries, advanced ECHO, advanced aorta) were included. Residents scored well but slightly lower on image interpretation, with an average score of 76%. Senior residents performed well on core POC US skills as evaluated with a rigorous assessment tool. This tool may be developed further for other EM programs to use for graduating resident evaluation.
Alternative Test Methods for Electronic Parts
NASA Technical Reports Server (NTRS)
Plante, Jeannette
2004-01-01
It is common practice within NASA to test electronic parts at the manufacturing lot level to demonstrate, statistically, that parts from the lot tested will not fail in service using generic application conditions. The test methods and the generic application conditions used have been developed over the years through cooperation between NASA, DoD, and industry in order to establish a common set of standard practices. These common practices, found in MIL-STD-883, MIL-STD-750, military part specifications, EEE-INST-002, and other guidelines are preferred because they are considered to be effective and repeatable and their results are usually straightforward to interpret. These practices can sometimes be unavailable to some NASA projects due to special application conditions that must be addressed, such as schedule constraints, cost constraints, logistical constraints, or advances in the technology that make the historical standards an inappropriate choice for establishing part performance and reliability. Alternate methods have begun to emerge and to be used by NASA programs to test parts individually or as part of a system, especially when standard lot tests cannot be applied. Four alternate screening methods will be discussed in this paper: Highly accelerated life test (HALT), forward voltage drop tests for evaluating wire-bond integrity, burn-in options during or after highly accelerated stress test (HAST), and board-level qualification.
Lie, Octavian V; Papanastassiou, Alexander M; Cavazos, José E; Szabó, Ákos C
2015-10-01
Poor seizure outcomes after epilepsy surgery often reflect an incorrect localization of the epileptic sources by standard intracranial EEG interpretation because of limited electrode coverage of the epileptogenic zone. This study investigates whether, in such conditions, source modeling is able to provide more accurate source localization than the standard clinical method that can be used prospectively to improve surgical resection planning. Suboptimal epileptogenic zone sampling is simulated by subsets of the electrode configuration used to record intracranial EEG in a patient rendered seizure free after surgery. sLORETA and the clinical method solutions are applied to interictal spikes sampled with these electrode subsets and are compared for colocalization with the resection volume and displacement due to electrode downsampling. sLORETA provides often congruent and at times more accurate source localization when compared with the standard clinical method. However, with electrode downsampling, individual sLORETA solution locations can vary considerably and shift consistently toward the remaining electrodes. sLORETA application can improve source localization based on the clinical method but does not reliably compensate for suboptimal electrode placement. Incorporating sLORETA solutions based on intracranial EEG in surgical planning should proceed cautiously in cases where electrode repositioning is planned on clinical grounds.
Brandt, Jaden; Alkabanni, Wajd; Alessi-Severini, Silvia; Leong, Christine
2018-04-04
Drug utilization research on benzodiazepines remains important for measuring trends in consumption within and across borders over time for the sake of monitoring prescribing patterns and identifying potential population safety concerns. The defined daily dose (DDD) system by the World Health Organization (WHO) remains the internationally accepted standard for measuring drug consumption; however, beyond consumption, DDD-based results are difficult to interpret when individual agents are compared with one another or are pooled into a total class-based estimate. The diazepam milligram equivalent (DME) system provides approximate conversions between benzodiazepines and Z-drugs (i.e. zopiclone, zolpidem, zaleplon) based on their pharmacologic potency. Despite this, conversion of total dispensed benzodiazepine quantities into DME values retains diazepam milligrams as the total unit of measurement, which is also impractical for population-level interpretation. In this paper, we propose the use of an integrated DME-DDD metric to obviate the limitations encountered when the component metrics are used in isolation. Through a case example, we demonstrate significant change in results between the DDD and DME-DDD method. Unlike the DDD method, the integrated DME-DDD metric offers estimation of population pharmacologic exposure, and enables superior interpretation of drug utilization results, especially for drug class summary reporting.
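A worked example of the integrated metric under stated assumptions: the potency conversion factors below are approximate textbook-style values, not the paper's tables, and 10 mg is the WHO defined daily dose for diazepam.

```python
# Illustrative DME-DDD computation (conversion factors are approximate,
# textbook-style values, not the paper's exact tables).
DME_PER_MG = {"diazepam": 1.0, "lorazepam": 10.0, "zopiclone": 10.0 / 7.5}
DIAZEPAM_DDD_MG = 10.0   # WHO defined daily dose for diazepam

dispensed_mg = {"diazepam": 50_000, "lorazepam": 8_000, "zopiclone": 30_000}

total_dme = sum(DME_PER_MG[d] * mg for d, mg in dispensed_mg.items())
dme_ddd = total_dme / DIAZEPAM_DDD_MG   # class total in diazepam-equivalent DDDs
print(f"{total_dme:.0f} diazepam-equivalent mg = {dme_ddd:.0f} DME-DDDs")
```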
A roadmap for interpreting the literature on vision and driving.
Owsley, Cynthia; Wood, Joanne M; McGwin, Gerald
2015-01-01
Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified attention to this topic has most likely been stimulated by the lack of an evidence basis for determining vision standards for driving licensure and a poor understanding about how vision impairment impacts driver safety and performance. Clinicians depend on the literature on vision and driving to advise visually impaired patients appropriately about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and are thus fair to persons who are visually impaired. Thus it is important for clinicians and policy makers alike to understand how various study designs and measurement methods should be interpreted so that the conclusions and recommendations they make are not overly broad, too narrowly constrained, or even misguided. We offer a methodological framework to guide interpretations of studies on vision and driving that can also serve as a heuristic for researchers in the area. Here, we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.
Caudle, Kelly E.; Dunnenberger, Henry M.; Freimuth, Robert R.; Peterson, Josh F.; Burlison, Jonathan D.; Whirl-Carrillo, Michelle; Scott, Stuart A.; Rehm, Heidi L.; Williams, Marc S.; Klein, Teri E.; Relling, Mary V.; Hoffman, James M.
2017-01-01
Introduction: Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Materials and methods: Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Results: Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached, with 90% of experts agreeing to the final sets of pharmacogenetic terms. Discussion: The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2), 215–223. PMID: 27441996
International recommendations for electrocardiographic interpretation in athletes.
Sharma, Sanjay; Drezner, Jonathan A; Baggish, Aaron; Papadakis, Michael; Wilson, Mathew G; Prutkin, Jordan M; La Gerche, Andre; Ackerman, Michael J; Borjesson, Mats; Salerno, Jack C; Asif, Irfan M; Owens, David S; Chung, Eugene H; Emery, Michael S; Froelicher, Victor F; Heidbuchel, Hein; Adamuz, Carmen; Asplund, Chad A; Cohen, Gordon; Harmon, Kimberly G; Marek, Joseph C; Molossi, Silvana; Niebauer, Josef; Pelto, Hank F; Perez, Marco V; Riding, Nathan R; Saarel, Tess; Schmied, Christian M; Shipon, David M; Stein, Ricardo; Vetter, Victoria L; Pelliccia, Antonio; Corrado, Domenico
2018-04-21
Sudden cardiac death (SCD) is the leading cause of mortality in athletes during sport. A variety of mostly hereditary, structural, or electrical cardiac disorders are associated with SCD in young athletes, the majority of which can be identified or suggested by abnormalities on a resting 12-lead electrocardiogram (ECG). Whether the ECG is used for diagnostic or screening purposes, physicians responsible for the cardiovascular care of athletes should be knowledgeable and competent in its interpretation in athletes. However, in most countries a shortage of physician expertise limits wider application of the ECG in the care of the athlete. A critical need exists for physician education in modern ECG interpretation that distinguishes normal physiological adaptations in athletes from distinctly abnormal findings suggestive of underlying pathology. Since the original 2010 European Society of Cardiology recommendations for ECG interpretation in athletes, ECG standards have evolved quickly over the last decade, pushed by a growing body of scientific data that both tests proposed criteria sets and establishes new evidence to guide refinements. On 26-27 February 2015, an international group of experts in sports cardiology, inherited cardiac disease, and sports medicine convened in Seattle, Washington, to update contemporary standards for ECG interpretation in athletes. The objective of the meeting was to define and revise ECG interpretation standards based on new and emerging research and to develop a clear guide to the proper evaluation of ECG abnormalities in athletes. This statement represents an international consensus for ECG interpretation in athletes and provides expert opinion-based recommendations linking specific ECG abnormalities and the secondary evaluation for conditions associated with SCD.
Reproducibility in Computational Neuroscience Models and Simulations
McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.
2016-01-01
Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies, and standardized annotations; and (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation, and modularity in the development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
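A sketch of two of the background-generation choices compared: uniform random points across the study extent versus points drawn in proportion to a kernel density estimate fitted to the presence records (a continuous-KDE background). Coordinates and sample sizes are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
presences = rng.normal([0.6, 0.4], 0.08, size=(200, 2))   # hypothetical records

# Method 1: uniform random background across the (0,1)x(0,1) study extent
bg_random = rng.uniform(0, 1, size=(1000, 2))

# Method 2: continuous-KDE background -- candidate points kept with
# probability proportional to the presence-density surface (rejection sampling)
kde = gaussian_kde(presences.T)
candidates = rng.uniform(0, 1, size=(20_000, 2))
dens = kde(candidates.T)
keep = rng.uniform(0, dens.max(), size=len(candidates)) < dens
bg_kde = candidates[keep][:1000]

print(len(bg_random), len(bg_kde))
```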
Hassett, Brenna R
2014-03-01
Linear enamel hypoplasia (LEH), the presence of linear defects of dental enamel formed during periods of growth disruption, is frequently analyzed in physical anthropology as evidence for childhood health in the past. However, a wide variety of methods for identifying and interpreting these defects in archaeological remains exists, preventing easy cross-comparison of results from disparate studies. This article compares a standard approach to identifying LEH using the naked eye to the evidence of growth disruption observed microscopically from the enamel surface. This comparison demonstrates that what is interpreted as evidence of growth disruption microscopically is not uniformly identified with the naked eye, and provides a reference for the level of consistency between the number and timing of defects identified using microscopic versus macroscopic approaches. This is done for different tooth types using a large sample of unworn permanent teeth drawn from several post-medieval London burial assemblages. The resulting schematic diagrams showing where macroscopic methods achieve more or less similar results to microscopic methods are presented here and clearly demonstrate that "naked-eye" methods of identifying growth disruptions do not identify LEH as often as microscopic methods in areas where perikymata are more densely packed. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Van doninck, Jasper; Tuomisto, Hanna
2017-06-01
Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
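The simplest approach mentioned, empirical view-angle normalization, can be sketched as fitting the within-image reflectance gradient against view angle and evaluating the fit at nadir; the linear form and the synthetic band below are assumptions for illustration.

```python
import numpy as np

def normalize_to_nadir(reflectance, view_angle):
    """Remove the empirical across-track brightness gradient by fitting
    reflectance against (signed) view angle and re-centering at nadir."""
    coeffs = np.polyfit(view_angle.ravel(), reflectance.ravel(), deg=1)
    trend = np.polyval(coeffs, view_angle)
    nadir = np.polyval(coeffs, 0.0)
    return reflectance - trend + nadir

# Hypothetical near-infrared band with a linear directional gradient
vza = np.tile(np.linspace(-7.5, 7.5, 200), (100, 1))   # Landsat-like +/-7.5 deg
nir = 0.30 + 0.002 * vza + np.random.default_rng(3).normal(0, 0.005, vza.shape)
print(normalize_to_nadir(nir, vza).std())
```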
Kligfield, Paul; Gettes, Leonard S; Bailey, James J; Childers, Rory; Deal, Barbara J; Hancock, E William; van Herpen, Gerard; Kors, Jan A; Macfarlane, Peter; Mirvis, David M; Pahlm, Olle; Rautaharju, Pentti; Wagner, Galen S
2007-03-01
This statement examines the relation of the resting ECG to its technology. Its purpose is to foster understanding of how the modern ECG is derived and displayed and to establish standards that will improve the accuracy and usefulness of the ECG in practice. Derivation of representative waveforms and measurements based on global intervals are described. Special emphasis is placed on digital signal acquisition and computer-based signal processing, which provide automated measurements that lead to computer-generated diagnostic statements. Lead placement, recording methods, and waveform presentation are reviewed. Throughout the statement, recommendations for ECG standards are placed in context of the clinical implications of evolving ECG technology.
ERIC Educational Resources Information Center
Boote, Stacy K.; Boote, David N.
2017-01-01
Students often struggle to interpret graphs correctly, despite emphasis on graphic literacy in U.S. education standards documents. The purpose of this study was to describe challenges sixth graders with varying levels of science and mathematics achievement encounter when transitioning from interpreting graphs having discrete independent variables…
Useful Effect Size Interpretations for Single Case Research
ERIC Educational Resources Information Center
Parker, Richard I.; Hagan-Burke, Shanna
2007-01-01
An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R² as "percent of variance accounted for" do not resound with most visual analysts. In fact, the only comparative analysis widely…
ERIC Educational Resources Information Center
Weber, Wilhelm K.
An examination of translation and conference interpretation as well-established academic professions focuses on how they should be taught in order to maintain the integrity of the two professions and the highest standards in their exercise. An introductory section answers the question, "Can translation and interpretation be taught?,"…
2013-01-01
Background: Shoulder complaints are the third most common musculoskeletal problem in the general population. There is an abundance of physical examination maneuvers for diagnosing shoulder pathology. The validity of these maneuvers has not been adequately addressed. We propose a large Phase III study to investigate the accuracy of these tests in an orthopaedic setting. Methods: We will recruit consecutive new shoulder patients who are referred to two tertiary orthopaedic clinics. We will select which physical examination tests to include using a modified Delphi process. The physician will take a thorough history from the patient and indicate their certainty about each possible diagnosis (certain the diagnosis is absent, present, or requires further testing). The clinician will only perform the physical examination maneuvers for diagnoses where uncertainty remains. We will consider arthroscopy the reference standard for patients who undergo surgery within 8 months of physical examination, and magnetic resonance imaging with arthrogram for patients who do not. We will calculate the sensitivity, specificity, and positive and negative likelihood ratios, and investigate whether combinations of the top tests provide stronger predictions of the presence or absence of disease. Discussion: There are several considerations when performing a diagnostic study to ensure that the results are applicable in a clinical setting. These include: 1) including a representative sample; 2) selecting an appropriate reference standard; 3) avoiding verification bias; 4) blinding the interpreters of the physical examination tests to the interpretation of the gold standard; and 5) blinding the interpreters of the gold standard to the interpretation of the physical examination tests. The results of this study will inform clinicians of which tests, or combinations of tests, successfully reduce diagnostic uncertainty, which tests are misleading, and how physical examination may affect the magnitude of the confidence the clinician feels about their diagnosis. The results of this study may reduce the number of costly and invasive imaging studies (MRI, CT or arthrography) that are requisitioned when uncertainty about diagnosis remains following history and physical exam. We also hope to reduce the variability between specialists in which maneuvers are used during physical examination and how they are used, all of which will assist in improving consistency of care between centres. PMID:23394210
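For reference, the planned accuracy measures follow directly from a 2x2 table against the reference standard; the helper below uses hypothetical counts.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table
    (physical exam result vs. reference standard)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)        # how much a positive test raises the odds
    lr_neg = (1 - sens) / spec        # how much a negative test lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for one maneuver vs. arthroscopy/MR arthrogram
print(diagnostic_stats(tp=45, fp=20, fn=15, tn=70))
```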
Miller, Matthew P; Kostakoglu, Lale; Pryma, Daniel; Yu, Jian Qin; Chau, Albert; Perlman, Eric; Clarke, Bonnie; Rosen, Donald; Ward, Penelope
2017-10-01
18F-Fluciclovine is a novel PET/CT tracer. This blinded image evaluation (BIE) sought to demonstrate that, after limited training, readers naïve to 18F-fluciclovine could interpret 18F-fluciclovine images from subjects with biochemically recurrent prostate cancer with acceptable diagnostic performance and reproducibility. The primary objectives were to establish individual readers' diagnostic performance and the overall interpretation (2/3 reader concordance) compared with standard-of-truth data (histopathology or clinical follow-up) and to evaluate interreader reproducibility. Secondary objectives included comparison to the expert reader and assessment of intrareader reproducibility. Methods: 18F-Fluciclovine PET/CT images (n = 121) and corresponding standard-of-truth data were collected from 110 subjects at Emory University using a single-time-point static acquisition starting 5 min after injection of approximately 370 MBq of 18F-fluciclovine. Three readers were trained using standardized interpretation methodology and subsequently evaluated the images in a blinded manner. Analyses were conducted at the lesion, region (prostate, including bed and seminal vesicle, or extraprostatic, including all lymph nodes, bone, or soft-tissue metastasis), and subject level. Results: Lesion-level overall positive predictive value was 70.5%. The readers' positive predictive value and negative predictive value were broadly consistent with each other and with the onsite read. Sensitivity was highest for readers 1 and 2 (68.5% and 63.9%, respectively), whereas specificity was highest for reader 3 (83.6%). Overall, prostate-level sensitivity was high (91.4%), but specificity was moderate (48.7%). Interreader agreement was 94.7%, 74.4%, and 70.3% at the lesion, prostate, and extraprostatic levels, respectively, with associated Fleiss' κ-values of 0.54, 0.50, and 0.57. Intrareader agreement was 97.8%, 96.9%, and 99.1% at the lesion level; 100%, 100%, and 91.7% in the prostate region; and 83.3%, 75.0%, and 83.3% in the extraprostatic region for readers 1, 2, and 3, respectively. Concordance between the BIE and the onsite reader exceeded 75% for each reader at the lesion, region, and subject levels. Conclusion: Specific training in the use of standardized interpretation methodology for assessment of 18F-fluciclovine PET/CT images enables naïve readers to achieve acceptable diagnostic performance and reproducibility when staging recurrent prostate cancer. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
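A sketch of how the reported reproducibility statistics can be computed: statsmodels' fleiss_kappa applied to three readers' binary lesion calls. The call data here are simulated, not the study's.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical lesion-level calls (0 = negative, 1 = positive);
# rows = lesions, columns = the three blinded readers
rng = np.random.default_rng(7)
truth = rng.integers(0, 2, 121)
calls = np.column_stack([
    np.where(rng.random(121) < 0.9, truth, 1 - truth) for _ in range(3)
])

table, _ = aggregate_raters(calls)     # lesions x categories count table
print("Fleiss' kappa:", round(fleiss_kappa(table), 2))
```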
NASA Technical Reports Server (NTRS)
Hughes, William O.; McNelis, Anne M.; Chris Nottoli; Eric Wolfram
2015-01-01
Absorption coefficients for material specimens are needed to quantify the expected acoustic performance of a material in its actual usage and environment. The ASTM C423-09a standard, "Standard Test Method for Sound Absorption and Sound Absorption Coefficients by the Reverberation Room Method," is often used to measure the absorption coefficient of material test specimens. This method has its basis in the Sabine formula. Although widely used, the interpretation of these measurements is a topic of interest. For example, in certain cases the measured Sabine absorption coefficients are greater than 1.0 for highly absorptive materials. This is often attributed to the diffraction edge effect phenomenon. An investigative test program to measure the absorption properties of highly absorbent melamine foam has been performed at the Riverbank Acoustical Laboratories. This paper will present and discuss the test results relating to the effects of the test materials' surface area, thickness, and edge sealing conditions. A follow-on paper is envisioned that will present and discuss the results relating to the spacing between multiple-piece specimens, and the mounting condition of the test specimen.
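For context, the ASTM C423 measurement rests on the Sabine relation between sound decay rate and absorption; in the standard's formulation (stated here from the Sabine theory, as a reference rather than a quotation),

$$A = 0.9210\,\frac{V}{c}\,d, \qquad \alpha = \frac{A_2 - A_1}{S},$$

where V is the room volume, c the speed of sound, d the measured decay rate (dB/s), A_1 and A_2 the absorption of the empty room and of the room with the specimen, and S the nominal specimen face area. Because edge diffraction adds absorption not reflected in the nominal area S, the computed alpha can exceed 1.0 for highly absorptive specimens.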
Fraccaro, Paolo; Nicolo, Massimo; Bonetto, Monica; Giacomini, Mauro; Weller, Peter; Traverso, Carlo Enrico; Prosperi, Mattia; O'Sullivan, Dympna
2015-01-27
To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) "black-box" approaches, for automated diagnosis of Age-related Macular Degeneration (AMD). Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable techniques known as white-box methods, including logistic regression and decision trees, as well as less interpretable techniques known as black-box methods, such as support vector machines (SVM), random forests, and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were used to assess performance. The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression, and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD. Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHRs to provide physicians with real-time (interpretable) support.
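A hedged sketch of the white-box versus black-box comparison on synthetic stand-in data (the real EHR features and cohort are not reproduced here): logistic regression against a random forest, scored by AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the EHR features: age plus binary clinical signs
rng = np.random.default_rng(0)
n = 912
age = rng.normal(70, 10, n)
soft_drusen = rng.integers(0, 2, n)
other_signs = rng.integers(0, 2, (n, 4))
logit = 0.08 * (age - 70) + 2.0 * soft_drusen - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))           # AMD diagnosis
X = np.column_stack([age, soft_drusen, other_signs])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(type(model).__name__, round(auc, 2))
```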
Huang, Ay Huey; Wu, Jiunn Jong; Weng, Yu Mei; Ding, Hwia Cheng; Chang, Tsung Chain
1998-01-01
Nonfastidious aerobic gram-negative bacilli (GNB) are commonly isolated from blood cultures. The feasibility of using an electrochemical method for direct antimicrobial susceptibility testing of GNB in positive blood cultures was evaluated. An aliquot (10 μl) of 1:10-diluted positive blood cultures containing GNB was inoculated into the Bactometer module well (bioMérieux Vitek, Hazelwood, Mo.) containing 1 ml of Mueller-Hinton broth supplemented with an antibiotic. Susceptibility tests were performed in a breakpoint broth dilution format, with the results being categorized as resistant, intermediate, or susceptible. Seven antibiotics (ampicillin, cephalothin, gentamicin, amikacin, cefamandole, cefotaxime, and ciprofloxacin) were used in this study, with each agent being tested at the two interpretive breakpoint concentrations. The inoculated modules were incubated at 35°C, and the change in impedance in each well was continuously monitored for 24 h by the Bactometer. The MICs of the seven antibiotics for each blood isolate were also determined by the standardized broth microdilution method. Of 146 positive blood cultures (1,022 microorganism-antibiotic combinations) containing GNB tested by the direct method, the rates of very major, major, and minor errors were 0, 1.1, and 2.5%, respectively. The impedance method was simple; no centrifugation, preincubation, or standardization of the inocula was required, and the susceptibility results were normally available within 3 to 6 h after inoculation. The rapid method may allow proper antimicrobial treatment almost 30 to 40 h before the results of the standard methods are available. PMID:9738038
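The breakpoint-format reading described can be captured in a few lines; this is an illustrative rendering of the categorization logic, not the Bactometer's actual software.

```python
def interpret_breakpoints(grows_at_low, grows_at_high):
    """Breakpoint broth-dilution reading: growth is judged at the two
    interpretive breakpoint concentrations of each antibiotic."""
    if grows_at_high:
        return "resistant"
    if grows_at_low:
        return "intermediate"
    return "susceptible"

print(interpret_breakpoints(grows_at_low=True, grows_at_high=False))
```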
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic, but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
Interpretation of Blood Microbiology Results - Function of the Clinical Microbiologist.
Kristóf, Katalin; Pongrácz, Júlia
2016-04-01
The proper use and interpretation of blood microbiology results may be one of the most challenging and one of the most important functions of clinical microbiology laboratories. Effective implementation of this function requires careful consideration of specimen collection and processing, pathogen detection techniques, and prompt and precise reporting of identification and susceptibility results. The responsibility of the treating physician is the proper formulation of the analytical request and providing the laboratory with complete and precise patient information, which are indispensable prerequisites for proper testing and interpretation. The clinical microbiologist can offer advice concerning the differential diagnosis, sampling techniques, and detection methods to facilitate diagnosis. Rapid detection methods are essential, since the sooner a pathogen is detected, the better the patient's chance of being cured. Besides the gold-standard blood culture technique, microbiologic methods that decrease the time to a relevant result are increasingly utilized today. In the case of certain pathogens, the pathogen can be identified directly from the blood culture bottle after propagation with serological or automated/semi-automated systems, molecular methods, or MALDI-TOF MS (matrix-assisted laser desorption-ionization time of flight mass spectrometry). Molecular biology methods are also suitable for the rapid detection and identification of pathogens from aseptically collected blood samples. Another important duty of the microbiology laboratory is to notify the treating physician immediately about all relevant information if a positive sample is detected. The clinical microbiologist may provide important guidance regarding the clinical significance of blood isolates, since one-third to one-half of blood culture isolates are contaminants or isolates of unknown clinical significance. To fully exploit the benefits of blood culture and other (non-culture-based) diagnostics, the microbiologist and the clinician should interact directly.
DOE interpretations Guide to OSH standards. Update to the Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-03-31
Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four-volume reference set that you presently have in your possession.
76 FR 17191 - Staff Accounting Bulletin No. 114
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
This Staff Accounting Bulletin (SAB) revises or rescinds portions of the interpretive guidance included in the codification of the Staff Accounting Bulletin Series. This update is intended to make the relevant interpretive guidance consistent with current authoritative accounting guidance issued as part of the Financial Accounting Standards Board's Accounting Standards Codification. The principal changes involve revision or removal of accounting guidance references and other conforming changes to ensure consistency of referencing throughout the SAB Series.
Norris, Ross L; Martin, Jennifer H; Thompson, Erin; Ray, John E; Fullinfaw, Robert O; Joyce, David; Barras, Michael; Jones, Graham R; Morris, Raymond G
2010-10-01
The measurement of drug concentrations for clinical purposes occurs in many diagnostic laboratories throughout Australia and New Zealand. However, the provision of a comprehensive therapeutic drug monitoring (TDM) service requires the additional elements of pre- and postanalytical advice to ensure that concentrations reported are meaningful, interpretable, and clinically applicable to the individual patient. The aim of this project was to assess the status of TDM services in Australia and New Zealand. A range of professions involved in key aspects of TDM was surveyed by questionnaire in late 2007. Information gathered included: the list of drugs assayed; analytical methods used; interpretation services offered; interpretative methods used; and further monitoring advice provided. Fifty-seven responses were received, of which 42% were from hospitals (public and/or private); 11% a hospital (public and/or private) and pathology provider; and 47% a pathology provider only (public and/or private). Results showed that TDM is applied to a large number of different drugs. Poorly performing assay methods were used in some cases, even when published guidelines recommended alternative practices. Although there was a wide array of assays available, the evidence suggested a need for better selection of assay methods. In addition, only limited advice and/or interpretation of results was offered. Of concern, less than 50% of those providing advice on aminoglycoside dosing in adults used pharmacokinetic tools, and only six of 37 (16.2%) respondents used Bayesian pharmacokinetic tools, the method recommended in the Australian Therapeutic Guidelines: Antibiotic. In conclusion, the survey highlighted deficiencies in the provision of TDM services, in particular assay method selection and both the quality and quantity of postanalytical advice. A range of recommendations, some of which may have international implications, is discussed. There is a need to include measures of impact on clinical decision-making when assessing assay methodologies. Best practice guidelines and professional standards of practice in TDM are needed, supported by an active program of professional development to ensure the benefits of TDM are realized. This will require significant partnerships between the various professions involved.
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Analyses of DNA mixtures based on these loci for forensic purposes are highly variable in procedure, and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood ratio computation that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing samples. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretations. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person mixtures and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
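For orientation, a single-locus likelihood ratio for a two-person mixture can be computed with the classical unrestricted combinatorial formulas (inclusion-exclusion over the mixture's allele set), as sketched below with hypothetical allele frequencies; the paper's NGS-specific model is more elaborate.

```python
from itertools import combinations

def p_unknowns_cover(freqs, mixture, required, n_unknowns):
    """P that n_unknowns contributors carry alleles only within `mixture`
    and jointly show every allele in `required` (inclusion-exclusion)."""
    k = 2 * n_unknowns
    req = list(required)
    total = 0.0
    for r in range(len(req) + 1):
        for dropped in combinations(req, r):
            s = sum(freqs[a] for a in mixture if a not in dropped)
            total += (-1) ** r * s ** k
    return total

freqs = {"A": 0.10, "B": 0.15, "C": 0.75}      # hypothetical allele frequencies
mixture = {"A", "B"}                            # alleles observed at the locus

# Hp: suspect (genotype AB) plus one unknown; the unknown's alleles
# only need to fall within the observed mixture set
p_hp = p_unknowns_cover(freqs, mixture, required=set(), n_unknowns=1)
# Hd: two unknowns must together show exactly alleles A and B
p_hd = p_unknowns_cover(freqs, mixture, required={"A", "B"}, n_unknowns=2)
print("single-locus LR:", p_hp / p_hd)   # multiply across loci for a profile
```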
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
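A textbook-style illustration of the z-ROC argument: with Gaussian evidence distributions, the z-ROC slope equals the ratio of lure to target evidence standard deviations, so inflating the lure SD steepens the slope. The parameter values below are illustrative.

```python
import numpy as np

def zroc_slope(sigma_lure, d=1.0, sigma_target=1.25):
    """Slope of z(HR) against z(FA) for Gaussian evidence:
    targets ~ N(d, sigma_target), lures ~ N(0, sigma_lure)."""
    criteria = np.linspace(-1.0, 2.0, 5)       # confidence cutpoints
    z_hr = (d - criteria) / sigma_target       # z of hit rate at each criterion
    z_fa = -criteria / sigma_lure              # z of false-alarm rate
    return np.polyfit(z_fa, z_hr, 1)[0]        # equals sigma_lure / sigma_target

print(zroc_slope(sigma_lure=1.0))  # baseline lure SD (unrelated primes): 0.80
print(zroc_slope(sigma_lure=1.2))  # inflated lure SD (related primes): 0.96
```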
Olive ingestion causing a false suspicion of relapsed neuroblastoma: A case of "oliveblastoma?"
Flynn, Nick; LeFebvre, Amanda; Messahel, Boo; Hogg, Sarah L
2018-06-19
Measurement of the urine catecholamine metabolites homovanillic acid (HVA) and vanillylmandelic acid (VMA) is the standard method for detecting disease recurrence in neuroblastoma. We present a case of abnormal concentrations of catecholamine metabolites that prompted investigations for relapsed neuroblastoma. However, further study revealed that the abnormal biochemistry was likely due to ingestion of olives. Olive ingestion should be considered when interpreting urine HVA and VMA results, and excluded if concentrations are unexpectedly abnormal. © 2018 Wiley Periodicals, Inc.
Investigation of physical parameters in stellar flares observed by GINGA
NASA Technical Reports Server (NTRS)
Stern, Robert A.
1994-01-01
This program involves analysis and interpretation of results from GINGA Large Area Counter (LAC) observations from a group of large stellar x-ray flares. All LAC data are re-extracted using the standard Hayashida method of LAC background subtraction and analyzed using various models available with the XSPEC spectral fitting program. Temperature-emission measure histories are available for a total of 5 flares observed by GINGA. These will be used to compare physical parameters of these flares with solar and stellar flare models.
Flexible Method for Inter-object Communication in C++
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Gould, Jack J.
1994-01-01
A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
Harmonization in laboratory medicine: Requests, samples, measurements and reports.
Plebani, Mario
2016-01-01
In laboratory medicine, the terms "standardization" and "harmonization" are frequently used interchangeably as the final goal is the same: the equivalence of measurement results among different routine measurement procedures over time and space according to defined analytical and clinical quality specifications. However, the terms define two distinct, albeit closely linked, concepts based on traceability principles. The word "standardization" is used when results for a measurement are equivalent and traceable to the International System of Units (SI) through a high-order primary reference material and/or a reference measurement procedure (RMP). "Harmonization" is generally used when results are equivalent, but neither a high-order primary reference material nor a reference measurement procedure is available. Harmonization is a fundamental aspect of quality in laboratory medicine as its ultimate goal is to improve patient outcomes through the provision of accurate and actionable laboratory information. Patients, clinicians and other healthcare professionals assume that clinical laboratory tests performed by different laboratories at different times on the same sample and specimen can be compared, and that results can be reliably and consistently interpreted. Unfortunately, this is not necessarily the case, because many laboratory test results are still highly variable and poorly standardized and harmonized. Although the initial focus was mainly on harmonizing and standardizing analytical processes and methods, the scope of harmonization now also includes all other aspects of the total testing process (TTP), such as terminology and units, report formats, reference intervals and decision limits as well as tests and test profiles, requests and criteria for interpretation. Several projects and initiatives aiming to improve standardization and harmonization in the testing process are now underway. Laboratory professionals should therefore step up their efforts to provide interchangeable and comparable laboratory information in order to ultimately assure better diagnosis and treatment in patient care.
Iridology: A systematic review.
Ernst, E
1999-02-01
Iridologists claim to be able to diagnose medical conditions through abnormalities of pigmentation in the iris. This technique is popular in many countries; it is therefore relevant to ask whether it is valid. The objective was to systematically review all interpretable tests of the validity of iridology as a diagnostic tool. DATA SOURCE AND EXTRACTION: Three independent literature searches were performed to identify all blinded tests. Data were extracted in a predefined, standardized fashion. Four case control studies were found. The majority of these investigations suggest that iridology is not a valid diagnostic method. The validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.
Lintott, Paul R; Davison, Sophie; van Breda, John; Kubasiewicz, Laura; Dowse, David; Daisley, Jonathan; Haddy, Emily; Mathews, Fiona
2018-01-01
Acoustic surveys of bats are one of the techniques most commonly used by ecological practitioners. The results are used in Ecological Impact Assessments to assess the likely impacts of future developments on species that are widely protected in law, and to monitor developments post-construction. However, there is no standardized methodology for analyzing or interpreting these data, which can make the assessment of the ecological value of a site very subjective. Comparisons of sites and projects are therefore difficult for ecologists and decision-makers, for example, when trying to identify the best location for a new road based on relative bat activity levels along alternative routes. Here, we present a new web-based, data-driven tool, Ecobat, which addresses the need for a more robust way of interpreting ecological data. Ecobat offers users an easy, standardized, and objective method for analyzing bat activity data. It allows ecological practitioners to compare bat activity data at regional and national scales and to generate a numerical indicator of the relative importance of a night's worth of bat activity. The tool is free and open-source; because the underlying algorithms are already developed, it could easily be expanded to new geographical regions and species. Data donation is required to ensure the robustness of the analyses; we use a positive feedback mechanism to encourage ecological practitioners to share data by providing in return high-quality, contextualized data analysis and graphical visualizations for direct use in ecological reports.
ERIC Educational Resources Information Center
Skyba, Kateryna
2014-01-01
The article presents an overview of the certification process by which potential translators and interpreters demonstrate minimum standards of performance to warrant official or professional recognition of their ability to translate or interpret and to practice professionally in Australia, Canada, the USA and Ukraine. The aim of the study is to…
NASA Astrophysics Data System (ADS)
Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom
2017-03-01
Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g. PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences, and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB, as it assures excellent detection for color images while maintaining DICOM GSDF for grayscale images.
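A minimal sketch of the metric the CSDF calibration is built on: computing CIEDE2000 differences between consecutive steps of a rainbow scale. This is illustrative only, not the authors' code; matplotlib's "jet" colormap stands in for the clinical rainbow scale, and scikit-image supplies the color-space conversions.

```python
# Quantify perceptual non-uniformity of a rainbow scale with CIEDE2000.
import numpy as np
from matplotlib import colormaps
from skimage.color import rgb2lab, deltaE_ciede2000

rainbow = colormaps["jet"](np.linspace(0, 1, 256))[:, :3]   # RGB, drop alpha
lab = rgb2lab(rainbow.reshape(1, -1, 3))                    # to CIELAB
de00 = deltaE_ciede2000(lab[:, :-1, :], lab[:, 1:, :]).ravel()

# A large spread means consecutive scale steps are perceptually unequal,
# which is what a CSDF-calibrated display is meant to even out.
print(f"dE00 per step: min={de00.min():.2f}, max={de00.max():.2f}, "
      f"mean={de00.mean():.2f}")
```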
Inertial effects on mechanically braked Wingate power calculations.
Reiser, R F; Broker, J P; Peterson, M L
2000-09-01
The standard procedure for determining subject power output from a 30-s Wingate test on a mechanically braked (friction-loaded) ergometer includes only the braking resistance and flywheel velocity in the computations. However, the inertial effects associated with accelerating and decelerating the crank and flywheel also require energy and, therefore, represent a component of the subject's power output. The present study was designed to determine the effects of drive-system inertia on power output calculations. Twenty-eight male recreational cyclists completed Wingate tests on a Monark 324E mechanically braked ergometer (resistance: 8.5% body mass (BM), starting cadence: 60 rpm). Power outputs were then compared using both standard (without inertial contribution) and corrected (with inertial contribution) methods of calculating power output. Relative 5-s peak power and 30-s average power for the corrected method (14.8 ± 1.2 W·kg⁻¹ BM; 9.9 ± 0.7 W·kg⁻¹ BM) were 20.3% and 3.1% greater than those of the standard method (12.3 ± 0.7 W·kg⁻¹ BM; 9.6 ± 0.7 W·kg⁻¹ BM), respectively. Relative 5-s minimum power for the corrected method (6.8 ± 0.7 W·kg⁻¹ BM) was 6.8% less than that of the standard method (7.3 ± 0.8 W·kg⁻¹ BM). The combined differences in the peak power and minimum power produced a fatigue index for the corrected method (54 ± 5%) that was 31.7% greater than that of the standard method (41 ± 6%). All parameter differences were significant (P < 0.01). The inertial contribution to power output was dominated by the flywheel; however, the contribution from the crank was evident. These results indicate that the inertial components of the ergometer drive system influence the power output characteristics, requiring care when computing, interpreting, and comparing Wingate results, particularly among different ergometer designs and test protocols.
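A hedged sketch of the correction described above: the standard calculation counts only the friction term (braking torque times angular velocity), while the corrected calculation adds the power that goes into accelerating the flywheel. Function names, the inertia value, and the toy velocity trace are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def wingate_power(omega, t, braking_torque, flywheel_inertia):
    """omega: flywheel angular velocity [rad/s] sampled at times t [s]."""
    alpha = np.gradient(omega, t)                # angular acceleration
    p_standard = braking_torque * omega          # friction term only
    p_corrected = p_standard + flywheel_inertia * alpha * omega  # add inertial term
    return p_standard, p_corrected

# Toy trace: cadence rising at the start, then fading over the 30-s test
t = np.linspace(0, 30, 301)
omega = 60 * (1 - np.exp(-t / 2)) * np.exp(-t / 40)
p_std, p_cor = wingate_power(omega, t, braking_torque=25.0, flywheel_inertia=0.9)
print(f"peak standard {p_std.max():.0f} W vs corrected {p_cor.max():.0f} W")
```

The inertial term is largest during the initial acceleration, which is why the correction inflates 5-s peak power far more (20.3%) than 30-s average power (3.1%).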
McLaren, Stuart J; Page, Wyatt H; Parker, Lou; Rushton, Martin
2013-12-19
An evaluation of 28 commercially available toys imported into New Zealand revealed that 21% of these toys do not meet the acoustic criteria in the ISO standard, ISO 8124-1:2009 Safety of Toys, adopted by Australia and New Zealand as AS/NZS ISO 8124.1:2010. While overall the 2010 standard provided a greater level of protection than the earlier 2002 standard, there was one high risk toy category where the 2002 standard provided greater protection. A secondary set of toys from the personal collections of children known to display atypical methods of play with toys, such as those with autism spectrum disorders (ASD), was part of the evaluation. Only one of these toys cleanly passed the 2010 standard, with the remainder failing or showing a marginal-pass. As there is no tolerance level stated in the standards to account for interpretation of data and experimental error, a value of +2 dB was used. The findings of the study indicate that the current standard is inadequate in providing protection against excessive noise exposure. Amendments to the criteria have been recommended that apply to the recently adopted 2013 standard. These include the integration of the new approaches published in the recently amended European standard (EN 71) on safety of toys.
Visualizing tumor evolution with the fishplot package for R.
Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K
2016-11-07
Massively-parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot .
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
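For orientation, a sketch of the standard (non-marginalized) zero-inflated Poisson fit that models like the one proposed above are compared against. The marginalized zero-altered model itself is not available in statsmodels; this illustrative baseline, on synthetic data, is the kind of ZIP fit whose count-part coefficients lack a marginal (population-level) interpretation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
lam = np.exp(0.3 + 0.5 * x)            # Poisson mean for the "at-risk" part
structural_zero = rng.random(n) < 0.3  # 30% excess zeros
y = np.where(structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(x)
# Intercept-only inflation part; logit link by default
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(
    method="bfgs", maxiter=500, disp=False)
print(zip_fit.params)                  # inflation and count coefficients
```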
Policies and practices of beach monitoring in the Great Lakes, USA: a critical review
Nevers, Meredith B.; Whitman, Richard L.
2010-01-01
Beaches throughout the Great Lakes are monitored for fecal indicator bacteria (typically Escherichia coli) in order to protect the public from potential sewage contamination. Currently, there is no universal standard for sample collection and analysis or results interpretation. Monitoring policies are developed by individual beach management jurisdictions, and applications are highly variable across and within lakes, states, and provinces. Extensive research has demonstrated that sampling decisions for time, depth, number of replicates, frequency of sampling, and laboratory analysis all influence the results outcome, as well as calculations of the mean and interpretation of the results in policy decisions. Additional shortcomings to current monitoring approaches include appropriateness and reliability of currently used indicator bacteria and the overall goal of these monitoring programs. Current research is attempting to circumvent these complex issues by developing new tools and methods for beach monitoring. In this review, we highlight the variety of sampling routines used across the Great Lakes and the extensive body of research that challenges comparisons among beaches. We also assess the future of Great Lakes monitoring and the advantages and disadvantages of establishing standards that are evenly applied across all beaches.
Informed consent in neurosurgery—translating ethical theory into action
Schmitz, Dagmar; Reinacher, Peter C
2006-01-01
Objective Although a main principle of medical ethics and law since the 1970s, standards of informed consent are regarded with great scepticism by many clinicians. Methods By reviewing the reactions to and adoption of this principle of medical ethics in neurosurgery, the characteristic conflicts that emerge between theory and everyday clinical experience are emphasised and a modified conception of informed consent is proposed. Results The adoption and debate of informed consent in neurosurgery took place in two steps. Firstly, respect for patient autonomy was included into the ethical codes of the professional organisations. Secondly, the legal demands of the principle were questioned by clinicians. Informed consent is mainly interpreted in terms of freedom from interference and absolute autonomy. It lacks a constructive notion of physician–patient interaction in its effort to promote the best interest of the patient, which, however, potentially emerges from a reconsideration of the principle of beneficence. Conclusion To avoid insufficient legal interpretations, informed consent should be understood in terms of autonomy and beneficence. A continuous interaction between the patient and the given physician is considered as an essential prerequisite for the realisation of the standards of informed consent. PMID:16943326
Preparation of pyrolysis reference samples: evaluation of a standard method using a tube furnace.
Sandercock, P Mark L
2012-05-01
A new, simple method for the reproducible creation of pyrolysis products from different materials that may be found at a fire scene is described. A temperature-programmable steady-state tube furnace was used to generate pyrolysis products from different substrates, including softwoods, paper, vinyl sheet flooring, and carpet. The temperature profile of the tube furnace was characterized, and the suitability of the method to reproducibly create pyrolysates similar to those found in real fire debris was assessed. The use of this method to create proficiency tests to realistically test an examiner's ability to interpret complex gas chromatographic-mass spectrometric fire debris data, and to create a library of pyrolysates generated from materials commonly found at a fire scene, is demonstrated. © 2011 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogorelov, A. A.; Suslov, I. M.
2008-06-15
New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly assume the smoothness of the coefficient functions. This assumption is open for discussion in view of the existence of an oscillating contribution to the coefficient functions. The appropriate interpretation of this contribution is necessary both for the estimation of the systematic errors of the standard values and for a further increase in accuracy.
Gutierrez, Amanda M; Robinson, Jill O; Statham, Emily E; Scollon, Sarah; Bergstrom, Katie L; Slashinski, Melody J; Parsons, Donald W; Plon, Sharon E; McGuire, Amy L; Street, Richard L
2017-11-01
Describe modifications to technical genomic terminology made by interpreters during disclosure of whole exome sequencing (WES) results. Using discourse analysis, we identified and categorized interpretations of genomic terminology in 42 disclosure sessions where Spanish-speaking parents received their child's WES results either from a clinician using a medical interpreter, or directly from a bilingual physician. Overall, 76% of genomic terms were interpreted accordantly, 11% were misinterpreted and 13% were omitted. Misinterpretations made by interpreters and bilingual physicians included using literal and nonmedical terminology to interpret genomic concepts. Modifications to genomic terminology made during interpretation highlight the need to standardize bilingual genomic lexicons. We recommend Spanish terms that can be used to refer to genomic concepts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Section 9901.302 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL...) The Cost Accounting Standards Board (hereinafter referred to as the “Board”) is established by and..., promulgate, amend, and rescind cost accounting standards and regulations, including interpretations thereof...
King, B
2001-11-01
The new laboratory accreditation standard, ISO/IEC 17025, reflects current thinking on good measurement practice by requiring more explicit and more demanding attention to a number of activities. These include client interactions, method validation, traceability, and measurement uncertainty. Since the publication of the standard in 1999 there has been extensive debate about its interpretation. It is the author's view that if good quality practices are already in place and if the new requirements are introduced in a manner that is fit for purpose, the additional work required to comply with the new requirements can be expected to be modest. The paper argues that the rigour required in addressing the issues should be driven by customer requirements and the factors that need to be considered in this regard are discussed. The issues addressed include the benefits, interim arrangements, specifying the analytical requirement, establishing traceability, evaluating the uncertainty and reporting the information.
Kligfield, Paul; Gettes, Leonard S; Bailey, James J; Childers, Rory; Deal, Barbara J; Hancock, E William; van Herpen, Gerard; Kors, Jan A; Macfarlane, Peter; Mirvis, David M; Pahlm, Olle; Rautaharju, Pentti; Wagner, Galen S; Josephson, Mark; Mason, Jay W; Okin, Peter; Surawicz, Borys; Wellens, Hein
2007-03-13
This statement examines the relation of the resting ECG to its technology. Its purpose is to foster understanding of how the modern ECG is derived and displayed and to establish standards that will improve the accuracy and usefulness of the ECG in practice. Derivation of representative waveforms and measurements based on global intervals are described. Special emphasis is placed on digital signal acquisition and computer-based signal processing, which provide automated measurements that lead to computer-generated diagnostic statements. Lead placement, recording methods, and waveform presentation are reviewed. Throughout the statement, recommendations for ECG standards are placed in context of the clinical implications of evolving ECG technology.
Codetype-based interpretation of the MMPI-2 in an outpatient psychotherapy sample.
Koffmann, Andrew
2015-01-01
In an evaluation of the codetype-based interpretation of the MMPI-2, 48 doctoral student psychotherapists rated their clients' (N = 120) standardized interpretations as more accurate when based on the profile's codetype, in comparison with ratings for interpretations based on alternate codetypes. Effect sizes ranged from nonsignificant to large, depending on the degree of proximity between the profile's codetype and the alternate codetype. There was weak evidence to suggest that well-defined profiles yielded more accurate interpretations than undefined profiles. It appears that codetype-based interpretation of the MMPI-2 is generally valid, but there might be little difference in the accuracy of interpretations based on nearby codetypes.
Blackmore, C Craig; Terasawa, Teruhiko
2006-02-01
Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to determine the trade-off between false-positive and false-negative test results to determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis for the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
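The Bayes-theorem step used above is a standard textbook calculation: convert the pretest probability of appendicitis to odds, multiply by the likelihood ratio of the CT result at the chosen operating point, and convert back. The sketch below applies it to operating points quoted in the abstract; the function itself is generic, not the authors' model.

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pretest probability given a positive or negative CT read."""
    if positive:
        lr = sensitivity / (1 - specificity)      # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity      # negative likelihood ratio
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Screening scenario: low prevalence, high-specificity operating point
print(posttest_probability(0.05, 0.75, 0.977, positive=True))   # ~0.63
# Equivocal scenario: balanced operating point
print(posttest_probability(0.50, 0.923, 0.915, positive=True))  # ~0.92
```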
Fasihi, Yasser; Fooladi, Saba; Mohammadi, Mohammad Ali; Emaneini, Mohammad; Kalantar-Neyestanaki, Davood
2017-09-06
Molecular typing is an important tool for the control and prevention of infection. A suitable molecular typing method for epidemiological investigation must be easy to perform, highly reproducible, inexpensive, rapid and easy to interpret. In this study, two molecular typing methods, the conventional PCR-sequencing method and high resolution melting (HRM) analysis, were used for staphylococcal protein A (spa) typing of 30 methicillin-resistant Staphylococcus aureus (MRSA) isolates recovered from clinical samples. Based on the PCR-sequencing results, 16 different spa types were identified among the 30 MRSA isolates. Of these 16 spa types, 14 were resolved by the HRM method; two (t4718 and t2894) could not be separated from each other. According to our results, spa typing based on HRM analysis is very rapid, easy to perform and cost-effective, but the method must be standardized for different regions, spa types, and real-time PCR instruments.
A data-driven approach for quality assessment of radiologic interpretations.
Hsu, William; Han, Simon X; Arnold, Corey W; Bui, Alex At; Enzmann, Dieter R
2016-04-01
Given the increasing emphasis on delivering high-quality, cost-efficient healthcare, improved methodologies are needed to measure the accuracy and utility of ordered diagnostic examinations in achieving the appropriate diagnosis. Here, we present a data-driven approach for performing automated quality assessment of radiologic interpretations using other clinical information (e.g., pathology) as a reference standard for individual radiologists, subspecialty sections, imaging modalities, and entire departments. Downstream diagnostic conclusions from the electronic medical record are utilized as "truth" to which upstream diagnoses generated by radiology are compared. The described system automatically extracts and compares patient medical data to characterize concordance between clinical sources. Initial results are presented in the context of breast imaging, matching 18,101 radiologic interpretations with 301 pathology diagnoses and achieving a precision and recall of 84% and 92%, respectively. The presented data-driven method highlights the challenges of integrating multiple data sources and the application of information extraction tools to facilitate healthcare quality improvement. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
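A tiny illustration of the two evaluation figures quoted above, using hypothetical counts of radiology-pathology matches (the real system's EMR extraction and alignment is not reproduced here; the counts are chosen so the 301 pathology diagnoses and the ~84%/~92% figures come out).

```python
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)   # of flagged concordances, fraction correct
    recall = tp / (tp + fn)      # of true concordances, fraction found
    return precision, recall

# Hypothetical counts: tp + fn = 301 pathology diagnoses
p, r = precision_recall(tp=277, fp=53, fn=24)
print(f"precision={p:.2f}, recall={r:.2f}")   # 0.84, 0.92
```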
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2016-08-01
The factor structure of the 16 Primary and Secondary subtests of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample was examined with exploratory factor analytic methods (EFA) not included in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b). Factor extraction criteria suggested 1 to 4 factors and results favored 4 first-order factors. When this structure was transformed with the Schmid and Leiman (1957) orthogonalization procedure, the hierarchical g-factor accounted for large portions of total and common variance while the 4 first-order factors accounted for small portions of total and common variance; rendering interpretation at the factor index level less appropriate. Although the publisher favored a 5-factor model where the Perceptual Reasoning factor was split into separate Visual Spatial and Fluid Reasoning dimensions, no evidence for 5 factors was found. It was concluded that the WISC-V provides strong measurement of general intelligence and clinical interpretation should be primarily, if not exclusively, at that level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
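The Schmid-Leiman orthogonalization referenced above has a compact closed form: given oblique first-order loadings and the loadings of those factors on a second-order g, each subtest's variance splits into a g part and residualized group-factor parts. The sketch below uses small hypothetical matrices (the WISC-V standardization data cannot be reproduced here); only the transformation itself is standard.

```python
import numpy as np

lam1 = np.array([[0.8, 0.0, 0.0],   # hypothetical subtest loadings on
                 [0.7, 0.1, 0.0],   # three oblique first-order factors
                 [0.0, 0.8, 0.0],
                 [0.0, 0.7, 0.1],
                 [0.0, 0.0, 0.8],
                 [0.1, 0.0, 0.7]])
gamma = np.array([0.8, 0.7, 0.75])  # hypothetical second-order g loadings

g_loadings = lam1 @ gamma                       # subtest loadings on g
group_resid = lam1 * np.sqrt(1 - gamma**2)      # orthogonalized group factors

print("variance due to g:", np.round(g_loadings**2, 2))
print("variance due to group factors:", np.round(group_resid**2, 2))
```

With typical second-order loadings this size, the g terms dominate the residual group terms, which is the pattern the study reports for the WISC-V.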
Sovio, Ulla; Smith, Gordon C S
2018-02-01
It has been proposed that correction of offspring weight percentiles (customization) might improve the prediction of adverse pregnancy outcome; however, the approach is not accepted universally. A complication in the interpretation of the data is that the main method for calculation of customized percentiles uses a fetal growth standard, and multiple analyses have compared the results with birthweight-based standards. First, we aimed to determine whether women who deliver small-for-gestational-age infants using a customized standard differed from other women. Second, we aimed to compare the association between birthweight percentile and adverse outcome using 3 different methods for percentile calculation: (1) a noncustomized actual birthweight standard, (2) a noncustomized fetal growth standard, and (3) a fully customized fetal growth standard. We analyzed data from the Pregnancy Outcome Prediction study, a prospective cohort study of nulliparous women who delivered in Cambridge, UK, between 2008 and 2013. We used a composite adverse outcome, namely, perinatal morbidity or preeclampsia. Receiver operating characteristic curve analysis was used to compare the 3 methods of calculating birthweight percentiles in relation to the composite adverse outcome. We confirmed previous observations that delivering an infant who was small for gestational age (<10th percentile) with the use of a fully customized fetal growth standard but who was appropriate for gestational age with the use of a noncustomized actual birthweight standard was associated with higher rates of adverse outcomes. However, we also observed that the mothers of these infants were 3-4 times more likely to be obese and to deliver preterm. When we compared the risk of adverse outcome from logistic regression models that were fitted to the birthweight percentiles that were derived by each of the 3 predefined methods, the areas under the receiver operating characteristic curves were similar for all 3 methods: 0.56 (95% confidence interval, 0.54-0.59) fully customized, 0.56 (95% confidence interval, 0.53-0.59) noncustomized fetal weight standard, and 0.55 (95% confidence interval, 0.53-0.58) noncustomized actual birthweight standard. When we classified the top 5% of predicted risk as high risk, the methods that used a fetal growth standard showed attenuation after adjustment for gestational age, whereas the birthweight standard did not. Further adjustment for the maternal characteristics, which included weight, attenuated the association with the customized standard, but not the other 2 methods. The associations after full adjustment were similar when we compared the 3 approaches. The independent association between birthweight percentile and adverse outcome was similar when we compared actual birthweight standards and fetal growth standards and compared customized and noncustomized standards. Use of fetal weight standards and customized percentiles for maternal characteristics could lead to stronger associations with adverse outcome through confounding by preterm birth and maternal obesity. Copyright © 2017 Elsevier Inc. All rights reserved.
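A hedged sketch of the core comparison above: the same adverse-outcome labels scored against birthweight percentiles from three calculation methods, compared by area under the ROC curve. The data and percentile columns below are synthetic placeholders, not the POP-study data; only the comparison machinery is shown.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 4000
outcome = rng.random(n) < 0.1                      # composite adverse outcome
# Placeholder percentiles: lower percentile implies higher risk, so score
# with the negative percentile to orient the AUC conventionally.
pct_customized = rng.uniform(0, 100, n) - 5 * outcome
pct_fetal_std = rng.uniform(0, 100, n) - 5 * outcome
pct_birthweight = rng.uniform(0, 100, n) - 4 * outcome

for name, pct in [("fully customized", pct_customized),
                  ("fetal growth standard", pct_fetal_std),
                  ("actual birthweight standard", pct_birthweight)]:
    print(name, round(roc_auc_score(outcome, -pct), 3))
```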
Carney, Patricia A; Allison, Kimberly H; Oster, Natalia V; Frederick, Paul D; Morgan, Thomas R; Geller, Berta M; Weaver, Donald L; Elmore, Joann G
2016-07-01
We examined how pathologists process their perceptions of how their interpretations of diagnoses for breast pathology cases agree with a reference standard. To accomplish this, we created an individualized self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations on a test set compared with a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared with the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts about learning about differences between their perceptions and actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement), especially for atypia, and were least likely to overestimate it for invasive breast cancer. Non-academic-affiliated pathologists were more likely to estimate their performance closely than academic-affiliated pathologists (77.6% vs 48%; P=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses than non-academic-affiliated pathologists (40% vs 6%). Before the continuing medical education program, 54.9% of participants could not estimate whether they would overinterpret or underinterpret the cases relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would change in their practice to improve. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement with a reference standard, but for atypia cases, pathologists tend to overestimate diagnostic agreement. Many participants were able to identify ways to improve.
Scale structure: Processing Minimum Standard and Maximum Standard Scalar Adjectives
Frazier, Lyn; Clifton, Charles; Stolterfoht, Britta
2008-01-01
Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property (Kennedy, 1999). Scales, ordered sets of degrees, have begun to be studied systematically in semantics (Kennedy, to appear, Kennedy & McNally, 2005, Rotstein & Winter, 2004). We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a ‘standard value’ on a scale as part of the normal comprehension of a sentence containing the adjective (the “Obligatory Scale” hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. Further, a type of ‘quantificational variability effect’ may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like The dishes are mostly dirty, which may receive either a degree interpretation (e.g. 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence. PMID:17376422
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a larger number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observation were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD, and two methods for calculating daily mean air temperatures, were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
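A sketch of the basic GDD accumulation and the RMSE-based threshold search of method (7): for each candidate base temperature, predict the day on which the mean GDD requirement is reached and keep the base that minimizes the error against observed dates. The weather series and observed days below are invented; only the search logic follows the method's definition.

```python
import numpy as np

def gdd(tmean, base):
    """Daily growing degree-days, negative contributions clipped to zero."""
    return np.maximum(tmean - base, 0.0)

def rmse_for_base(tmean_by_year, observed_days, base):
    # Mean GDD requirement across years, then the predicted day it is reached
    totals = [gdd(t, base)[:d].sum() for t, d in zip(tmean_by_year, observed_days)]
    target = np.mean(totals)
    predicted = [int(np.searchsorted(np.cumsum(gdd(t, base)), target))
                 for t in tmean_by_year]
    return np.sqrt(np.mean((np.array(predicted) - observed_days) ** 2))

rng = np.random.default_rng(2)
tmean_by_year = [10 + 12 * np.sin(np.linspace(-1.2, 1.8, 200)) + rng.normal(0, 2, 200)
                 for _ in range(10)]                      # 10 toy seasons
observed_days = np.array([rng.integers(110, 130) for _ in range(10)])
bases = np.arange(-6.0, 7.0, 0.5)
best = min(bases, key=lambda b: rmse_for_base(tmean_by_year, observed_days, b))
print("base temperature with smallest RMSE:", best)
```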
Domínguez, J; Boettger, E C; Cirillo, D; Cobelens, F; Eisenach, K D; Gagneux, S; Hillemann, D; Horsburgh, R; Molina-Moya, B; Niemann, S; Tortoli, E; Whitelaw, A; Lange, C
2016-01-01
The emergence of drug-resistant strains of Mycobacterium tuberculosis is a challenge to global tuberculosis (TB) control. Although culture-based methods have been regarded as the gold standard for drug susceptibility testing (DST), molecular methods provide rapid information on mutations in the M. tuberculosis genome associated with resistance to anti-tuberculosis drugs. We ascertained consensus on the use of the results of molecular DST for clinical treatment decisions in TB patients. This document has been developed by TBNET and RESIST-TB groups to reach a consensus about reporting standards in the clinical use of molecular DST results. Review of the available literature and the search for evidence included hand-searching journals and searching electronic databases. The panel identified single nucleotide mutations in genomic regions of M. tuberculosis coding for katG, inhA, rpoB, embB, rrs, rpsL and gyrA that are likely related to drug resistance in vivo. Identification of any of these mutations in clinical isolates of M. tuberculosis has implications for the management of TB patients, pending the results of in vitro DST. However, false-positive and false-negative results in detecting resistance-associated mutations in drugs for which there is poor or unproven correlation between phenotypic and clinical drug resistance complicate the interpretation. Reports of molecular DST results should therefore include specific information on the mutations identified and provide guidance for clinicians on interpretation and on the choice of the appropriate initial drug regimen.
Cao, Yiping; Sivaganesan, Mano; Kelty, Catherine A; Wang, Dan; Boehm, Alexandria B; Griffith, John F; Weisberg, Stephen B; Shanks, Orin C
2018-01-01
Human fecal pollution of recreational waters remains a public health concern worldwide. As a result, there is a growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality research and management. However, there are currently no standardized approaches for field implementation and interpretation of qPCR data. In this study, a standardized HF183/BacR287 qPCR method was combined with a water sampling strategy and a novel Bayesian weighted average approach to establish a human fecal contamination score (HFS) that can be used to prioritize sampling sites for remediation based on measured human waste levels. The HFS was then used to investigate 975 study design scenarios utilizing different combinations of sites with varying sampling intensities (daily to once per week) and number of qPCR replicates per sample (2-14 replicates). Findings demonstrate that site prioritization with HFS is feasible and that both sampling intensity and number of qPCR replicates influence reliability of HFS estimates. The novel data analysis strategy presented here provides a prescribed approach for the implementation and interpretation of human-associated HF183/BacR287 qPCR data with the goal of site prioritization based on human fecal pollution levels. In addition, information is provided for future users to customize study designs for optimal HFS performance. Published by Elsevier Ltd.
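A heavily simplified stand-in for the site-prioritization idea above: average log10 HF183 concentrations across repeat site visits, weighting each visit by the inverse variance of its qPCR replicates, then rank sites for remediation. The paper's actual Bayesian weighted-average HFS is more involved than this; all site names and values below are invented.

```python
import numpy as np

def site_score(log10_conc_by_visit):
    """log10_conc_by_visit: list of arrays of replicate log10 concentrations."""
    means = np.array([np.mean(v) for v in log10_conc_by_visit])
    weights = np.array([1.0 / (np.var(v, ddof=1) + 1e-6)
                        for v in log10_conc_by_visit])   # precision weights
    return np.sum(weights * means) / np.sum(weights)

sites = {
    "outfall": [np.array([3.1, 3.3, 3.2]), np.array([2.9, 3.0, 3.1])],
    "beach_a": [np.array([1.2, 1.5, 1.1]), np.array([1.8, 1.6, 1.7])],
}
ranked = sorted(sites, key=lambda s: site_score(sites[s]), reverse=True)
print("remediation priority:", ranked)   # highest human-marker signal first
```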
David, Matthieu; Fertin, Guillaume; Rogniaux, Hélène; Tessier, Dominique
2017-08-04
The analysis of discovery proteomics experiments relies on algorithms that identify peptides from their tandem mass spectra. The near-exhaustive interpretation of these spectra remains an unresolved issue. At present, a substantial number of missing interpretations is probably due to peptides carrying post-translational modifications and variants, which yield spectra that are particularly difficult to interpret. However, the emergence of a new generation of mass spectrometers that provide high fragment-ion accuracy has paved the way for more efficient algorithms. We present new software, SpecOMS, that can handle the computational complexity of pairwise spectrum comparisons at large volumes. SpecOMS can compare a whole set of experimental spectra generated by a discovery proteomics experiment to a whole set of theoretical spectra deduced from a protein database in a few minutes on a standard workstation. SpecOMS exploits these capabilities to improve the peptide identification process, allowing strong competition between all candidate peptides for spectrum interpretation. Remarkably, this software resolves the drawbacks (i.e., efficiency problems and decreased sensitivity) that usually accompany open modification searches. We highlight this promising approach using results obtained from the analysis of a public human data set downloaded from the PRIDE (PRoteomics IDEntification) database.
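For intuition, a generic sketch of the core operation SpecOMS scales up: counting shared fragment masses between an experimental and a theoretical spectrum within a tolerance. SpecOMS's actual data structures and scoring are far more elaborate; this is only the naive pairwise comparison, with made-up m/z values.

```python
import numpy as np

def shared_peaks(experimental, theoretical, tol=0.02):
    """Count theoretical fragment masses matched by an experimental peak."""
    experimental = np.sort(experimental)
    idx = np.searchsorted(experimental, theoretical)
    count = 0
    for m, i in zip(theoretical, idx):
        neighbors = experimental[max(i - 1, 0):i + 1]   # nearest candidates
        if neighbors.size and np.min(np.abs(neighbors - m)) <= tol:
            count += 1
    return count

exp_mz = np.array([175.119, 304.161, 401.214, 530.257, 659.300])
theo_mz = np.array([175.119, 304.162, 401.215, 530.300])
print(shared_peaks(exp_mz, theo_mz))   # 3 of 4 theoretical peaks matched
```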
Helping Standards Make the Grade.
ERIC Educational Resources Information Center
Guskey, Thomas R.
2001-01-01
Educators can develop fair and accurate standards-based grading/reporting by switching to criterion-referenced grading practices; using differentiated criteria (denoting product, process, and progress); clarifying the purpose of each reporting tool; and developing a reporting form that identifies standards, facilitates interpretation, and…
Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary
2009-01-13
Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills, but published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle for EKG interpretation skills among medical students. This is a reader-blinded crossover trial. Third-year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam, and then two extra sessions with the teaching puzzle followed by a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first, followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1, compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxed, and warrant further investigation on a larger scale.
Critically re-evaluating a common technique
Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B.
2016-01-01
Objectives: (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Methods: Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Results: Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%–90%); specificity was 71%, CI 56%–85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41–0.81); interrater reliability was lower (κ 0.53, CI 0.35–0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI −13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. Conclusions: This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. Classification of evidence: This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. PMID:26701380
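A quick illustration of the reliability statistic used above: Cohen's kappa, the chance-corrected agreement between two sets of categorical reads. The ratings below are invented placeholders, not the study's data; they are constructed to land near the reported intrarater kappa of 0.61.

```python
from sklearn.metrics import cohen_kappa_score

read_1 = ["normal", "radic", "normal", "radic", "normal",
          "radic", "normal", "normal", "radic", "radic"]
read_2 = ["normal", "radic", "normal", "normal", "normal",
          "radic", "radic", "normal", "radic", "radic"]

# 8/10 raw agreement, 0.5 expected by chance -> kappa = (0.8 - 0.5)/0.5 = 0.6
print(round(cohen_kappa_score(read_1, read_2), 2))
```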
Managing Highway Maintenance: Standards for Maintenance Work, Part 3, Unit 8, Level 2.
ERIC Educational Resources Information Center
Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.
Part of the series "Managing Highway Maintenance," the unit explains various uses of maintenance standards and how standards should be interpreted and communicated to formen and crew leaders. Several examples are given of the decisions made when applying the standards to routine work. The preceding units on standards (parts 1 and 2)…
Gannotti, Mary E; Handwerker, W Penn
2002-12-01
Validating the cultural context of health is important for obtaining accurate and useful information from standardized measures of child health adapted for cross-cultural applications. This paper describes the application of ethnographic triangulation for cultural validation of a measure of childhood disability, the Pediatric Evaluation of Disability Inventory (PEDI) for use with children living in Puerto Rico. The key concepts include macro-level forces such as geography, demography, and economics, specific activities children performed and their key social interactions, beliefs, attitudes, emotions, and patterns of behavior surrounding independence in children and childhood disability, as well as the definition of childhood disability. Methods utilize principal components analysis to establish the validity of cultural concepts and multiple regression analysis to identify intracultural variation. Findings suggest culturally specific modifications to the PEDI, provide contextual information for informed interpretation of test scores, and point to the need to re-standardize normative values for use with Puerto Rican children. Without this type of information, Puerto Rican children may appear more disabled than expected for their level of impairment or not to be making improvements in functional status. The methods also allow for cultural boundaries to be quantitatively established, rather than presupposed. Copyright 2002 Elsevier Science Ltd.
NASA Astrophysics Data System (ADS)
Pandzic, K.; Likso, T.
2012-04-01
The conventional Palmer Drought Index (PDI) and the more recent Standardized Precipitation Index (SPI) for the Zagreb Gric Observatory are compared using spectral analysis techniques. Data for the period 1862-2010 are used. The results indicate that the SPI is simpler to interpret, while the PDI is the more comprehensive index. On the other hand, because the SPI does not incorporate temperature, it cannot be applied to the interpretation of climate change. Possible applications of both indices in irrigation scheduling and in drought risk assessment are considered as well.
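A hedged sketch of the standard SPI recipe implied by the index's definition (not the authors' code): fit a gamma distribution to accumulated precipitation and map its cumulative probabilities onto a standard normal. The data below are synthetic, and the usual zero-precipitation mixture adjustment is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
precip = rng.gamma(shape=2.0, scale=30.0, size=149)   # toy yearly totals, mm

shape, loc, scale = stats.gamma.fit(precip, floc=0)   # location fixed at 0
cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)              # SPI = standard-normal quantile of CDF

print("driest year SPI:", round(spi.min(), 2))  # <= -2 indicates extreme drought
```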
What Bell proved: A reply to Blaylock
NASA Astrophysics Data System (ADS)
Maudlin, Tim
2010-01-01
Blaylock argues that the derivation of Bell's inequality requires a hidden assumption, counterfactual definiteness, of which Bell was unaware. A careful analysis of Bell's argument shows that Bell presupposes only locality and the predictions of standard quantum mechanics. Counterfactual definiteness, insofar as it is required, is derived in the course of the argument rather than presumed. Bell's theorem has no direct bearing on the many worlds interpretation not because that interpretation denies counterfactual definiteness but because it does not recover the predictions of standard quantum mechanics.
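For reference, the CHSH form of the inequality at issue, stated in LaTeX. This is standard textbook material rather than a quotation from Blaylock or Maudlin; Bell's 1964 original differs in detail but carries the same locality bound. Here E(a,b) denotes the correlation of outcomes for detector settings a and b.

```latex
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
  |S| \le 2 \quad \text{(any local theory)},
\]
\[
  |S| \le 2\sqrt{2} \quad \text{(quantum mechanics, Tsirelson bound)}.
\]
```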
The Role of Materials Research in Ceramics and Archaeology
NASA Astrophysics Data System (ADS)
Vandiver, Pamela
2001-08-01
Materials research has been applied successfully to the study of archaeological ceramics for the last fifty years. To learn about our history and the human condition is not just to analyze and preserve the objects but also to investigate and understand the knowledge and skills used to produce and use them. Many researchers have probed the limits and methods of such studies, always mindful that a glimpse at ancient reality lies in the details of time and place, context of finds, and experimentally produced data, usually compared with standards that were collected in an equivalent ethnographic setting or that were fabricated in a laboratory in order to elucidate the critical questions in a technology that could be understood in no other way. The basis of most studies of ancient technology has been established as microstructure; composition and firing; methods and sequence of manufacture; differentiation of use; use-wear and post-depositional processes; technological variability that can be interpreted as a pattern of stasis or innovation, which can be related to cultural continuity or change; and interpretation that can involve technology, subsistence trade, organization, and symbolic group- and self-definition.
A Roadmap for Interpreting the Literature on Vision and Driving
Owsley, Cynthia; Wood, Joanne M.; McGwin, Gerald
2015-01-01
Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified scientific attention to this topic has most likely been stimulated by the lack of an evidence-basis for determining vision standards for driving licensure and a poor understanding about how vision impairment impacts driver safety and performance. Clinicians depend on the scientific literature on vision and driving as a resource to appropriately advise visually impaired patients about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and are thus fair to persons who are visually impaired. Thus it is important for clinicians and policy makers alike to understand how various study designs and measurement methods should be appropriately interpreted so that the conclusions and recommendations they make based on this literature are not overly broad, too narrowly constrained, or even misguided. In this overview, based on our 25 years of experience in this field, we offer a methodological framework to guide interpretations of studies on vision and driving, which can also serve as a heuristic for researchers in the area. Here we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes. PMID:25753389
Richard-Davis, Gloria; Whittemore, Brianna; Disher, Anthony; Rice, Valerie Montgomery; Lenin, Rathinasamy B; Dollins, Camille; Siegel, Eric R; Eswaran, Hari
2018-01-01
Objective: Increased mammographic breast density is a well-established risk factor for breast cancer development, regardless of age or ethnic background. The current gold standard for categorizing breast density consists of a radiologist estimation of percent density according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) criteria. This study compares paired qualitative interpretations of breast density on digital mammograms with quantitative measurement of density using Hologic’s Food and Drug Administration–approved R2 Quantra volumetric breast density assessment tool. Our goal was to find the best cutoff value of Quantra-calculated breast density for stratifying patients accurately into high-risk and low-risk breast density categories. Methods: Screening digital mammograms from 385 subjects, aged 18 to 64 years, were evaluated. These mammograms were interpreted by a radiologist using the ACR’s BI-RADS density method, and had quantitative density measured using the R2 Quantra breast density assessment tool. The appropriate cutoff for breast density–based risk stratification using Quantra software was calculated using manually determined BI-RADS scores as a gold standard, in which scores of D3/D4 denoted high-risk densities and D1/D2 denoted low-risk densities. Results: The best cutoff value for risk stratification using Quantra-calculated breast density was found to be 14.0%, yielding a sensitivity of 65%, specificity of 77%, and positive and negative predictive values of 75% and 69%, respectively. Under bootstrap analysis, the best cutoff value had a mean ± SD of 13.70% ± 0.89%. Conclusions: Our study is the first to publish on a North American population that assesses the accuracy of the R2 Quantra system at breast density stratification. Quantitative breast density measures will improve accuracy and reliability of density determination, assisting future researchers to accurately calculate breast cancer risks associated with density increase. PMID:29511356
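A sketch of the cutoff search and bootstrap reported above: choose the Quantra density threshold that best separates BI-RADS D3/D4 from D1/D2, then assess its stability by resampling. The densities and labels are simulated stand-ins, and maximizing Youden's J is one common selection criterion; the paper does not state which criterion it used.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 385
high_risk = rng.random(n) < 0.45                       # BI-RADS D3/D4 "truth"
density = np.where(high_risk, rng.normal(17, 5, n), rng.normal(11, 4, n))

def best_cutoff(dens, truth, grid=np.arange(5, 30, 0.5)):
    # Maximize Youden's J = sensitivity + specificity - 1 over candidate cutoffs
    def j(c):
        sens = np.mean(dens[truth] >= c)
        spec = np.mean(dens[~truth] < c)
        return sens + spec - 1
    return max(grid, key=j)

boot = [best_cutoff(density[i], high_risk[i])
        for i in (rng.integers(0, n, n) for _ in range(200))]
print(f"cutoff {np.mean(boot):.1f}% +/- {np.std(boot):.2f} (bootstrap mean +/- SD)")
```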
An introduction to using Bayesian linear regression with clinical data.
Baldwin, Scott A; Larson, Michael J
2017-11-01
Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
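As a concrete illustration of the workflow the article teaches (prior, posterior, interval estimate), here is a minimal conjugate sketch in Python rather than the authors' R code. The ERN/anxiety variables are simulated stand-ins, and the closed-form posterior assumes a known residual variance for simplicity.

```python
# Minimal sketch of Bayesian linear regression with a conjugate Gaussian
# prior: prior -> posterior -> credible interval for the slope.
import numpy as np

rng = np.random.default_rng(0)
anxiety = rng.normal(size=100)                          # hypothetical predictor
ern = 0.5 * anxiety + rng.normal(scale=1.0, size=100)   # hypothetical outcome

X = np.column_stack([np.ones_like(anxiety), anxiety])   # intercept + slope
sigma2 = 1.0    # assumed (known) residual variance
tau2 = 10.0     # prior variance: beta ~ N(0, tau2 * I)

# Posterior for beta is Gaussian N(m, V):
#   V = (X'X / sigma2 + I / tau2)^-1,  m = V X'y / sigma2
V = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
m = V @ X.T @ ern / sigma2

sd = np.sqrt(V[1, 1])   # posterior SD of the slope
print(f"slope: {m[1]:.3f}, 95% CI: [{m[1] - 1.96*sd:.3f}, {m[1] + 1.96*sd:.3f}]")
```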
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background: Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results: We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion: The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method: We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
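The reliability score described above lends itself to a compact sketch. The cluster-membership matrix below is simulated (in practice it would be parsed from the fifty SaTScan runs); reliability is simply the per-county fraction of runs in which the county falls inside a significant cluster, and the 80% stability threshold is an illustrative choice, not the authors'.

```python
# Minimal sketch of a per-location reliability score across repeated
# SaTScan runs with varied scaling parameters. Membership is simulated.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_counties = 50, 3109
membership = rng.random((n_runs, n_counties)) < 0.2   # hypothetical: in a significant cluster?

reliability = membership.mean(axis=0)   # fraction of runs clustering each county
stable = reliability > 0.8              # counties clustered in >80% of runs
print(f"{stable.sum()} counties form stable clusters")
```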
Cutibacterium acnes molecular typing: time to standardize the method.
Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S
2018-03-12
The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher the extent to which specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level: multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogenetic investigations involving numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results from one article to another, as well as the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
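The proposed consensus algorithm maps cleanly onto a small dispatch function; a minimal sketch, with resolution labels chosen here for illustration:

```python
# Minimal sketch of the consensus typing algorithm proposed in the review:
# choose the molecular typing method from the phylogenetic resolution needed.
def cacnes_typing_method(resolution: str) -> str:
    consensus = {
        "phylotype": "multiplex PCR",
        "clonal_complex": "MLST9",
        "phylogeny": "SLST",
    }
    try:
        return consensus[resolution]
    except KeyError:
        raise ValueError(f"unknown resolution level: {resolution!r}")

print(cacnes_typing_method("clonal_complex"))   # -> MLST9
```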
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. Therefore, the accuracy and applicability of axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied to validate the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
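For readers unfamiliar with the schemes being compared, a sketch in standard LB notation (assumed generic forms, e.g., as in Guo-type forcing analyses; the axisymmetric source terms themselves are omitted):

```latex
% Generic discrete LB equation with a source term S_i. Direct forcing adds
% the source at the current node and time:
f_i(\mathbf{x}+\mathbf{e}_i\Delta t,\; t+\Delta t) - f_i(\mathbf{x},t)
  = -\frac{1}{\tau}\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\right]
    + \Delta t\, S_i(\mathbf{x},t).
% The trapezium rule instead averages the source over the time step:
f_i(\mathbf{x}+\mathbf{e}_i\Delta t,\; t+\Delta t) - f_i(\mathbf{x},t)
  = -\frac{1}{\tau}\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\right]
    + \frac{\Delta t}{2}\left[S_i(\mathbf{x},t)
      + S_i(\mathbf{x}+\mathbf{e}_i\Delta t,\; t+\Delta t)\right],
% with the implicitness removed by the substitution
% \bar{f}_i = f_i - \tfrac{\Delta t}{2} S_i.
```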
Genotoxicity investigations on nanomaterials.
Oesch, Franz; Landsiedel, Robert
2012-07-01
This review is based on the lecture presented at the April 2010 nanomaterials safety assessment Postsatellite to the 2009 EUROTOX Meeting and summarizes genotoxicity investigations on nanomaterials published in the open scientific literature (up to 2008). Special attention is paid to the relationship between particle size and positive versus negative outcome, as well as the dependence of the outcome on the test used. Salient conclusions and outstanding recommendations emerging from the information summarized in this review are as follows: recognize that nanomaterials are not all the same; therefore know and document what nanomaterial has been tested and in what form; take nanomaterials' specific properties into account; in order to make your results comparable with those of others and on other nanomaterials, use or at least include in your studies standardized methods; use in vivo studies to put in vitro results into perspective; take uptake and distribution of the nanomaterial into account; and, in order to be able to extrapolate to human risk, learn about the mechanisms of nanomaterials' genotoxic effects. Past experience with standard non-nanosubstances had already shown that mechanisms of genotoxic effects can be complex and their elucidation can be demanding, while there often is an immediate need to assess the genotoxic hazard. Thus, a practical and pragmatic approach to genotoxicity investigations of novel nanomaterials is the use of a battery of standard genotoxicity testing methods covering a wide range of mechanisms. Application of these standard methods to nanomaterials demands, however, adaptations, and the interpretation of results from the genotoxicity testing of nanomaterials needs additional considerations exceeding those used for standard size materials.
7 CFR 611.10 - Standards, guidelines, and plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...
7 CFR 611.10 - Standards, guidelines, and plans.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...
7 CFR 611.10 - Standards, guidelines, and plans.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...
7 CFR 611.10 - Standards, guidelines, and plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...
7 CFR 611.10 - Standards, guidelines, and plans.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...
ERIC Educational Resources Information Center
James, Richard A.
2013-01-01
In standards-based education the importance of interpreting standards and effectively embedding them into instructional design is critical in connecting curriculum and instruction. Finding the link between standards and instruction while striving to engage students has proven difficult. Too often instructional design does not meet the cognitive…
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Standard Setting: A Systematic Approach to Interpreting Student Learning.
ERIC Educational Resources Information Center
DeMars, Christine E.; Sundre, Donna L.; Wise, Steven L.
2002-01-01
Describes workshops designed to set standards for freshman technological literacy at James Madison University (Virginia). Results indicated that about 30% of incoming freshmen could meet the standards set initially; by the end of the year, an additional 50-60% could meet them. Provides recommendations for standard setting in a general education…
Interpreting and Reporting Radiological Water-Quality Data
McCurdy, David E.; Garbarino, John R.; Mullin, Ann H.
2008-01-01
This document provides information to U.S. Geological Survey (USGS) Water Science Centers on interpreting and reporting radiological results for samples of environmental matrices, most notably water. The information provided is intended to be broadly useful throughout the United States, but scientists who work at sites containing radioactive hazardous wastes should consult additional sources for more detailed information. The document is largely based on recognized national standards and guidance documents for radioanalytical sample processing, most notably the Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP), and on documents published by the U.S. Environmental Protection Agency and the American National Standards Institute. It does not include discussion of standard USGS practices, including field quality-control sample analysis, interpretive report policies, and related issues, all of which shall always be included in any effort by the Water Science Centers. The use of 'shall' in this report signifies a policy requirement of the USGS Office of Water Quality.
A Bayesian account of quantum histories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marlow, Thomas
2006-05-15
We investigate whether quantum history theories can be consistent with Bayesian reasoning and whether such an analysis helps clarify the interpretation of such theories. First, we summarise and extend recent work categorising two different approaches to formalising multi-time measurements in quantum theory. The standard approach consists of describing an ordered series of measurements in terms of history propositions with non-additive 'probabilities.' The non-standard approach consists of defining multi-time measurements to consist of sets of exclusive and exhaustive history propositions and recovering the single-time exclusivity of results when discussing single-time history propositions. We analyse whether such history propositions can be consistent with Bayes' rule. We show that a certain class of histories is given a natural Bayesian interpretation, namely, the linearly positive histories originally introduced by Goldstein and Page. Thus, we argue that this gives a certain amount of interpretational clarity to the non-standard approach. We also attempt a justification of our analysis using Cox's axioms of probability theory.
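A sketch of the linear positivity condition referred to above, in what we believe is the standard Goldstein-Page form (stated here from general knowledge, not taken from the paper itself):

```latex
% Candidate probability for a history proposition \alpha with class operator
% C_\alpha and initial state \rho (Goldstein--Page, assumed standard form):
p(\alpha) = \operatorname{Re}\operatorname{Tr}\!\left(C_\alpha\,\rho\right).
% A set of histories is "linearly positive" when p(\alpha) \ge 0 for every
% \alpha; these p(\alpha) are then additive and admit updating by Bayes' rule.
```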
29 CFR 5.13 - Rulings and interpretations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... SUBJECT TO THE CONTRACT WORK HOURS AND SAFETY STANDARDS ACT) Davis-Bacon and Related Acts Provisions and... Hour Division, Employment Standards Administration, U.S. Department of Labor, Washington, DC 20210. ...
29 CFR 5.13 - Rulings and interpretations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... SUBJECT TO THE CONTRACT WORK HOURS AND SAFETY STANDARDS ACT) Davis-Bacon and Related Acts Provisions and... Hour Division, Employment Standards Administration, U.S. Department of Labor, Washington, DC 20210. ...
Impact of HIPAA’s Minimum Necessary Standard on Genomic Data Sharing
Evans, Barbara J.; Jarvik, Gail P.
2017-01-01
Purpose This article provides a brief introduction to the HIPAA Privacy Rule’s minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. Methods This research used the Thomson Reuters Westlaw™ database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data-sharing. We considered relevant example cases of genomic data-sharing needs. Results In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized—whether for research, public health, or clinical interpretation and medical practice support—affects how the minimum necessary standard applies and its overall impact on data access and use. Conclusion There is no clear regulatory guidance on how to apply HIPAA’s minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy-makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows. PMID:28914268
In Vitro Activity of Cephalothin and Three Penicillins Against Escherichia coli and Proteus Species
Barry, Arthur L.; Hoeprich, Paul D.
1973-01-01
The susceptibility of clinical isolates of Escherichia coli (67) and Proteus species (58) to cephalothin, ampicillin, benzyl penicillin, and phenoxymethyl penicillin was determined in vitro by using broth dilution and disk diffusion tests. The data were correlated by using a four-category scheme for interpreting minimal inhibitory concentrations (groups 1 to 4) as recommended by a subcommittee of an international collaborative study of susceptibility testing. With cephalothin and ampicillin, groups 1 (susceptible) and 2 (moderately susceptible) were susceptible by the disk test, and with benzyl penicillin, similar results were observed when the interpretive zone standards were changed. Strains categorized as group 4 (very resistant) were resistant by the disk method, but group 3 (moderately resistant) strains were not adequately distinguished by disk testing. Group 3 susceptibility to benzyl and phenoxymethyl penicillins can be predicted by extrapolating results from tests with ampicillin disks. PMID:4202343
Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Ralf
2014-01-14
Single particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble averaged quantities. In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
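The central object here, the time-averaged mean squared displacement, is easy to state and compute; a minimal sketch on a toy Brownian trajectory (for an ergodic process such as this one, the time average converges to the ensemble MSD):

```python
# Minimal sketch of the time-averaged MSD of a single trajectory r(t):
#   TA-MSD(lag) = (1/(T - lag)) * sum_t [r(t + lag) - r(t)]^2
import numpy as np

rng = np.random.default_rng(5)
r = np.cumsum(rng.normal(size=100_000))   # 1D Brownian trajectory (toy example)

def ta_msd(r, lag):
    disp = r[lag:] - r[:-lag]    # displacements over the given lag
    return np.mean(disp ** 2)

for lag in (10, 100, 1000):
    print(f"lag {lag}: TA-MSD = {ta_msd(r, lag):.1f}")   # ~ lag for Brownian motion
```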
Considering relatives when assessing the evidential strength of mixed DNA profiles.
Taylor, Duncan; Bright, Jo-Anne; Buckleton, John
2014-11-01
Sophisticated methods of DNA profile interpretation have enabled scientists to calculate weights for genotype sets proposed to explain some observed data. Using standard formulae, these weights can be incorporated into an LR calculation that considers two competing propositions. We demonstrate here how consideration of relatedness to the person of interest can be incorporated into an LR calculation, and how the same calculation can be used for familial searches of complex mixtures. We provide a general formula that can be used in semi- or fully automated methods of calculation and demonstrate its use by working through an example. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
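The LR structure being described is, in generic form, a ratio of weighted genotype-set probabilities; a sketch of that standard form (notation assumed for illustration, not taken from the paper: w_j are the interpretation weights and S_j the proposed genotype sets):

```latex
% Generic two-proposition likelihood ratio over proposed genotype sets:
LR = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
   = \frac{\sum_j w_j \,\Pr(S_j \mid H_p)}{\sum_j w_j \,\Pr(S_j \mid H_d)},
% where relatedness to the person of interest enters through the conditional
% genotype probabilities \Pr(S_j \mid H).
```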
Review of measurement instruments in clinical and research ethics, 1999–2003
Redman, B K
2006-01-01
Every field of practice has the responsibility to evaluate its outcomes and to test its theories. Evidence of the underdevelopment of measurement instruments in bioethics suggests that attending to strengthening existing instruments and developing new ones will facilitate the interpretation of accumulating bodies of research as well as the making of clinical judgements. A review of 65 instruments reported in the published literature showed 10 with even a minimal level of psychometric data. Two newly developed instruments provide examples of the full use of psychometric and ethical theory. Bioethicists use a wide range of methods for knowledge development and verification; each method should meet stringent standards of quality. PMID:16507659
Considerations on the ASTM standards 1789-04 and 1422-05 on the forensic examination of ink.
Neumann, Cedric; Margot, Pierre
2010-09-01
The ASTM standards on Writing Ink Identification (ASTM 1789-04) and on Writing Ink Comparison (ASTM 1422-05) are the most up-to-date guidelines that have been published on the forensic analysis of ink. The aim of these documents is to cover most aspects of the forensic analysis of ink evidence, from the analysis of ink samples, the comparison of the analytical profile of these samples (with the aim to differentiate them or not), through to the interpretation of the result of the examination of these samples in a forensic context. Significant evolutions in the technology available to forensic scientists, in the quality assurance requirements brought onto them, and in the understanding of frameworks to interpret forensic evidence have been made in recent years. This article reviews the two standards in the light of these evolutions and proposes some practical improvements in terms of the standardization of the analyses, the comparison of ink samples, and the interpretation of ink examination. Some of these suggestions have already been included in a DHS funded project aimed at creating a digital ink library for the United States Secret Service. © 2010 American Academy of Forensic Sciences.
Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.
Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip
2017-06-01
Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.
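A toy simulation makes the regression-adjustment result concrete. This sketch is not the authors' simulation design; selection on a single baseline covariate and the effect size are invented for illustration.

```python
# Minimal sketch: unadjusted vs regression-adjusted treatment effect when
# group membership depends on a baseline covariate. Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
baseline = rng.normal(size=n)                              # e.g., pre-test score
treated = rng.random(n) < 1 / (1 + np.exp(-baseline))      # selection on baseline
outcome = 0.3 * treated + 0.8 * baseline + rng.normal(size=n)  # true effect = 0.3

# Unadjusted contrast is biased by the baseline difference between groups
print(f"unadjusted: {outcome[treated].mean() - outcome[~treated].mean():.2f}")

# Regression adjustment: include the covariate in an OLS model
X = np.column_stack([np.ones(n), treated, baseline])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"regression-adjusted: {coef[1]:.2f} (truth = 0.30)")
```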
21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...
21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...
ERIC Educational Resources Information Center
Stebbins, G. Ledyard; Ayala, Francisco J.
1985-01-01
Recent developments in molecular biology and new interpretations of the fossil record are gradually altering and adding to Charles Darwin's theory, which has been the standard view of the process of evolution for 40 years. Several of these developments and interpretations are identified and discussed. (JN)
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
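The Bayesian calculation described is a textbook Beta-Binomial update; a minimal sketch under Jeffreys' prior, with hypothetical sample counts and a 5% allowed exceedance proportion:

```python
# Minimal sketch of "confidence of compliance": posterior probability that
# the true exceedance proportion p is below the allowed proportion p0,
# given a Beta prior and a binomial likelihood.
from scipy.stats import beta

n, x = 60, 2       # hypothetical: 60 samples, 2 exceedances observed
p0 = 0.05          # allowed exceedance proportion for the percentile standard
a, b = 0.5, 0.5    # Jeffreys' prior, as the article recommends

posterior = beta(a + x, b + n - x)        # conjugate Beta posterior for p
print(f"confidence of compliance: {posterior.cdf(p0):.3f}")
```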
Imaging biomarkers in liver fibrosis.
Berzigotti, A; França, M; Martí-Aguado, D; Martí-Bonmatí, L
There is a need for early identification of patients with chronic liver diseases due to their increasing prevalence and morbidity-mortality. The degree of liver fibrosis determines the prognosis and therapeutic options in this population. Liver biopsy represents the reference standard for fibrosis staging. However, given its limitations and complications, different non-invasive methods have been developed recently for the in vivo quantification of fibrosis. Due to their precision and reliability, biomarker measurements derived from ultrasound and magnetic resonance stand out. This article reviews the different acquisition techniques and image processing methods currently used in the evaluation of liver fibrosis, focusing on their diagnostic performance, applicability and clinical value. In order to interpret their results properly in the appropriate clinical context, it is necessary to understand the techniques and their quality parameters, the standardization and validation of the measurement units, and the quality control of methodological problems. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Olmo, Rocío; Silva, Ana Cláudia; Díaz-Manzano, Fernando E; Cabrera, Javier; Fenoll, Carmen; Escobar, Carolina
2017-01-01
Plant parasitic nematodes have a great impact on agricultural systems. The search for effective control methods is partly based on understanding the underlying molecular mechanisms leading to the formation of nematode feeding sites. In this respect, crosstalk of hormones such as auxins and cytokinins (IAA, CK) between the plant and the nematode seems to be crucial. Hence, the study of loss-of-function or overexpressing lines with altered IAA and CK functioning is warranted. Those lines frequently show developmental defects in the number, position and/or length of the lateral roots, which could bias the interpretation of nematode infection parameters. Here we present a protocol to assess differences in nematode infectivity with the lowest interference from root architecture phenotypes in the results. Tailored growth conditions and normalization parameters facilitate the standardized phenotyping of nematode infection.
The epidemiology of premature ejaculation.
Saitz, Theodore Robert; Serefoglu, Ege Can
2016-08-01
Vast advances have occurred over the past decade with regards to understanding the epidemiology, pathophysiology and management of premature ejaculation (PE); however, we still have much to learn about this common sexual problem. As a standardized evidence-based definition of PE has only recently been established, the reported prevalence rates of PE prior to this definition have been difficult to interpret. As a result, a large range of conflicting prevalence rates have been reported. In addition to the lack of a standardized definition and operational criteria, the method of recruitment for study participation and method of data collection have obviously contributed to the broad range of reported prevalence rates. The new criteria and classification of PE will allow for continued research into the diverse phenomenology, etiology and pathogenesis of the disease to be conducted. While the absolute pathophysiology and true prevalence of PE remains unclear, developing a better understanding of the true prevalence of the disease will allow for the completion of more accurate analysis and treatment of the disease.
Comparing biomarker measurements to a normal range: when ...
This commentary is the second of a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice of using the standard error of the mean or the calculated standard deviation to compare or predict measurement results. The National Exposure Research Laboratory's (NERL's) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
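A toy example of the distinction at issue (simulated values; the point is only that the two statistics answer different questions):

```python
# Minimal sketch: the standard deviation describes the spread of individual
# biomarker measurements; the standard error of the mean describes the
# uncertainty of the estimated mean.
import numpy as np

rng = np.random.default_rng(2)
values = rng.normal(loc=10.0, scale=2.0, size=25)   # hypothetical biomarker data

sd = values.std(ddof=1)             # spread of individual measurements
sem = sd / np.sqrt(values.size)     # uncertainty of the sample mean
print(f"SD  = {sd:.2f}  (compare a single result to the normal range)")
print(f"SEM = {sem:.2f}  (compare group means)")
```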
Sousa, Marlos G.; Carareto, Roberta; Pereira-Junior, Valdo A.; Aquino, Monally C.C.
2011-01-01
Although the rectal mucosa remains the traditional site for measuring body temperature in dogs, an increasing number of clinicians have been using auricular temperature to estimate core body temperature. In this study, 88 mature healthy dogs had body temperatures measured with auricular and rectal thermometers. The mean temperature and confidence intervals were similar for each method, but Bland-Altman plots showed high biases and limits of agreement unacceptable for clinical purposes. The results indicate that auricular and rectal temperatures should not be interpreted interchangeably. PMID:21731094
Entanglement entropy of electromagnetic edge modes.
Donnelly, William; Wall, Aron C
2015-03-20
The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decade-old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.
A practical overview of how to conduct a systematic review.
Davis, Dilla
2016-11-16
With an increasing focus on evidence-based practice in health care, it is important that nurses understand the principles underlying systematic reviews. Systematic reviews are used in healthcare to present a comprehensive, policy-neutral, transparent and reproducible synthesis of evidence. This article provides a practical overview of the process of undertaking systematic reviews, explaining the rationale for each stage. It provides guidance on the standard methods applicable to every systematic review: writing and registering a protocol; planning a review; searching and selecting studies; data collection; assessing the risk of bias; and interpreting results.
Cell-based Assays for Assessing Toxicity: A Basic Guide.
Parboosing, Raveen; Mzobe, Gugulethu; Chonco, Louis; Moodley, Indres
2016-01-01
Assessment of toxicity is an important component of the drug discovery process. Cell-based assays are a popular choice for assessing cytotoxicity. However, these assays are complex because of the wide variety of formats and methods that are available, the lack of standardization, confusing terminology, and the inherent variability of biological systems and measurement. This review is intended as a guide on how to take these factors into account when planning, conducting and/or interpreting cell-based toxicity assays. Copyright © Bentham Science Publishers.
Diagnosing pulmonary edema: lung ultrasound versus chest radiography.
Martindale, Jennifer L; Noble, Vicki E; Liteplo, Andrew
2013-10-01
Diagnosing the underlying cause of acute dyspnea can be challenging. Lung ultrasound may help to identify pulmonary edema as a possible cause. To evaluate the ability of residents to recognize pulmonary edema on lung ultrasound using chest radiographs as a comparison standard. This is a prospective, blinded, observational study of a convenience sample of resident physicians in the Departments of Emergency Medicine (EM), Internal Medicine (IM), and Radiology. Residents were given a tutorial on interpreting pulmonary edema on both chest radiograph and lung ultrasound. They were then shown both ultrasounds and chest radiographs from 20 patients who had presented to the emergency department with dyspnea, 10 with a primary diagnosis of pulmonary edema, and 10 with alternative diagnoses. Cohen's κ values were calculated to describe the strength of the correlation between resident and gold standard interpretations. Participants included 20 EM, 20 IM, and 20 Radiology residents. The overall agreement with gold standard interpretation of pulmonary edema on lung ultrasound (74%, κ = 0.51, 95% confidence interval 0.46-0.55) was superior to chest radiographs (58%, κ = 0.25, 95% confidence interval 0.20-0.30) (P < 0.0001). EM residents interpreted lung ultrasounds more accurately than IM residents. Radiology residents interpreted chest radiographs more accurately than did EM and IM residents. Residents were able to more accurately identify pulmonary edema with lung ultrasound than with chest radiograph. Physicians with minimal exposure to lung ultrasound may be able to correctly recognize pulmonary edema on lung ultrasound.
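Cohen's κ, the agreement statistic used throughout this study, is straightforward to compute; a minimal sketch with hypothetical binary readings over 20 cases:

```python
# Minimal sketch of Cohen's kappa: chance-corrected agreement between a
# reader and the gold standard. Ratings here are hypothetical binary calls
# (pulmonary edema present = 1 / absent = 0).
import numpy as np

def cohens_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()          # observed agreement
    pe = 0.0                        # expected (chance) agreement
    for c in np.union1d(r1, r2):
        pe += (r1 == c).mean() * (r2 == c).mean()
    return (po - pe) / (1 - pe)

resident = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
gold     = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(resident, gold):.2f}")
```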
Pfeiffer, Christine M; Looker, Anne C
2017-12-01
Biochemical assessment of iron status relies on serum-based indicators, such as serum ferritin (SF), transferrin saturation, and soluble transferrin receptor (sTfR), as well as erythrocyte protoporphyrin. These indicators present challenges for clinical practice and national nutrition surveys, and often iron status interpretation is based on the combination of several indicators. The diagnosis of iron deficiency (ID) through SF concentration, the most commonly used indicator, is complicated by concomitant inflammation. sTfR concentration is an indicator of functional ID that is not an acute-phase reactant, but challenges in its interpretation arise because of the lack of assay standardization, common reference ranges, and common cutoffs. It is unclear which indicators are best suited to assess excess iron status. The value of hepcidin, non-transferrin-bound iron, and reticulocyte indexes is being explored in research settings. Serum-based indicators are generally measured on fully automated clinical analyzers available in most hospitals. Although international reference materials have been available for years, the standardization of immunoassays is complicated by the heterogeneity of antibodies used and the absence of physicochemical reference methods to establish "true" concentrations. From 1988 to 2006, the assessment of iron status in NHANES was based on the multi-indicator ferritin model. However, the model did not indicate the severity of ID and produced categorical estimates. More recently, iron status assessment in NHANES has used the total body iron stores (TBI) model, in which the log ratio of sTfR to SF is assessed. Together, sTfR and SF concentrations cover the full range of iron status. The TBI model better predicts the absence of bone marrow iron than SF concentration alone, and TBI can be analyzed as a continuous variable. Additional consideration of methodologies, interpretation of indicators, and analytic standardization is important for further improvements in iron status assessment. © 2017 American Society for Nutrition.
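The TBI model mentioned above is commonly given in the Cook et al. form used by NHANES; a sketch under that assumption (the coefficients and unit conventions below are quoted from general knowledge and should be verified against the primary source):

```python
# Minimal sketch of the total body iron (TBI) model: TBI as a function of
# the log ratio of sTfR to serum ferritin. Assumed Cook et al. form; units
# assumed to be sTfR in mg/L (converted to ug/L) and ferritin in ug/L.
# Negative values indicate a tissue iron deficit.
import math

def total_body_iron(stfr_mg_per_l: float, ferritin_ug_per_l: float) -> float:
    ratio = (stfr_mg_per_l * 1000.0) / ferritin_ug_per_l
    return -(math.log10(ratio) - 2.8229) / 0.1207   # mg/kg body weight

print(f"TBI = {total_body_iron(5.0, 30.0):.1f} mg/kg")
```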
Wells, David B; Bhattacharya, Swati; Carr, Rogan; Maffeo, Christopher; Ho, Anthony; Comer, Jeffrey; Aksimentiev, Aleksei
2012-01-01
Molecular dynamics (MD) simulations have become a standard method for the rational design and interpretation of experimental studies of DNA translocation through nanopores. The MD method, however, offers a multitude of algorithms, parameters, and other protocol choices that can affect the accuracy of the resulting data as well as computational efficiency. In this chapter, we examine the most popular choices offered by the MD method, seeking an optimal set of parameters that enable the most computationally efficient and accurate simulations of DNA and ion transport through biological nanopores. In particular, we examine the influence of short-range cutoff, integration timestep and force field parameters on the temperature and concentration dependence of bulk ion conductivity, ion pairing, ion solvation energy, DNA structure, DNA-ion interactions, and the ionic current through a nanopore.
Geophysical Analysis of an Urban Region in Southwestern Pennsylvania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harbert, W.P.; Lipinski, B.A.; Kaminski, V.
2006-12-01
The goal of this project was to characterize the subsurface beneath an urban region of Southwestern Pennsylvania, determine the geological structure, and attempt to image pathways for gas migration in this area. Natural gas had been commercially produced from this region at the turn of the century, but this field, with more than 100 wells drilled, was closed approximately eighty years ago. There are surface expressions of gas migration visible in the study region. We applied geophysical methods to determine the geological structure in this region, including a multifrequency electromagnetic survey performed using the Geophex GEM-2 system, portable reflection seismic, and a System I/O-based reflection seismic survey. Processing and interpretation of the EM data included filtering 10 raw channels (in-phase and quadrature components measured at 5 frequencies), inverting the data for apparent conductivity using the EM1DFM software from the University of British Columbia, Canada, and further interpretation in terms of near-surface features at a maximum depth of up to 20 meters. Analysis of the collected seismic data included standard seismic processing and the use of the SurfSeis software package developed by the Kansas Geological Survey. Standard reflection processing of these data was completed using LandMark ProMAX 2D/3D and Parallel Geoscience Corporation software. Final stacked sections were then imported into a Seismic Micro Technologies Kingdom Suite+ geodatabase for visualization and analysis. Interpretation of these data was successful in identifying and confirming a region of unmined Freeport coal, determining regional stratigraphic structure, and identifying possible S-wave low-velocity anomalies in the shallow subsurface.
Fancourt, Nicholas; Deloria Knoll, Maria; Barger-Kamate, Breanna; de Campo, John; de Campo, Margaret; Diallo, Mahamadou; Ebruke, Bernard E; Feikin, Daniel R; Gleeson, Fergus; Gong, Wenfeng; Hammitt, Laura L; Izadnegahdar, Rasa; Kruatrachue, Anchalee; Madhi, Shabir A; Manduku, Veronica; Matin, Fariha Bushra; Mahomed, Nasreen; Moore, David P; Mwenechanya, Musaku; Nahar, Kamrun; Oluwalana, Claire; Ominde, Micah Silaba; Prosperi, Christine; Sande, Joyce; Suntarattiwong, Piyarat; O'Brien, Katherine L
2017-06-15
Chest radiographs (CXRs) are a valuable diagnostic tool in epidemiologic studies of pneumonia. The World Health Organization (WHO) methodology for the interpretation of pediatric CXRs has not been evaluated beyond its intended application as an endpoint measure for bacterial vaccine trials. The Pneumonia Etiology Research for Child Health (PERCH) study enrolled children aged 1-59 months hospitalized with WHO-defined severe and very severe pneumonia from 7 low- and middle-income countries. An interpretation process categorized each CXR into 1 of 5 conclusions: consolidation, other infiltrate, both consolidation and other infiltrate, normal, or uninterpretable. Two members of a 14-person reading panel, who had undertaken training and standardization in CXR interpretation, interpreted each CXR. Two members of an arbitration panel provided additional independent reviews of CXRs with discordant interpretations at the primary reading, blinded to previous reports. Further discordance was resolved with consensus discussion. A total of 4172 CXRs were obtained from 4232 cases. Observed agreement for detecting consolidation (with or without other infiltrate) between primary readers was 78% (κ = 0.50) and between arbitrators was 84% (κ = 0.61); agreement for primary readers and arbitrators across 5 conclusion categories was 43.5% (κ = 0.25) and 48.5% (κ = 0.32), respectively. Disagreement was most frequent between conclusions of other infiltrate and normal for both the reading panel and the arbitration panel (32% and 30% of discordant CXRs, respectively). Agreement was similar to that of previous evaluations using the WHO methodology for detecting consolidation, but poor for other infiltrates despite attempts at a rigorous standardization process. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Weiss, K; Laverdière, M; Rivest, R
1996-01-01
Corynebacterium species are increasingly being implicated in foreign-body infections and in immunocompromised-host infections. However, there are no specific recommendations on the method or the criteria to use in order to determine the in vitro activities of the antibiotics commonly used to treat Corynebacterium infections. The first aim of our study was to compare the susceptibilities of various species of Corynebacterium to vancomycin, erythromycin, and penicillin by using a broth microdilution method and a disk diffusion method. Second, the activity of penicillin against our isolates was assessed by using the interpretative criteria recommended by the National Committee for Clinical Laboratory Standards for the determination of the susceptibility of streptococci and Listeria monocytogenes to penicillin. Overall, 100% of the isolates were susceptible to vancomycin, while considerable variations in the activities of erythromycin and penicillin were noted for the different species tested, including the non-Corynebacterium jeikeium species. A good correlation in the susceptibilities of vancomycin and erythromycin between the disk diffusion and the microdilution methods was observed. However, a 5% rate of major or very major errors was detected with the Listeria criteria, while a high rate of minor errors (18%) was noted when the streptococcus criteria were used. Our findings indicate considerable variations in the activities of erythromycin and penicillin against the various species of Corynebacterium. Because of the absence of definite recommendations, important discrepancies were observed between the methods and the interpretations of the penicillin activity. PMID:8849254
Robson, Michael; Murphy, Martina; Byrne, Fionnuala
2015-10-01
Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardization of audit of events and outcomes, any differences in either sizes of groups, events or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) is used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.
Dawes, Sharron E.; Palmer, Barton W.; Jeste, Dilip V.
2008-01-01
Purpose of review: Although the basic standards of adjudicative competence were specified by the U.S. Supreme Court in 1960, there remain a number of complex conceptual and practical issues in interpreting and applying these standards. In this report we provide a brief overview regarding the general concept of adjudicative competence and its assessment, as well as some highlights of recent empirical studies on this topic. Findings: Most adjudicative competence assessments are conducted by psychiatrists or psychologists. There are no universal certification requirements, but some states are moving toward required certification of forensic expertise for those conducting such assessments. Recent data indicate inconsistencies in application of the existing standards even among forensic experts, but the recent publication of consensus guidelines may foster improvements in this arena. There are also ongoing efforts to develop and validate structured instruments to aid competency evaluations. Telemedicine-based competency interviews may facilitate evaluation by those with specific expertise for evaluation of complex cases. There is also interest in empirical development of educational methods to enhance adjudicative competence. Summary: Adjudicative competence may be difficult to measure accurately, but the assessments and tools available are advancing. More research is needed on methods of enhancing decisional capacity among those with impaired competence. PMID:18650693
Unregulated Autonomy: Uncredentialed Educational Interpreters in Rural Schools.
Fitzmaurice, Stephen
2017-01-01
Although many rural Deaf and Hard of Hearing students attend public schools most of the day and use the services of educational interpreters to gain access to the school environment, little information exists on what interpreters are doing in rural school systems in the absence of credentialing requirements. The researcher used ethnographic interviews and field observations of three educational interpreters with no certification or professional assessment to explore how uncredentialed interpreters were enacting their role in a rural high school. The findings indicate that uncredentialed interpreters in rural settings perform four major functions during their school day: preparing the environment, staff, and materials; interpreting a variety of content; interacting with numerous stakeholders; and directly instructing Deaf and Hard of Hearing students. Generally, educational interpreters in rural districts operate with unregulated autonomy, a situation that warrants further research and a national standard for all educational interpreters.
Instrumental variable methods in comparative safety and effectiveness research.
Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian
2010-06-01
Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
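A small simulation shows why an instrument helps when confounding is uncontrolled. This sketch uses the simple Wald/2SLS ratio for a binary instrument; the data-generating process and effect sizes are invented for illustration.

```python
# Minimal sketch of IV estimation: with an unmeasured confounder u, the
# naive group comparison is biased, while the Wald/2SLS ratio using a valid
# instrument z recovers the true effect. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
u = rng.normal(size=n)                      # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)            # instrument (e.g., prescriber preference)
treat = (0.8 * z + u + rng.normal(size=n)) > 0.5
y = 1.0 * treat + 2.0 * u + rng.normal(size=n)   # true treatment effect = 1.0

iv_est = np.cov(y, z)[0, 1] / np.cov(treat, z)[0, 1]   # Wald/2SLS estimate
naive = y[treat].mean() - y[~treat].mean()             # confounded comparison
print(f"naive = {naive:.2f}, IV = {iv_est:.2f} (truth = 1.00)")
```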
Effective use of interpreters by family nurse practitioner students: is didactic curriculum enough?
Phillips, Susanne J; Lie, Desiree; Encinas, Jennifer; Ahearn, Carol Sue; Tiso, Susan
2011-05-01
Nurse practitioners (NPs) care for patients with limited English proficiency (LEP). However, NP education for improving communication in interpreted encounters is not well reported. We report a single-school study using standardized encounters within a clinical practice examination (CPX) to assess the adequacy of the current curriculum. Entering family NP (FNP) students (n=26) participated in a baseline CPX case. They were assessed by standardized patients using the validated Interpreter Impact Rating Scale (IIRS) and Physician-Patient Interaction (PPI) scale, and by interpreters using the Interpreter Scale (IS). The case was re-administered to 31 graduating students following completion of the existing curriculum. The primary outcome was aggregate change in skills comprising global IIRS, PPI and IS scores. Pre- and post-performance data were available for one class of 10 students. The secondary outcome was change in skill scores for this class. Mean aggregate global scores showed no significant improvement between entry and graduation. For the 10 students with pre- and post-performance data, there was no improvement in skill scores on any measure. Skill scores on one measure worsened. FNP students show no improvement in skills in working with interpreters under the current curriculum. An enhanced curriculum is needed. ©2011 The Author(s) Journal compilation ©2011 American Academy of Nurse Practitioners.
Promoted Combustion Test Data Re-Examined
NASA Technical Reports Server (NTRS)
Lewis, Michelle; Jeffers, Nathan; Stoltzfus, Joel
2010-01-01
Promoted combustion testing of metallic materials has been performed by NASA since the mid-1980s to determine the burn resistance of materials in oxygen-enriched environments. As the technology has advanced, the method of interpreting, presenting, and applying the promoted combustion data has advanced as well. Recently NASA changed the burn criterion from 15 cm (6 in.) to 3 cm (1.2 in.). This new burn criterion was adopted for ASTM G 124, Standard Test Method for Determining the Combustion Behavior of Metallic Materials in Oxygen-Enriched Atmospheres. Its effect on the test data and the latest method to display the test data will be discussed. Two specific examples that illustrate how this new criterion affects the burn/no-burn thresholds of metal alloys will also be presented.
Variability of the QuantiFERON®-TB gold in-tube test using automated and manual methods.
Whitworth, William C; Goodwin, Donald J; Racster, Laura; West, Kevin B; Chuke, Stella O; Daniels, Laura J; Campbell, Brandon H; Bohanon, Jamaria; Jaffar, Atheer T; Drane, Wanzer; Sjoberg, Paul A; Mazurek, Gerald H
2014-01-01
The QuantiFERON®-TB Gold In-Tube test (QFT-GIT) detects Mycobacterium tuberculosis (Mtb) infection by measuring release of interferon gamma (IFN-γ) when T-cells (in heparinized whole blood) are stimulated with specific Mtb antigens. The amount of IFN-γ is determined by enzyme-linked immunosorbent assay (ELISA). Automation of the ELISA method may reduce variability. To assess the impact of ELISA automation, we compared QFT-GIT results and variability when ELISAs were performed manually and with automation. Blood was collected into two sets of QFT-GIT tubes and processed at the same time. For each set, IFN-γ was measured in automated and manual ELISAs. Variability in interpretations and IFN-γ measurements was assessed between automated (A1 vs. A2) and manual (M1 vs. M2) ELISAs. Variability in IFN-γ measurements was also assessed on separate groups stratified by the mean of the four ELISAs. Subjects (N = 146) had two automated and two manual ELISAs completed. Overall, interpretations were discordant for 16 (11%) subjects. Excluding one subject with indeterminate results, 7 (4.8%) subjects had discordant automated interpretations and 10 (6.9%) subjects had discordant manual interpretations (p = 0.17). Quantitative variability was not uniform; within-subject variability was greater with higher IFN-γ measurements and with manual ELISAs. For subjects with mean TB Responses within ±0.25 IU/mL of the 0.35 IU/mL cutoff, the within-subject standard deviation for two manual tests was 0.27 (CI95 = 0.22-0.37) IU/mL vs. 0.09 (CI95 = 0.07-0.12) IU/mL for two automated tests. QFT-GIT ELISA automation may reduce variability near the test cutoff. Methodological differences should be considered when interpreting and using IFN-γ release assays (IGRAs).
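As a concrete illustration of the variability statistics used here, the sketch below computes a within-subject standard deviation for paired runs and counts cutoff-crossing discordance. It uses the generic duplicate-measurement formula with made-up values, not the study's data or exact analysis.

```python
import numpy as np

def within_subject_sd(m1, m2):
    """Within-subject SD for paired repeat measurements.

    For duplicates, the within-subject variance is estimated as
    mean(d^2) / 2, where d is the difference between the two runs.
    """
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.mean(d ** 2) / 2.0)

def discordant_interpretations(m1, m2, cutoff=0.35):
    """Count subjects whose two runs fall on opposite sides of the cutoff."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    return int(np.sum((m1 >= cutoff) != (m2 >= cutoff)))

# Hypothetical IFN-gamma TB Responses (IU/mL) from two ELISA runs.
run1 = [0.10, 0.30, 0.40, 1.20, 0.05]
run2 = [0.12, 0.38, 0.28, 1.10, 0.06]
print(within_subject_sd(run1, run2), discordant_interpretations(run1, run2))
```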
Lessons from Sociocultural Writing Research for Implementing the Common Core State Standards
ERIC Educational Resources Information Center
Woodard, Rebecca; Kline, Sonia
2016-01-01
The Common Core State Standards advocate more writing than previous standards; however, in taking a college and career readiness perspective, the Standards neglect to emphasize the role of context and culture in learning to write. We argue that sociocultural perspectives that pay attention to these factors offer insights into how to interpret and…
A systematic review of methods to diagnose oral dryness and salivary gland function
2012-01-01
Background: The most advocated clinical method for diagnosing salivary dysfunction is to quantitate unstimulated and stimulated whole saliva (sialometry). Since there is an expected and wide variation in salivary flow rates among individuals, the assessment of dysfunction can be difficult. The aim of this systematic review is to evaluate the quality of the evidence for the efficacy of diagnostic methods used to identify oral dryness. Methods: A literature search, with specific indexing terms and a hand search, was conducted for publications that described a method to diagnose oral dryness. The electronic databases of PubMed, Cochrane Library, and Web of Science were used as data sources. Four reviewers selected publications on the basis of predetermined inclusion and exclusion criteria. Data were extracted from the selected publications using a protocol. Original studies were interpreted with the aid of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Results: The database searches resulted in 224 titles and abstracts. Of these abstracts, 80 publications were judged to meet the inclusion criteria and were read in full. A total of 18 original studies were judged relevant and interpreted for this review. In all studies, the results of the test method were compared to those of a reference method. Based on the interpretation (with the aid of the QUADAS tool), it can be reported that the patient selection criteria were not clearly described and the test and reference methods were not described in sufficient detail for them to be reproduced. None of the included studies reported information on uninterpretable/intermediate results, nor data on observer or instrument variation. Seven of the studies presented their results as a percentage of correct diagnoses. Conclusions: The evidence for the efficacy of clinical methods to assess oral dryness is sparse, and improved standards for the reporting of diagnostic accuracy are needed in order to assure the methodological quality of studies. There is need for effective diagnostic criteria and functional tests in order to detect those individuals with oral dryness who may require oral treatment, such as alleviation of discomfort and/or prevention of diseases. PMID:22870895
77 FR 47831 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-10
... to Reliability Standard CIP-002-4--Critical Cyber Asset Identification. Filed Date: 8/1/12. Accession... Corporation for Approval of an Interpretation to Reliability Standard CIP-004-4--Personnel and Training. Filed...
Denman, A R; Crockett, R G M; Groves-Kirkby, C J; Phillips, P S
2016-10-01
Radon gas is naturally occurring, and can concentrate in the built environment. It is radioactive and high concentration levels within buildings, including homes, have been shown to increase the risk of lung cancer in the occupants. As a result, several methods have been developed to measure radon. The long-term average radon level determines the risk to occupants, but there is always pressure to complete measurements more quickly, particularly when buying and selling the home. For many years, the three-month exposure using etched-track detectors has been the de facto standard, but a decade ago, Phillips et al. (2003), in a DEFRA funded project, evaluated the use of 1-week and 1-month measurements. They found that the measurement methods were accurate, but the challenge lay in the wide variation in radon levels - with diurnal, seasonal, and other patterns due to climatic factors and room use. In the report on this work, and in subsequent papers, the group proposed methodologies for 1-week, 1-month and 3-month measurements and their interpretation. Other work, however, has suggested that 2-week exposures were preferable to 1-week ones. In practice, the radon remediation industry uses a range of exposure times, and further guidance is required to help interpret these results. This paper reviews the data from this study and a subsequent 4-year study of 4 houses, re-analysing the results and extending them to other exposures, particularly for 2-week and 2-month exposures, and provides comprehensive guidance for the use of etched-track detectors, the value and use of Seasonal Correction Factors (SCFs), the uncertainties in short and medium term exposures and the interpretation of results. Copyright © 2016 Elsevier Ltd. All rights reserved.
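To make the role of seasonal correction factors concrete, here is a minimal sketch of how a short-term result is converted to an annual-mean estimate. The factor value, its multiplicative definition, and the function name are assumptions for illustration only; published schemes tabulate SCFs by start month and exposure length, and some define the factor as a divisor instead.

```python
def annual_average_radon(measured_bq_m3: float, scf: float) -> float:
    """Apply a seasonal correction factor (SCF) to a short-term radon
    measurement to estimate the long-term (annual) mean.

    The SCF is defined multiplicatively here (annual = measured * SCF);
    check the convention of the scheme you are using before applying it.
    """
    return measured_bq_m3 * scf

# Hypothetical: a 2-week winter screening result of 180 Bq/m3 with an
# illustrative SCF of 0.85 (winter levels tend to exceed the annual mean).
print(annual_average_radon(180.0, 0.85))  # -> 153.0 Bq/m3
```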
An empirical approach to inversion of an unconventional helicopter electromagnetic dataset
Pellerin, L.; Labson, V.F.
2003-01-01
A helicopter electromagnetic (HEM) survey acquired at the U.S. Idaho National Engineering and Environmental Laboratory (INEEL) used a modification of a traditional mining airborne method flown at low levels for detailed characterization of shallow waste sites. The low sensor height, used to increase resolution, invalidates standard assumptions used in processing HEM data. Although the survey design strategy was sound, traditional interpretation techniques, routinely used in industry, proved ineffective. Processed data and apparent resistivity maps were severely distorted, and hence unusable, due to low flight height effects, high magnetic permeability of the basalt host, and the conductive, three-dimensional nature of the waste site targets. To accommodate these interpretation challenges, we modified a one-dimensional inversion routine to include a linear term in the objective function that allows for the magnetic and three-dimensional electromagnetic responses in the in-phase data. Although somewhat ad hoc, the use of this term in the inverse routine, referred to as the shift factor, was successful in defining the waste sites and reducing noise due to the low flight height and magnetic characteristics of the host rock. Many inversion scenarios were applied to the data, and careful analysis was necessary to determine the parameters appropriate for interpretation; hence the approach was empirical. Data from three areas were processed with this scheme to highlight different interpretational aspects of the method. Waste sites were delineated with the shift terms in two of the areas, allowing for separation of the anthropogenic targets from the natural one-dimensional host. In the third area, the estimated resistivity and the shift factor were used for geological mapping. The high magnetic content of the native soil enabled the mapping of disturbed soil with the shift term. Published by Elsevier Science B.V.
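The shift factor amounts to letting the inversion solve for an additive bias alongside the model parameters. Below is a minimal linear least-squares sketch of that idea; the forward operator, model, and noise level are all hypothetical, and the actual routine was a nonlinear 1D EM inversion rather than this toy linear fit.

```python
import numpy as np

# Sketch of the "shift factor" idea: augment a linear forward operator G
# with a constant column so the inversion can absorb a bias (standing in
# for magnetic / 3D effects in the in-phase data).
rng = np.random.default_rng(1)
n = 50
G = np.column_stack([np.linspace(0, 1, n), np.linspace(0, 1, n) ** 2])
m_true, shift_true = np.array([2.0, -1.0]), 0.7
d = G @ m_true + shift_true + 0.01 * rng.normal(size=n)

A = np.column_stack([G, np.ones(n)])      # last column carries the shift
sol, *_ = np.linalg.lstsq(A, d, rcond=None)
m_est, shift_est = sol[:-1], sol[-1]
print(m_est, shift_est)                   # recovers ~[2.0, -1.0] and ~0.7
```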
29 CFR 1902.37 - Factors for determination.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... timely developed and promulgated standards which are at least as effective as the comparable Federal... interpreted and applied in a manner which is at least as effective as the interpretation and application of...
NASA Astrophysics Data System (ADS)
Wilcox, Dawn Renee
This dissertation examined elementary teachers' beliefs and perceptions of effective science instruction and documents how these teachers interpret and implement a model for Inquiry-Based (I-B) science in their classrooms. The study chronicles a group of teachers working in a large public school division and documents how these teachers interpret and implement reform-based science methods after participating in a professional development course on I-B science methods administered by the researcher. I-B science teaching and its implementation is discussed as an example of one potential method to address the current call for national education reform to meet the increasing needs of all students to achieve scientific literacy and the role of teachers in that effort. The conviction in science reform efforts is that all students are able to learn science and consequently must be given the crucial opportunities in the right environment that permits optimal science learning in our nation's schools. Following this group of teachers as they attempted to deliver I-B science teaching revealed challenges elementary science teachers face and the professional supports necessary for them to effectively meet science standards. This dissertation serves as partial fulfillment of the requirements for the degree of Doctor of Philosophy in Education at George Mason University.
Standards for Clinical Grade Genomic Databases.
Yohe, Sophia L; Carter, Alexis B; Pfeifer, John D; Crawford, James M; Cushman-Vokoun, Allison; Caughron, Samuel; Leonard, Debra G B
2015-11-01
Next-generation sequencing performed in a clinical environment must meet clinical standards, which requires reproducibility of all aspects of the testing. Clinical-grade genomic databases (CGGDs) are required to classify a variant and to assist in the professional interpretation of clinical next-generation sequencing. Applying quality laboratory standards to the reference databases used for sequence-variant interpretation presents a new challenge for validation and curation. To define CGGD and the categories of information contained in CGGDs and to frame recommendations for the structure and use of these databases in clinical patient care. Members of the College of American Pathologists Personalized Health Care Committee reviewed the literature and existing state of genomic databases and developed a framework for guiding CGGD development in the future. Clinical-grade genomic databases may provide different types of information. This work group defined 3 layers of information in CGGDs: clinical genomic variant repositories, genomic medical data repositories, and genomic medicine evidence databases. The layers are differentiated by the types of genomic and medical information contained and the utility in assisting with clinical interpretation of genomic variants. Clinical-grade genomic databases must meet specific standards regarding submission, curation, and retrieval of data, as well as the maintenance of privacy and security. These organizing principles for CGGDs should serve as a foundation for future development of specific standards that support the use of such databases for patient care.
Vezzosi, T; Buralli, C; Marchesotti, F; Porporato, F; Tognetti, R; Zini, E; Domenech, O
2016-10-01
The diagnostic accuracy of a smartphone electrocardiograph (ECG) in evaluating heart rhythm and ECG measurements was evaluated in 166 dogs. A standard 6-lead ECG was acquired for 1 min in each dog. A smartphone ECG tracing was simultaneously recorded using a single-lead bipolar ECG recorder. All ECGs were reviewed by one blinded operator, who judged if tracings were acceptable for interpretation and assigned an electrocardiographic diagnosis. Agreement between smartphone and standard ECG in the interpretation of tracings was evaluated. Sensitivity and specificity for the detection of arrhythmia were calculated for the smartphone ECG. Smartphone ECG tracings were interpretable in 162/166 (97.6%) tracings. A perfect agreement between the smartphone and standard ECG was found in detecting bradycardia, tachycardia, ectopic beats and atrioventricular blocks. A very good agreement was found in detecting sinus rhythm versus non-sinus rhythm (100% sensitivity and 97.9% specificity). The smartphone ECG provided tracings that were adequate for analysis in most dogs, with an accurate assessment of heart rate, rhythm and common arrhythmias. The smartphone ECG represents an additional tool in the diagnosis of arrhythmias in dogs, but is not a substitute for a 6-lead ECG. Arrhythmias identified by the smartphone ECG should be followed up with a standard ECG before making clinical decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.
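For readers unfamiliar with the accuracy statistics quoted, the sketch below computes sensitivity and specificity from a 2x2 comparison against the reference method. The counts are hypothetical values chosen to be consistent with the reported 100% sensitivity and 97.9% specificity; the study's actual contingency table is not given in the abstract.

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity and specificity of an index test (smartphone ECG)
    against a reference standard (6-lead ECG)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for detecting non-sinus rhythm.
sens, spec = sens_spec(tp=20, fn=0, tn=139, fp=3)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 100.0%, 97.9%
```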
Standardized Evaluation for Multi-National Development Programs.
ERIC Educational Resources Information Center
Farrell, W. Timothy
This paper takes the position that standardized evaluation formats and procedures for multi-national development programs are not only desirable but possible in diverse settings. The key is the localization of standard systems, which involves not only the technical manipulation of items and scales, but also the contextual interpretation of…
Teachers' Knowledge about and Views of the National Standards for Physical Education
ERIC Educational Resources Information Center
Chen, Weiyun
2006-01-01
This study investigated the current levels of teachers' knowledge about and views of the National Standards for Physical Education (NASPE, 1995) and factors that influenced the teachers' understandings and interpretations of the standards. Twenty-five elementary and secondary physical education teachers voluntarily participated in this study. Data…
42 CFR 37.51 - Interpreting and classifying chest radiographs-digital radiography systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... standard digital chest radiographic images provided for use with the Guidelines for the Use of the ILO... NIOSH-approved standard digital images may be used for classifying digital chest images for pneumoconiosis. Modification of the appearance of the standard images using software tools is not permitted. (d...
Three Dimensional Optical Coherence Tomography Imaging: Advantages and Advances
Gabriele, Michelle L; Wollstein, Gadi; Ishikawa, Hiroshi; Xu, Juan; Kim, Jongsick; Kagemann, Larry; Folio, Lindsey S; Schuman, Joel S.
2010-01-01
Three dimensional (3D) ophthalmic imaging using optical coherence tomography (OCT) has revolutionized assessment of the eye, the retina in particular. Recent technological improvements have made the acquisition of 3D-OCT datasets feasible. However, while volumetric data can improve disease diagnosis and follow-up, novel image analysis techniques are now necessary in order to process the dense 3D-OCT dataset. Fundamental software improvements include methods for correcting subject eye motion, segmenting structures or volumes of interest, extracting relevant data post hoc and signal averaging to improve delineation of retinal layers. In addition, innovative methods for image display, such as C-mode sectioning, provide a unique viewing perspective and may improve interpretation of OCT images of pathologic structures. While all of these methods are being developed, most remain in an immature state. This review describes the current status of 3D-OCT scanning and interpretation, and discusses the need for standardization of clinical protocols as well as the potential benefits of 3D-OCT scanning that could come when software methods for fully exploiting these rich data sets are available clinically. The implications of new image analysis approaches include improved reproducibility of measurements garnered from 3D-OCT, which may then help improve disease discrimination and progression detection. In addition, 3D-OCT offers the potential for preoperative surgical planning and intraoperative surgical guidance. PMID:20542136
Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T
2017-05-08
It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was a comparison of semi-quantitative and 3D volumetric quantification methods for assessment of bone tracer uptake (BTU) in those patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert type scale (0-10) and volumetrically quantified using validated software. Two observers interpreted the SPECT/CT findings in all patients two times, in random order, with a six-week interval between interpretations. Semi-quantitative and quantitative measurements were compared in terms of reliability. In addition, the values were correlated using Pearson's correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions, which should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliabilities for all femoral and acetabular regions independent of the measurement method used (semi-quantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between both measurement methods was shown for the distal femur, the proximal femur and the acetabular cup. The factorial cluster analysis showed that the anatomical regions might be summarized into three distinct anatomical regions. These were the proximal femur, the distal femur and the acetabular cup region. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable independent of the measurement method used. Three clinically relevant anatomical regions (proximal femoral, distal femoral, acetabular) were identified.
21 CFR 130.3 - Definitions and interpretations.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Definitions and interpretations. 130.3 Section 130.3 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION FOOD STANDARDS: GENERAL General Provisions § 130.3 Definitions and...
21 CFR 130.3 - Definitions and interpretations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Definitions and interpretations. 130.3 Section 130.3 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION FOOD STANDARDS: GENERAL General Provisions § 130.3 Definitions and...
ERIC Educational Resources Information Center
DeCesare, Tony
2016-01-01
One of Amy Gutmann's important achievements in "Democratic Education" is her development of a "democratic interpretation of equal educational opportunity." This standard of equality demands that "all educable children learn enough to participate effectively in the democratic process." In other words, Gutmann demands…
A methodology to evaluate occupational internal exposure to fluorine-18.
Oliveira, C M; Dantas, A L A; Dantas, B M
2009-11-15
The objective of this work is to develop procedures for internal monitoring of ¹⁸F to be applied in cases of possible incorporation of fluoride and ¹⁸FDG, using in vivo and in vitro methods of measurement. The NaI(Tl) 8" x 4" scintillation detector installed at the IRD Whole Body Counter was calibrated for measurements with a whole body anthropomorphic phantom, simulating homogeneous distribution of ¹⁸F in the body. The NaI(Tl) 3" x 3" scintillation detector installed at the IRD Whole Body Counter was calibrated for in vivo measurements with a brain phantom inserted in an artificial skull, simulating ¹⁸FDG incorporation. The HPGe detection system installed at the IRD Bioassay Laboratory was calibrated for in vitro measurements of urine samples with 1 liter plastic bottles containing a standard liquid source. A methodology for bioassay data interpretation, based on standard ICRP models edited with the software AIDE (version 6), was established. It is concluded that in vivo measurements have sufficient sensitivity for monitoring ¹⁸F in the forms of fluoride and ¹⁸FDG. The use of both in vitro and in vivo bioassay data can provide useful information for the interpretation of bioassay data in cases of accidental incorporation in order to identify the chemical form of ¹⁸F incorporated.
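The calibration step ties measured counts to activity through a detection efficiency; a generic form of that conversion is sketched below. The numbers, the 1% efficiency, and the function are illustrative assumptions, not IRD calibration data; only the ~1.93 annihilation photons emitted per ¹⁸F decay is standard physics.

```python
def activity_bq(net_counts: float, live_time_s: float, efficiency: float,
                photons_per_decay: float = 1.0) -> float:
    """Convert a net peak count to activity (Bq) via the generic relation
    A = N / (t * efficiency * photons_per_decay). Illustrative only."""
    return net_counts / (live_time_s * efficiency * photons_per_decay)

# Hypothetical: 5000 net counts in 600 s at 1% efficiency for the 511 keV
# annihilation photons (~1.93 photons per 18F decay).
print(activity_bq(5000, 600, 0.01, 1.93))  # ~432 Bq
```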
Emerging interpretations of quantum mechanics and recent progress in quantum measurement
NASA Astrophysics Data System (ADS)
Clarke, M. L.
2014-01-01
The focus of this paper is to provide a brief discussion on the quantum measurement process, by reviewing select examples highlighting recent progress towards its understanding. The areas explored include an outline of the measurement problem, the standard interpretation of quantum mechanics, quantum to classical transition, types of measurement (including weak and projective measurements) and newly emerging interpretations of quantum mechanics (decoherence theory, objective reality, quantum Darwinism and quantum Bayesianism).
NASA Astrophysics Data System (ADS)
Lucido, J. M.
2013-12-01
Scientists in the fields of hydrology, geophysics, and climatology are increasingly using the vast quantity of publicly-available data to address broadly-scoped scientific questions. For example, researchers studying contamination of nearshore waters could use a combination of radar indicated precipitation, modeled water currents, and various sources of in-situ monitoring data to predict water quality near a beach. In discovering, gathering, visualizing and analyzing potentially useful data sets, data portals have become invaluable tools. The most effective data portals often aggregate distributed data sets seamlessly and allow multiple avenues for accessing the underlying data, facilitated by the use of open standards. Additionally, adequate metadata are necessary for attribution, documentation of provenance and relating data sets to one another. Metadata also enable thematic, geospatial and temporal indexing of data sets and entities. Furthermore, effective portals make use of common vocabularies for scientific methods, units of measure, geologic features, chemical, and biological constituents as they allow investigators to correctly interpret and utilize data from external sources. One application that employs these principles is the National Ground Water Monitoring Network (NGWMN) Data Portal (http://cida.usgs.gov/ngwmn), which makes groundwater data from distributed data providers available through a single, publicly accessible web application by mediating and aggregating native data exposed via web services on-the-fly into Open Geospatial Consortium (OGC) compliant service output. That output may be accessed either through the map-based user interface or through the aforementioned OGC web services. Furthermore, the Geo Data Portal (http://cida.usgs.gov/climate/gdp/), which is a system that provides users with data access, subsetting and geospatial processing of large and complex climate and land use data, exemplifies the application of International Standards Organization (ISO) metadata records to enhance data discovery for both human and machine interpretation. Lastly, the Water Quality Portal (http://www.waterqualitydata.us/) achieves interoperable dissemination of water quality data by referencing a vocabulary service for mapping constituents and methods between the USGS and USEPA. The NGWMN Data Portal, Geo Data Portal and Water Quality Portal are three examples of best practices when implementing data portals that provide distributed scientific data in an integrated, standards-based approach.
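Several of the portals described here expose data through OGC web services, for which a GetCapabilities request is the standard discovery entry point. The sketch below shows that request pattern in Python; the endpoint URL is a placeholder, not one of the services named above.

```python
import requests

# Minimal sketch of OGC service discovery: GetCapabilities returns an XML
# document describing the feature types and operations a WFS endpoint offers.
ENDPOINT = "https://example.org/geoserver/wfs"  # placeholder endpoint

resp = requests.get(
    ENDPOINT,
    params={"service": "WFS", "version": "2.0.0", "request": "GetCapabilities"},
    timeout=30,
)
resp.raise_for_status()
print(resp.headers.get("Content-Type"))  # typically an XML capabilities doc
```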
Caudle, Kelly E; Dunnenberger, Henry M; Freimuth, Robert R; Peterson, Josh F; Burlison, Jonathan D; Whirl-Carrillo, Michelle; Scott, Stuart A; Rehm, Heidi L; Williams, Marc S; Klein, Teri E; Relling, Mary V; Hoffman, James M
2017-02-01
Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached, with 90% of experts agreeing to the final sets of pharmacogenetic terms. The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2):215-223.
Anderson, Ericka L.; Li, Weizhong; Klitgord, Niels; Highlander, Sarah K.; Dayrit, Mark; Seguritan, Victor; Yooseph, Shibu; Biggs, William; Venter, J. Craig; Nelson, Karen E.; Jones, Marcus B.
2016-01-01
As reports on possible associations between microbes and the host increase in number, more meaningful interpretations of this information require an ability to compare data sets across studies. This is dependent upon standardization of workflows to ensure comparability both within and between studies. Here we propose the standard use of an alternate collection and stabilization method that would facilitate such comparisons. The DNA Genotek OMNIgene∙Gut Stool Microbiome Kit was compared to the currently accepted community standard of freezing to store human stool samples prior to whole genome sequencing (WGS) for microbiome studies. This stabilization and collection device allows for ambient temperature storage, automation, and ease of shipping/transfer of samples. The device permitted the same data reproducibility as with frozen samples, and yielded higher recovery of nucleic acids. Collection and stabilization of stool microbiome samples with the DNA Genotek collection device, combined with our extraction and WGS, provides a robust, reproducible workflow that enables standardized global collection, storage, and analysis of stool for microbiome studies. PMID:27558918
How people interpret healthy eating: contributions of qualitative research.
Bisogni, Carole A; Jastran, Margaret; Seligson, Marc; Thompson, Alyssa
2012-01-01
To identify how qualitative research has contributed to understanding the ways people in developed countries interpret healthy eating. Bibliographic database searches identified reports of qualitative, empirical studies published in English, peer-reviewed journals since 1995. Authors coded, discussed, recoded, and analyzed papers reporting qualitative research studies related to participants' interpretations of healthy eating. Studies emphasized a social constructionist approach, and most used focus groups and/or individual, in-depth interviews to collect data. Study participants explained healthy eating in terms of food, food components, food production methods, physical outcomes, psychosocial outcomes, standards, personal goals, and as requiring restriction. Researchers described meanings as specific to life stages and different life experiences, such as parenting and disease onset. Identity (self-concept), social settings, resources, food availability, and conflicting considerations were themes in participants' explanations for not eating according to their ideals for healthy eating. People interpret healthy eating in complex and diverse ways that reflect their personal, social, and cultural experiences, as well as their environments. Their meanings include but are broader than the food composition and health outcomes considered by scientists. The rich descriptions and concepts generated by qualitative research can help practitioners and researchers think beyond their own experiences and be open to audience members' perspectives as they seek to promote healthy ways of eating. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Caughlan, Samantha; Beach, Richard
2007-01-01
An analysis of English/language arts standards development in Wisconsin and Minnesota in the late 1990s and early 2000s shows a process of compromise between neoliberal and neoconservative factions involved in promoting and writing standards, with the voices of educators conspicuously absent. Interpretive and critical discourse analyses of…
ERIC Educational Resources Information Center
Bloxham, Sue
2012-01-01
This article considers the failure of theory to provide a workable model for academic standards in use. Examining the contrast between theoretical perspectives, it argues that there are four dimensions for which the academy has failed to provide an adequate theoretical account of standards: documented or tacit knowledge of standards; norm or…
Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...
2016-11-02
An approximate method to correct the motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects for between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data is treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared using DIII-D data in a variety of plasma conditions to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.
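The approximation described, keeping only the toroidal rotation contribution to the radial electric field, follows from the lowest-order radial force balance for an ion species. A standard textbook form (our notation, not necessarily the authors' exact expression) is:

```latex
E_r \;=\; \frac{1}{Z_i e\, n_i}\,\frac{\partial p_i}{\partial r}
\;-\; v_{\theta i}\, B_{\phi} \;+\; v_{\phi i}\, B_{\theta}
\;\;\approx\;\; v_{\phi i}\, B_{\theta},
```

where the pressure-gradient and poloidal-rotation terms are dropped when the toroidal rotation term dominates, so a measured toroidal velocity profile fixes the E_r correction applied to the MSE polarization angles.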
Is the phonological similarity effect in working memory due to proactive interference?
Baddeley, Alan D; Hitch, Graham J; Quinlan, Philip T
2018-04-12
Immediate serial recall of verbal material is highly sensitive to impairment attributable to phonological similarity. Although this has traditionally been interpreted as a within-sequence similarity effect, Engle (2007) proposed an interpretation based on interference from prior sequences, a phenomenon analogous to that found in the Peterson short-term memory (STM) task. We use the method of serial reconstruction to test this in an experiment contrasting the standard paradigm in which successive sequences are drawn from the same set of phonologically similar or dissimilar words and one in which the vowel sound on which similarity is based is switched from trial to trial, a manipulation analogous to that producing release from PI in the Peterson task. A substantial similarity effect occurs under both conditions although there is a small advantage from switching across similar sequences. There is, however, no evidence for the suggestion that the similarity effect will be absent from the very first sequence tested. Our results support the within-sequence similarity rather than a between-list PI interpretation. Reasons for the contrast with the classic Peterson short-term forgetting task are briefly discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Weighing Evidence "Steampunk" Style via the Meta-Analyser.
Bowden, Jack; Jackson, Chris
2016-10-01
The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
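To anchor the concepts the web application illustrates, here is a minimal sketch of inverse-variance (fixed-effect) pooling and Egger regression on made-up study summaries. These are the generic textbook formulas, not the Meta-Analyser's own code.

```python
import numpy as np

def fixed_effect(y, se):
    """Inverse-variance (fixed-effect) pooled estimate: each study is a
    weight on the beam, w_i = 1/se_i^2. Returns (estimate, pooled SE)."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / se ** 2
    return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def egger_intercept(y, se):
    """Egger regression: regress y_i/se_i on precision 1/se_i; an intercept
    far from zero suggests small-study (funnel plot) asymmetry."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    slope, intercept = np.polyfit(1.0 / se, y / se, 1)
    return intercept

# Hypothetical log odds ratios and standard errors from five studies.
y = [0.2, 0.35, 0.1, 0.6, 0.45]
se = [0.10, 0.15, 0.20, 0.30, 0.35]
print(fixed_effect(y, se), egger_intercept(y, se))
```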
Feline dental radiography and radiology: A primer.
Niemiec, Brook A
2014-11-01
Information crucial to the diagnosis and treatment of feline oral diseases can be ascertained using dental radiography, and the inclusion of this technology has been shown to be the best way to improve a dental practice. Becoming familiar with the techniques required for dental radiology and radiography can, therefore, be greatly beneficial. Novices to dental radiography may need some time to adjust and become comfortable with the techniques. If using dental radiographic film, the generally recommended 'E' or 'F' speeds may be frustrating at first, due to their more specific exposure and image development requirements. Although interpreting dental radiographs is similar to interpreting a standard bony radiograph, there are pathologic states that are unique to the oral cavity and several normal anatomic structures that may mimic pathologic changes. Determining which teeth have been imaged also requires a firm knowledge of oral anatomy as well as the architecture of dental films/digital systems. This article draws on a range of dental radiography and radiology resources, and the benefit of the author's own experience, to review the basics of taking and interpreting intraoral dental radiographs. A simplified method for positioning the tubehead is explained and classic examples of some common oral pathologies are provided. © ISFM and AAFP 2014.
Shallow Refraction and Rg Analysis at the Source Physics Experiment Site
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Carmichael, J. D.; Patton, H. J.; Snelson, C. M.; Coblentz, D. D.; Larmat, C. S.; Yang, X.
2014-12-01
We present analyses of the two-dimensional (2D) seismic structure beneath Source Physics Experiments (SPE) geophone lines that extended 100 to 2000 m from the source borehole with 100 m spacing. With seismic sources provided only at one end of the geophone lines, standard refraction profiling methods are unable to resolve the seismic velocity structures unambiguously. In previous work we have shown overall agreement between body-wave refraction modeling and Rg dispersion curves for the least complex of the five lines, Line 2, leading us to offer a simplified 1D model for this line. A more detailed inspection of Line 2 supports a 2D re-interpretation of the structure on this line. We observe variation along the length of the line, as evidenced by abrupt and consistent changes in the behavior of surface waves at higher frequencies. We interpret this as a manifestation of significant material or structural heterogeneity in the shallowest strata. This interpretation is consistent with P-wave and Rg attenuation observations. Planned additional sources, both at the distal ends of the profiles and intermittently within their lengths, will provide significant enhancement to our ability to resolve this complicated shallow structure.
Interobserver Variation in Response Evaluation Criteria in Solid Tumors 1.1.
Karmakar, Arunabha; Kumtakar, Apeksha; Sehgal, Himanshu; Kumar, Savith; Kalyanpur, Arjun
2018-06-19
Response Evaluation Criteria in Solid Tumors (RECIST 1.1) is the gold standard for imaging response evaluation in cancer trials. We sought to evaluate consistency of applying RECIST 1.1 between 2 conventionally trained radiologists, designated as A and B; identify reasons for variation; and reconcile these differences for future studies. The study was approved as an institutional quality check exercise. Since no identifiable patient data was collected or used, a waiver of informed consent was granted. Imaging case report forms of a concluded multicentric breast cancer trial were retrospectively reviewed. Cohen's kappa was used to rate interobserver agreement in Response Evaluation Data (target response, nontarget response, new lesions, overall response). Significant variations were reassessed by a senior radiologist to extrapolate reasons for disagreement. Methods to improve agreement were similarly ascertained. Sixty-one cases with a total of 82 data-pairs were evaluated (35 data-pairs in visit 5, 47 in visit 9). Both radiologists showed moderate agreement in target response (n = 82; κ = 0.477; 95% confidence interval [CI]: 0.314-0.640), nontarget response (n = 82; κ = 0.578; 95% CI: 0.213-0.944) and overall response evaluation in both visits (n = 82; κ = 0.510; 95% CI: 0.344-0.676). Further assessment demonstrated the "prevalence effect" of kappa in some cases, which led to underestimation of agreement. Percent agreement of overall response was 74.39%, while percent variation was 25.6%. Differences in interpreting RECIST 1.1 and in radiological image interpretation were the primary sources of variation. The commonest overall response was "Partial Response" (Rad A: 45/82; Rad B: 63/82). In spite of moderate interobserver agreement, qualitative interpretation differences in some cases increased interobserver variability. Protocols such as adjudication, which reduce easily avoidable inconsistencies, are or should be part of the Standard Operating Procedure in imaging institutions. Based on our findings, a standard checklist has been developed to help reduce the interpretation error-margin for future studies. Such checklists may improve interobserver agreement in the preadjudication phase, thereby improving quality of results and reducing the adjudication-per-case ratio. Improving data reliability when using RECIST 1.1 will reflect in better cancer clinical trial outcomes. A checklist can be of use to imaging centers to assess and improve their own processes. Copyright © 2018. Published by Elsevier Inc.
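Cohen's kappa, the agreement statistic used throughout this study, corrects raw percent agreement for chance. A minimal sketch with hypothetical reads (not the study's 82 data-pairs) follows; note how a dominant category raises expected agreement and deflates kappa, the "prevalence effect" the authors mention.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels
    (e.g., RECIST overall responses CR/PR/SD/PD) to the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n   # observed
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2  # by chance
    return (po - pe) / (1 - pe)

# Hypothetical reads by radiologists A and B.
a = ["PR", "PR", "SD", "PD", "PR", "SD", "PR", "CR"]
b = ["PR", "SD", "SD", "PD", "PR", "PR", "PR", "CR"]
print(round(cohens_kappa(a, b), 3))  # 0.75 raw agreement -> kappa ~0.62
```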
NASA Astrophysics Data System (ADS)
Laxton, Katherine E.
This dissertation takes a close look at how district-level instructional coaches support teachers in learning to shift their instructional practice related to the Next Generation Science Standards. This dissertation aims to address how re-structuring professional development to a job-embedded coaching model supports individual teacher learning of new reform-related instructional practice. Implementing the NGSS is a problem of supporting professional learning in a way that will enable educators to make fundamental changes to their teaching practice. However, there are few examples in the literature that explain how coaches interact with teachers to improve teacher learning of reform-related instructional practice. There are also few examples in the literature that specifically address how supporting teachers with extended professional learning opportunities, aligned with high-leverage practices, tools and curriculum, impacts how teachers make sense of new standards-based educational reforms and what manifests in classroom instruction. This dissertation proposes four conceptual categories of sense-making that influence how instructional coaches interpret the nature of reform, their roles in instructional improvement, and how to work with teachers. It is important to understand how coaches interpret reform because their interpretations may have unintended consequences related to privileging certain views about instruction, or establishing priorities for how to work with teachers. In this dissertation, we found that re-structuring professional development to a job-embedded coaching model supported teachers in learning new reform-related instructional practice. However, individual teacher interpretations of reform emerged and seemed to be linked to how instructional coaches supported teacher learning.
A method to assess social sustainability of capture fisheries: An application to a Norwegian trawler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veldhuizen, L.J.L., E-mail: linda.veldhuizen@wur.nl; Berentsen, P.B.M.; Bokkers, E.A.M.
Social sustainability assessment of capture fisheries is, both in terms of method development and measurement, not well developed. The objective of this study, therefore, was to develop a method consisting of indicators and rubrics (i.e. categories that articulate levels of performance) to assess social sustainability of capture fisheries. This method was applied to a Norwegian trawler that targets cod and haddock in the northeast Atlantic. Based on previous research, 13 social sustainability issues were selected. To measure the state of these issues, 17 process and outcome indicators were determined. To interpret indicator values, rubrics were developed for each indicator, using standards set by international conventions or data retrieved from national statistics, industry agreements or scientific publications that explore rubric scales. The indicators and rubrics were subsequently used in a social sustainability assessment of a Norwegian trawler. This assessment indicated that overall, social sustainability of this trawler is relatively high, with high rubric scores, for example, for worker safety, provisions aboard for the crew and companies' salary levels. The assessment also indicated that the trawler could improve on healthy working environment, product freshness and fish welfare during capture. This application demonstrated that our method provides insight into social sustainability at the level of the vessel and can be used to identify potential room for improvement. This method is also promising for social sustainability assessment of other capture fisheries. - Highlights: • A method was developed for social sustainability assessment of capture fisheries. • This method entailed determining outcome and process indicators for important issues. • To interpret indicator values, a rubric was developed for each indicator. • Use of this method gives insight into social sustainability and improvement options. • This method is promising for social sustainability assessment of capture fisheries.
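In code terms, a rubric is just an ordinal lookup from an indicator value to a performance level. The sketch below is a deliberately simplified illustration; the two indicators, their cut-offs, and the level counts are invented, not the study's 17 indicators or published rubrics.

```python
# Hypothetical rubrics: each indicator maps to ascending cut-offs that
# define ordinal performance levels 1..(len(cuts)+1).
RUBRICS = {
    "lost_time_injuries_per_year": ([5, 2, 0.5], "lower_is_better"),
    "salary_vs_national_average": ([0.8, 1.0, 1.2], "higher_is_better"),
}

def rubric_level(indicator: str, value: float) -> int:
    """Score an indicator value against its rubric cut-offs."""
    cuts, direction = RUBRICS[indicator]
    if direction == "lower_is_better":
        return sum(value <= c for c in cuts) + 1
    return sum(value >= c for c in cuts) + 1

print(rubric_level("lost_time_injuries_per_year", 1.0))   # level 3 of 4
print(rubric_level("salary_vs_national_average", 1.1))    # level 3 of 4
```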
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
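The surrogate objective is simple to state in code: group retained soundings by month, take the standard deviation of retrieved CO2 in each month, and average. The sketch below evaluates a one-feature threshold filter against that objective on synthetic data; the actual system searches filter parameters over several features with a genetic algorithm, which is not reproduced here.

```python
import numpy as np

def mean_monthly_stdev(co2, month, keep):
    """Surrogate objective: mean of per-month standard deviations of the
    retained CO2 values. Lowering it reduces scatter without truth data."""
    co2, month, keep = map(np.asarray, (co2, month, keep))
    stds = [co2[(month == m) & keep].std()
            for m in np.unique(month)
            if np.sum((month == m) & keep) > 1]
    return float(np.mean(stds)) if stds else np.inf

# Synthetic soundings: one quality feature; soundings with quality > 0.7
# carry extra scatter, mimicking problematic retrievals.
rng = np.random.default_rng(2)
n = 1000
month = rng.integers(1, 13, n)
quality = rng.uniform(0, 1, n)
co2 = 400 + rng.normal(0, 1, n) + np.where(quality > 0.7, rng.normal(0, 4, n), 0)

for thr in (1.01, 0.7):          # keep-all vs. filtered
    keep = quality <= thr
    print(thr, round(mean_monthly_stdev(co2, month, keep), 2))
```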
NASA Astrophysics Data System (ADS)
Brinker, R.; Cory, R. M.
2014-12-01
Next Generation Science Standards (NGSS) calls for students across grade levels to understand climate change and its impacts. To achieve this goal, the NSF-sponsored PolarTREC program paired an educator with scientists studying carbon cycling in the Arctic. The data collection and fieldwork performed by the team will form the basis of hands-on science learning in the classroom and will be incorporated into informal outreach sessions in the community. Over a 16-day period, the educator was stationed at Toolik Field Station in the High Arctic. (Toolik is run by the University of Alaska, Fairbanks, Institute of Arctic Biology.) She participated in a project that analyzed the effects of sunlight and microbial content on carbon production in Arctic watersheds. Data collected will be used to introduce the following NGSS standards into the middle-school science curriculum: 1) Construct a scientific explanation based on evidence. 2) Develop a model to explain cycling of water. 3) Develop and use a model to describe phenomena. 4) Analyze and interpret data. 5) A change in one system causes an effect in other systems. Lessons can be telescoped to meet the needs of classrooms in higher or lower grades. Through these activities, students will learn strategies to model an aspect of carbon cycling, interpret authentic scientific data collected in the field, and conduct geoscience research on carbon cycling. Community outreach sessions are also an effective method to introduce and discuss the importance of geoscience education. Informal discussions of firsthand experience gained during fieldwork can help communicate to a lay audience the biological, physical, and chemical aspects of the arctic carbon cycle and the impacts of climate change on these features. Outreach methods will also include novel use of online tools to directly connect audiences with scientists in an effective and time-efficient manner.
29 CFR 780.606 - Interpretation of term “agriculture.”
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...
29 CFR 780.606 - Interpretation of term “agriculture.”
Code of Federal Regulations, 2011 CFR
2011-07-01
... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...
29 CFR 780.606 - Interpretation of term “agriculture.”
Code of Federal Regulations, 2012 CFR
2012-07-01
... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...
29 CFR 780.606 - Interpretation of term “agriculture.”
Code of Federal Regulations, 2014 CFR
2014-07-01
... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...
29 CFR 780.606 - Interpretation of term “agriculture.”
Code of Federal Regulations, 2013 CFR
2013-07-01
... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...
29 CFR 570.103 - Comparison with wage and hour provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... REGULATIONS CHILD LABOR REGULATIONS, ORDERS AND STATEMENTS OF INTERPRETATION General Statements of Interpretation of the Child Labor Provisions of the Fair Labor Standards Act of 1938, as Amended General § 570.103 Comparison with wage and hour provisions. A comparison of the child labor provisions with the so...
29 CFR 570.103 - Comparison with wage and hour provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... REGULATIONS CHILD LABOR REGULATIONS, ORDERS AND STATEMENTS OF INTERPRETATION General Statements of Interpretation of the Child Labor Provisions of the Fair Labor Standards Act of 1938, as Amended General § 570.103 Comparison with wage and hour provisions. A comparison of the child labor provisions with the so...
29 CFR 570.103 - Comparison with wage and hour provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provisions, regardless of their age or sex. The fact therefore, that the employment of a particular child is... REGULATIONS CHILD LABOR REGULATIONS, ORDERS AND STATEMENTS OF INTERPRETATION General Statements of Interpretation of the Child Labor Provisions of the Fair Labor Standards Act of 1938, as Amended General § 570...
29 CFR 570.103 - Comparison with wage and hour provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... REGULATIONS CHILD LABOR REGULATIONS, ORDERS AND STATEMENTS OF INTERPRETATION General Statements of Interpretation of the Child Labor Provisions of the Fair Labor Standards Act of 1938, as Amended General § 570.103 Comparison with wage and hour provisions. A comparison of the child labor provisions with the so...
29 CFR 570.103 - Comparison with wage and hour provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... REGULATIONS CHILD LABOR REGULATIONS, ORDERS AND STATEMENTS OF INTERPRETATION General Statements of Interpretation of the Child Labor Provisions of the Fair Labor Standards Act of 1938, as Amended General § 570.103 Comparison with wage and hour provisions. A comparison of the child labor provisions with the so...
Evaluating Test Validity: Reprise and Progress
ERIC Educational Resources Information Center
Shepard, Lorrie A.
2016-01-01
The AERA, APA, NCME Standards define validity as "the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests". A century of disagreement about validity does not mean that there has not been substantial progress. This consensus definition brings together interpretations and use so that it…
Recommendations for standardized reporting of protein electrophoresis in Australia and New Zealand.
Tate, Jillian; Caldwell, Grahame; Daly, James; Gillis, David; Jenkins, Margaret; Jovanovich, Sue; Martin, Helen; Steele, Richard; Wienholt, Louise; Mollee, Peter
2012-05-01
Although protein electrophoresis of serum (SPEP) and urine (UPEP) specimens is a well-established laboratory technique, the reporting of results using this important method varies considerably between laboratories. The Australasian Association of Clinical Biochemists recognized a need to adopt a standardized approach to reporting SPEP and UPEP by clinical laboratories. A Working Party considered available data including published literature and clinical studies, together with expert opinion in order to establish optimal reporting practices. A position paper was produced, which was subsequently revised through a consensus process involving scientists and pathologists with expertise in the field throughout Australia and New Zealand. Recommendations for standardized reporting of protein electrophoresis have been produced. These cover analytical requirements: detection systems; serum protein and albumin quantification; fractionation into alpha-1, alpha-2, beta and gamma fractions; paraprotein quantification; urine Bence Jones protein quantification; paraprotein characterization; and laboratory performance, expertise and staffing. The recommendations also include general interpretive commenting and commenting for specimens with paraproteins and small bands together with illustrative examples of reports. Recommendations are provided for standardized reporting of protein electrophoresis in Australia and New Zealand. It is expected that such standardized reporting formats will reduce both variation between laboratories and the risk of misinterpretation of results.
Pearls and pitfalls in neural CGRP immunohistochemistry.
Warfvinge, Karin; Edvinsson, Lars
2013-06-01
This review outlines the pearls and pitfalls of calcitonin gene-related peptide (CGRP) immunohistochemistry of the brain. In 1985, CGRP was first described in cerebral arteries using immunohistochemistry. Since then, cerebral CGRP (and, using novel antibodies, its receptor components) has been widely scrutinized. Here, we describe the distribution of cerebral CGRP and pay special attention to the surprising reliability of results over time. Pitfalls include the fixation procedure, antibody clone and dilution, and interpretation of results. Standardized staining protocols and true quantitative methods are lacking. The use of computerized image analysis has led us to believe that our examination is objective; however, in the steps of performing such an analysis, we make subjective choices. By pointing out these pitfalls, we aim to further improve immunohistochemical quality. Having a clear picture of the tissue and cell morphology is a necessity. A primary morphological evaluation with, for example, hematoxylin-eosin helps to ensure that small changes are not missed and that background and artifactual changes, which may include vacuoles, pigments, and dark neurons, are not over-interpreted as compound-related changes. The antigen-antibody reaction appears simple and clear in theory, but many steps might go wrong. Remember that methods involving the antigen-antibody complex depend on handling and fixation of tissues or cells, antibody shipping and storing conditions, antibody titration, temperature and duration of antibody incubation, visualization of the antibody, and interpretation of the results. Optimize staining protocols for the material you are using.
Dweik, Raed A.; Boggs, Peter B.; Erzurum, Serpil C.; Irvin, Charles G.; Leigh, Margaret W.; Lundberg, Jon O.; Olin, Anna-Carin; Plummer, Alan L.; Taylor, D. Robin
2011-01-01
Background: Measurement of fractional nitric oxide (NO) concentration in exhaled breath (FeNO) is a quantitative, noninvasive, simple, and safe method of measuring airway inflammation that provides a complementary tool to other ways of assessing airways disease, including asthma. While FeNO measurement has been standardized, there is currently no reference guideline for practicing health care providers to guide them in the appropriate use and interpretation of FeNO in clinical practice. Purpose: To develop evidence-based guidelines for the interpretation of FeNO measurements that incorporate evidence that has accumulated over the past decade. Methods: We created a multidisciplinary committee with expertise in the clinical care, clinical science, or basic science of airway disease and/or NO. The committee identified important clinical questions, synthesized the evidence, and formulated recommendations. Recommendations were developed using pragmatic systematic reviews of the literature and the GRADE approach. Results: The evidence related to the use of FeNO measurements is reviewed and clinical practice recommendations are provided. Conclusions: In the setting of chronic inflammatory airway disease including asthma, conventional tests such as FEV1 reversibility or provocation tests are only indirectly associated with airway inflammation. FeNO offers added advantages for patient care including, but not limited to, (1) detection of eosinophilic airway inflammation, (2) determination of the likelihood of corticosteroid responsiveness, (3) monitoring of airway inflammation to determine the potential need for corticosteroids, and (4) unmasking of otherwise unsuspected nonadherence to corticosteroid therapy. PMID:21885636
Jurkojć, Jacek; Wodarski, Piotr; Michnik, Robert A; Bieniek, Andrzej; Gzik, Marek; Granek, Arkadiusz
2017-01-01
Indexing methods are very popular for determining the degree of disability associated with motor dysfunctions. Indexing methods dedicated to the upper limbs, however, are currently not widely used, probably because of difficulties in their interpretation. This work presents the calculation algorithm of the new SDDI index, and an attempt is made to determine the level of physical dysfunction, along with a description of its kind, based on interpretation of the calculated SDDI and PULMI indices. Twenty-three healthy people (10 women and 13 men), constituting the reference group, and a group of three people with mobility impairments participated in the tests. To examine the possibilities of the SDDI index, the participants repetitively performed two selected rehabilitation movements of the upper extremities. During the tests, kinematic values were recorded using the inertial motion analysis system MVN BIOMECH. The results were collected as waveforms of 9 anatomical angles in 4 joints of the upper extremities. SDDI and PULMI indices were then calculated for each person with mobility impairments. Finally, an analysis was performed to check which abnormalities in upper-extremity motion can influence the values of both indices, and an interpretation of the indices is presented. Joint analysis of the two indices provides information on whether the patient has correctly performed the set movement sequence and enables identification of possible irregularities in its performance.
Comments on the New International Criteria for Electrocardiographic Interpretation in Athletes.
Serratosa-Fernández, Luis; Pascual-Figal, Domingo; Masiá-Mondéjar, María Dolores; Sanz-de la Garza, María; Madaria-Marijuan, Zigor; Gimeno-Blanes, Juan Ramón; Adamuz, Carmen
2017-11-01
Sudden cardiac death is the most common medical cause of death during the practice of sports. Several structural and electrical cardiac conditions are associated with sudden cardiac death in athletes, most of them showing abnormal findings on resting electrocardiogram (ECG). However, because of the similarity between some ECG findings associated with physiological adaptations to exercise training and those of certain cardiac conditions, ECG interpretation in athletes is often challenging. Other factors related to ECG findings are race, age, sex, sports discipline, training intensity, and athletic background. Specific training and experience in ECG interpretation in athletes are therefore necessary. Since 2005, when the first recommendations of the European Society of Cardiology were published, growing scientific evidence has increased the specificity of ECG standards, thus lowering the false-positive rate while maintaining sensitivity. New international consensus guidelines have recently been published on ECG interpretation in athletes, which are the result of consensus among a group of experts in cardiology and sports medicine who gathered for the first time in February 2015 in Seattle, in the United States. The document is an important milestone because, in addition to updating the standards for ECG interpretation, it includes recommendations on appropriate assessment of athletes with abnormal ECG findings. The present article reports and discusses the most novel and relevant aspects of the new standards. Nevertheless, a complete reading of the original consensus document is highly recommended. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Informatics in radiology: an information model of the DICOM standard.
Kahn, Charles E; Langlotz, Curtis P; Channin, David S; Rubin, Daniel L
2011-01-01
The Digital Imaging and Communications in Medicine (DICOM) Standard is a key foundational technology for radiology. However, its complexity creates challenges for information system developers because the current DICOM specification requires human interpretation and is subject to nonstandard implementation. To address this problem, a formally sound and computationally accessible information model of the DICOM Standard was created. The DICOM Standard was modeled as an ontology, a machine-accessible and human-interpretable representation that may be viewed and manipulated by information-modeling tools. The DICOM Ontology includes a real-world model and a DICOM entity model. The real-world model describes patients, studies, images, and other features of medical imaging. The DICOM entity model describes connections between real-world entities and the classes that model the corresponding DICOM information entities. The DICOM Ontology was created to support the Cancer Biomedical Informatics Grid (caBIG) initiative, and it may be extended to encompass the entire DICOM Standard and serve as a foundation of medical imaging systems for research and patient care. RSNA, 2010
Magness, Scott T.; Puthoff, Brent J.; Crissey, Mary Ann; Dunn, James; Henning, Susan J.; Houchen, Courtney; Kaddis, John S.; Kuo, Calvin J.; Li, Linheng; Lynch, John; Martin, Martin G.; May, Randal; Niland, Joyce C.; Olack, Barbara; Qian, Dajun; Stelzner, Matthias; Swain, John R.; Wang, Fengchao; Wang, Jiafang; Wang, Xinwei; Yan, Kelley; Yu, Jian
2013-01-01
Fluorescence-activated cell sorting (FACS) is an essential tool for studies requiring isolation of distinct intestinal epithelial cell populations. Inconsistent or absent reporting of the critical parameters associated with FACS methodologies has complicated interpretation, comparison, and reproduction of important findings. To address this problem, a comprehensive multicenter study was designed to develop guidelines that limit experimental and data reporting variability and provide a foundation for accurate comparison of data between studies. Common methodologies and data reporting protocols for tissue dissociation, cell yield, cell viability, FACS, and postsort purity were established. Seven centers tested the standardized methods by FACS-isolating a specific crypt-based epithelial population (EpCAM+/CD44+) from murine small intestine. Genetic biomarkers for stem/progenitor (Lgr5 and Atoh1) and differentiated cell lineages (lysozyme, mucin 2, chromogranin A, and sucrase isomaltase) were interrogated in target and control populations to assess intra- and intercenter variability. Wilcoxon's rank sum test on gene expression levels showed limited intracenter variability between biological replicates. Principal component analysis demonstrated significant intercenter reproducibility among four centers. Analysis of data collected by standardized cell isolation methods and data reporting requirements readily identified methodological problems, indicating that standard reporting parameters facilitate post hoc error identification. These results indicate that FACS isolation of target intestinal epithelial populations, despite its complexity, can be highly reproducible between biological replicates and different institutions through adherence to common cell isolation methods and FACS gating strategies. This study can be considered a foundation for continued method development and a starting point for investigators who are developing cell isolation expertise to study the physiology and pathophysiology of the intestinal epithelium. PMID:23928185
Tracking the hyoid bone in videofluoroscopic swallowing studies
NASA Astrophysics Data System (ADS)
Kellen, Patrick M.; Becker, Darci; Reinhardt, Joseph M.; van Daele, Douglas
2008-03-01
Difficulty swallowing, or dysphagia, has become a growing problem. Swallowing complications can lead to malnutrition, dehydration, respiratory infection, and even death. The current gold standard for analyzing and diagnosing dysphagia is the videofluoroscopic barium swallow study. In these studies, a fluoroscope is used to image the patient ingesting barium solutions of different volumes and viscosities. The hyoid bone anchors many key muscles involved in swallowing and plays a key role in the process. Abnormal hyoid bone motion during a swallow can indicate swallowing dysfunction. Currently in clinical settings, hyoid bone motion is assessed qualitatively, which can be subject to intra-rater and inter-rater bias. This paper presents a semi-automatic method for tracking the hyoid bone that makes quantitative analysis feasible. The user defines a template of the hyoid on one frame, and this template is tracked across subsequent frames. The matching phase is optimized by predicting the position of the template based on kinematics. An expert speech pathologist marked the position of the hyoid on each frame of ten studies to serve as the gold standard. Bland-Altman analysis at a 95% confidence interval showed a bias of 0.0±0.08 pixels in x and −0.08±0.09 pixels in y between the manually defined gold standard and the proposed method. The average Pearson's correlation between the gold standard and the proposed method was 0.987 in x and 0.980 in y. This paper also presents a method for automatically establishing a patient-centric coordinate system for the interpretation of hyoid motion. This coordinate system corrects for upper-body patient motion during the study and identifies superior-inferior and anterior-posterior motion components. These tools make quantitative hyoid motion analysis feasible in clinical and research settings.
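The tracking loop described above (user-defined template, kinematically predicted search window, matching within that window) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the constant-velocity prediction, window size, and choice of normalized cross-correlation are all assumptions.

    import cv2
    import numpy as np

    def track_template(frames, bbox0, search_half=20):
        """Track a user-defined template across frames, predicting each
        new search window from a constant-velocity kinematic model.
        frames: list of 2D uint8 arrays; bbox0: (x, y, w, h) on frame 0."""
        x, y, w, h = bbox0
        template = frames[0][y:y + h, x:x + w]
        positions = [(x, y)]
        vx = vy = 0.0  # velocity estimate in pixels per frame
        for frame in frames[1:]:
            # Predict the next position from kinematics, then search a
            # window centered on the prediction (clipped to the image).
            px, py = int(positions[-1][0] + vx), int(positions[-1][1] + vy)
            x0, y0 = max(px - search_half, 0), max(py - search_half, 0)
            x1 = min(px + w + search_half, frame.shape[1])
            y1 = min(py + h + search_half, frame.shape[0])
            window = frame[y0:y1, x0:x1]
            # Normalized cross-correlation is robust to brightness changes.
            score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, best = cv2.minMaxLoc(score)
            nx, ny = x0 + best[0], y0 + best[1]
            vx, vy = nx - positions[-1][0], ny - positions[-1][1]
            positions.append((nx, ny))
        return positions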
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
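As a sketch of how such a likelihood can be computed directly from the Pearson correlation matrix, the per-cluster form below follows the Giada-Marsili expression as commonly quoted; readers should confirm the conventions against the original paper before relying on it.

    import numpy as np

    def gm_log_likelihood(C, labels):
        """Giada-Marsili style log-likelihood of a cluster assignment,
        computed from the Pearson correlation matrix C. labels is a 1D
        integer array with labels[i] = cluster of object i. This is a
        sketch of the commonly quoted form, not the authors' code."""
        total = 0.0
        for s in np.unique(labels):
            idx = np.where(labels == s)[0]
            n_s = len(idx)
            if n_s < 2:
                continue  # singleton clusters contribute nothing
            c_s = C[np.ix_(idx, idx)].sum()  # internal correlation mass
            total += 0.5 * (np.log(n_s / c_s)
                            + (n_s - 1) * np.log((n_s**2 - n_s)
                                                 / (n_s**2 - c_s)))
        return total

Maximizing this quantity over label assignments (for example by greedy merging or simulated annealing) yields the parameter-free cluster structures the abstract describes.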
Novel method for quantitative ANA measurement using near-infrared imaging.
Peterson, Lisa K; Wells, Daniel; Shaw, Laura; Velez, Maria-Gabriela; Harbeck, Ronald; Dragone, Leonard L
2009-09-30
Antinuclear antibodies (ANA) have been detected in patients with systemic rheumatic diseases and are used in the screening and/or diagnosis of autoimmunity in patients as well as mouse models of systemic autoimmunity. Indirect immunofluorescence (IIF) on HEp-2 cells is the gold standard for ANA screening. However, its usefulness is limited in diagnosis, prognosis and monitoring of disease activity due to the lack of standardization in performing the technique, subjectivity in interpreting the results and the fact that it is only semi-quantitative. Various immunological techniques have been developed in an attempt to improve upon the method to quantify ANA, including enzyme-linked immunosorbent assays (ELISAs), line immunoassays (LIAs), multiplexed bead immunoassays and IIF on substrates other than HEp-2 cells. Yet IIF on HEp-2 cells remains the most common screening method for ANA. In this study, we describe a simple quantitative method to detect ANA which combines IIF on HEp-2 coated slides with analysis using a near-infrared imaging (NII) system. Using NII to determine ANA titer, 86.5% (32 of 37) of the titers for human patient samples were within 2 dilutions of those determined by IIF, which is the acceptable range for proficiency testing. Combining an initial screening for nuclear staining using microscopy with titration by NII resulted in 97.3% (36 of 37) of the titers detected to be within two dilutions of those determined by IIF. The NII method for quantitative ANA measurements using serum from both patients and mice with autoimmunity provides a fast, relatively simple, objective, sensitive and reproducible assay, which could easily be standardized for comparison between laboratories.
Manual of praying mantis morphology, nomenclature, and practices (Insecta, Mantodea)
Brannoch, Sydney K.; Wieland, Frank; Rivera, Julio; Klass, Klaus-Dieter; Béthoux, Olivier; Svenson, Gavin J.
2017-01-01
This study provides a comprehensive review of historical morphological nomenclature used for praying mantis (Mantodea) morphology, which includes citations, original use, and assignment of homology. All referenced structures across historical works correspond to a proposed standard term for use in all subsequent works pertaining to praying mantis morphology and systematics. The new standards are presented with a verbal description in a glossary as well as indicated on illustrations and images. In the vast majority of cases, originally used terms were adopted as the new standard. In addition, historical morphological topographical homology conjectures are considered with discussion on modern interpretations. A new standardized formulation to present foreleg femoral and tibial spines is proposed for clarity based on previous works. In addition, descriptions for methods of collection, curation, genital complex dissection, and labeling are provided to aid in the proper preservation and storage of specimens for longevity and ease of study. Due to the lack of consistent linear morphometric measurement practices in the literature, we have proposed a series of measurements for taxonomic and morphological research. These measurements are presented with figures to provide visual aids with homologous landmarks to ensure compatibility and comparability across the Order. Finally, our proposed method of pinning mantises is presented with a photographical example as well as a video tutorial available at http://mantodearesearch.com. PMID:29200926
NASA Technical Reports Server (NTRS)
Nguyen, Huy H.; Martin, Michael A.
2004-01-01
The two most common approaches used to formulate thermodynamic properties of pure substances are fundamental (or characteristic) equations of state (Helmholtz and Gibbs functions) and a piecemeal approach described in Adebiyi and Russell (1992). This paper neither presents a different method of formulating thermodynamic properties of pure substances nor validates the aforementioned approaches. Rather, its purpose is to present a method for generating property tables from existing property packages and a method for facilitating accurate interpretation of fluid thermodynamic property data from those tables. There are two parts to this paper. The first part shows how efficient and usable property tables were generated, with the minimum number of data points, using an aerospace industry standard property package. The second part describes an innovative interpolation technique developed to properly obtain thermodynamic properties near the saturated liquid and saturated vapor lines.
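The abstract does not reproduce the interpolation scheme itself, but the pitfall it addresses can be illustrated: naive interpolation across the saturation dome blends single-phase and two-phase states. The sketch below, with entirely hypothetical table names and layout, simply detects and refuses that case.

    import numpy as np

    def interp_property(T, T_grid, prop_grid, T_sat=None):
        """Linear interpolation of a thermodynamic property in temperature.
        If a saturation temperature T_sat falls between the bracketing grid
        points, interpolation across it would blend liquid and vapor states,
        so we refuse and signal that a saturation-aware lookup is needed.
        All names and the table layout here are illustrative only."""
        i = np.searchsorted(T_grid, T) - 1
        i = int(np.clip(i, 0, len(T_grid) - 2))
        if T_sat is not None and T_grid[i] < T_sat < T_grid[i + 1]:
            raise ValueError("bracketing interval straddles the saturation "
                             "line; interpolate within a single phase instead")
        w = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
        return (1 - w) * prop_grid[i] + w * prop_grid[i + 1]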
Harada, Kazuki; Usui, Masaru; Asai, Tetsuo
2014-01-01
In this study, susceptibilities of Pasteurella multocida, Mannheimia haemolytica and Actinobacillus pleuropneumoniae to enrofloxacin and orbifloxacin were tested using an agar diffusion method with commercial disks and a broth microdilution method. Good correlation between the 2 methods for enrofloxacin and orbifloxacin was observed for P. multocida (r = −0.743 and −0.818, respectively), M. haemolytica (r = −0.739 and −0.800, respectively) and A. pleuropneumoniae (r = −0.785 and −0.809, respectively). Based on the Clinical and Laboratory Standards Institute interpretive criteria for enrofloxacin, high-level categorical agreement between the 2 methods was found for P. multocida (97.9%), M. haemolytica (93.8%) and A. pleuropneumoniae (92.0%). Our findings indicate that the tested commercial disks can be applied for susceptibility testing of veterinary respiratory pathogens. PMID:25008965
ERIC Educational Resources Information Center
Gong, Brian; Marion, Scott
2006-01-01
Dealing with flexibility--or its converse, the extent of standardization--is fundamental to alignment, assessment design, and interpretation of results in fully inclusive assessment systems. Highly standardized tests make it easier to compare (performances, students, and schools) across time and to common standards because certain conditions are…
Optimal control of a harmonic oscillator: Economic interpretations
NASA Astrophysics Data System (ADS)
Janová, Jitka; Hampel, David
2013-10-01
Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. The standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for a harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.
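An illustrative formulation, not necessarily the authors' exact model, is to steer a Phillips-type cycle x(t) toward equilibrium at minimal control cost:

\[
\min_{u}\; J=\int_{0}^{T}\bigl(x^{2}(t)+\lambda u^{2}(t)\bigr)\,dt,
\qquad \ddot{x}(t)+\omega^{2}x(t)=u(t).
\]

Writing the dynamics as a first-order system (x_1 = x, x_2 = \dot{x}), the Hamiltonian and the optimality condition are

\[
H=x_{1}^{2}+\lambda u^{2}+p_{1}x_{2}+p_{2}\bigl(u-\omega^{2}x_{1}\bigr),
\qquad \frac{\partial H}{\partial u}=0 \;\Rightarrow\; u^{*}=-\frac{p_{2}}{2\lambda},
\]

where the costates p_1, p_2 carry the shadow-price reading that underlies the economic interpretation of Lagrange multipliers discussed in the paper.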
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in New York were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical and Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
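The error taxonomy used in such comparisons (very major, major, minor) is mechanical and easy to state in code. The sketch below uses placeholder breakpoints, not the CLSI 2010 values applied in the study; substitute the current published breakpoints before any real use.

    # Breakpoints below are placeholders, not the study's CLSI 2010 values.
    BREAKPOINTS = {"S": 1.0, "R": 4.0}  # MIC <= S: susceptible; >= R: resistant

    def categorize(mic, bp=BREAKPOINTS):
        if mic <= bp["S"]:
            return "S"
        return "R" if mic >= bp["R"] else "I"

    def error_rates(reference_mics, test_mics):
        """Tally categorical errors of a test method against the broth
        microdilution reference, using the usual definitions: very major =
        test S / reference R; major = test R / reference S; minor = any
        mismatch involving the intermediate category."""
        counts = {"agree": 0, "minor": 0, "major": 0, "very_major": 0}
        for ref, test in zip(reference_mics, test_mics):
            r, t = categorize(ref), categorize(test)
            if r == t:
                counts["agree"] += 1
            elif r == "R" and t == "S":
                counts["very_major"] += 1
            elif r == "S" and t == "R":
                counts["major"] += 1
            else:
                counts["minor"] += 1
        n = len(reference_mics)
        return {k: v / n for k, v in counts.items()}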
Control vocabulary software designed for CMIP6
NASA Astrophysics Data System (ADS)
Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.
2016-12-01
The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, assuring adherence to a baseline conformity across the output from all CMIP6 models, which will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
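PrePARE itself is not reproduced here, but the core idea of validating required attributes against controlled vocabularies can be shown with a toy checker. The attribute names and allowed values below are an illustrative subset, not the actual CMIP6 vocabularies or PrePARE's data model.

    # Toy controlled-vocabulary check in the spirit of PrePARE;
    # not PrePARE's actual code, rules, or vocabularies.
    REQUIRED = {
        # attribute name -> allowed values (None = any non-empty string)
        "mip_era": {"CMIP6"},
        "frequency": {"mon", "day", "6hrPt"},  # illustrative subset
        "experiment_id": None,
    }

    def check_attributes(attrs):
        """Return a list of problems found in a dict of global attributes."""
        problems = []
        for name, allowed in REQUIRED.items():
            value = attrs.get(name)
            if not value:
                problems.append(f"missing required attribute: {name}")
            elif allowed is not None and value not in allowed:
                problems.append(f"{name}={value!r} not in controlled vocabulary")
        return problems

    # e.g. check_attributes({"mip_era": "CMIP6", "frequency": "fortnightly"})
    # -> ["frequency='fortnightly' not in controlled vocabulary",
    #     "missing required attribute: experiment_id"]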
Medical cost analysis: application to colorectal cancer data from the SEER Medicare database.
Bang, Heejung
2005-10-01
Incompleteness is a key feature of most survival data. Numerous well-established statistical methodologies and algorithms exist for analyzing life or failure time data. However, induced censorship invalidates the use of those standard analytic tools for some survival-type data such as medical costs. In this paper, some valid methods currently available for analyzing censored medical cost data are reviewed. Some cautionary findings under different assumptions are illustrated through application to medical costs from colorectal cancer patients. Cost analysis should be suitably planned and carefully interpreted under various meaningful scenarios, even with judiciously selected statistical methods. This approach would be greatly helpful to policy makers who seek to prioritize health care expenditures and to assess the elements of resource use.
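One widely cited estimator in this literature, due to Bang and Tsiatis, reweights completely observed costs by the probability of remaining uncensored:

\[
\hat{\mu} \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{\Delta_i\, M_i}{\hat{K}(T_i)},
\]

where M_i is the accumulated cost of subject i, Delta_i indicates a complete (uncensored) observation, T_i is the follow-up time, and K-hat is the Kaplan-Meier estimate of the probability of being uncensored at T_i. The naive average over complete cases is biased because the induced censoring is informative on the cost scale; the inverse-probability weights restore consistency under standard assumptions.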
Computational intelligence approaches for pattern discovery in biological systems.
Fogel, Gary B
2008-07-01
Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.
NASA Astrophysics Data System (ADS)
Costin, Ovidiu; Dunne, Gerald V.
2018-01-01
We show how to convert divergent series, which typically occur in many applications in physics, into rapidly convergent inverse factorial series. This can be interpreted physically as a novel resummation of perturbative series. Being convergent, these new series allow rigorous extrapolation from an asymptotic region with a large parameter, to the opposite region where the parameter is small. We illustrate the method with various physical examples, and discuss how these convergent series relate to standard methods such as Borel summation, and also how they incorporate the physical Stokes phenomenon. We comment on the relation of these results to Dyson’s physical argument for the divergence of perturbation theory. This approach also leads naturally to a wide class of relations between bosonic and fermionic partition functions, and Klein-Gordon and Dirac determinants.
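Schematically, and offered as a generic template rather than the authors' precise construction, the conversion trades a factorially divergent power series for a convergent inverse factorial series:

\[
f(z) \;\sim\; \sum_{k=0}^{\infty} \frac{a_k}{z^{k+1}}
\quad\longrightarrow\quad
f(z) \;=\; \sum_{k=0}^{\infty} \frac{b_k\, k!}{z\,(z+1)\cdots(z+k)},
\]

with the right-hand series converging in a half-plane Re z > z_0. It is this convergence that permits the rigorous extrapolation from the large-parameter asymptotic region to the small-parameter region described above.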
NASA Astrophysics Data System (ADS)
Martínez, K.; Mendoza, J. A.; Colberg-Larsen, J.; Ploug, C.
2009-05-01
Near-surface geophysics applications are gaining widespread use in geotechnical and engineering projects. Over the last decades, developments in data acquisition, processing tools and interpretation methods have optimized survey time, reduced logistics costs and increased the reliability of seismic survey results. However, wide-scale use of geophysical methods in urban environments continues to face great challenges due to the multiple noise sources and obstacles inherent to cities. A seismic pre-investigation was conducted to assess the feasibility of using seismic methods to obtain information about subsurface layer locations and media properties in Copenhagen. Such information is needed for hydrological, geotechnical and groundwater modeling related to the Cityringen underground metro project. The pre-investigation objectives were to validate the methods in an urban environment and to optimize field survey procedures, processing and interpretation methods in urban settings in the event of further seismic investigations. The geological setting at the survey site is characterized by several interlaced layers of clay, till and sand. These layers are unevenly distributed throughout the city and present varying thickness, overlying several different unit types of limestone at shallow depths. Specific objectives were to map the bedrock surface, ascertain a structural geological framework and investigate bedrock media properties relevant to the construction design. The seismic test consisted of combined reflection and refraction analyses of a profile line along an approximately 1400 m section in the northern part of Copenhagen, along the projected metro city line. Data acquisition used a 192-channel array, receiver groups with 5 m spacing and a Vibroseis source at 10 m spacing. Complementarily, six vertical seismic profiles (VSP) were performed at boreholes located along the line. The reflection data underwent standard interpretation, and the refraction analysis included wavepath Eikonal traveltime tomography. The reflection results indicate the presence of horizontal reflectors with discontinuities, likely related to structural features in the deeper-lying chalk layers. The refraction interpretation allowed identification of the upper limestone surface, which is relevant to map for tunneling design. The VSP provided additional information on limestone quality and correlation data for improved refraction interpretation. Overall, the pre-investigation demonstrated that it is possible to image the limestone surface using the seismic method. The satisfactory results led to the implementation of a 15 km survey planned for spring 2009, combining reflection, refraction, walkaway-VSP and electrical resistivity tomography (ERT). The authors wish to acknowledge Metroselskabet I/S for permission to present the preliminary results and the Cityringen Joint Venture partners Arup and Systra.
Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane
2018-02-01
Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization, and there is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. Lower-limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7±6.2 years, BMI 22.7±3.3 kg/m²), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method requiring no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
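The normalization itself is a simple ratio of linear envelopes. A minimal sketch follows; the filter orders and cutoffs are common textbook choices, not the settings of this study, and the helper names are illustrative.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def linear_envelope(emg, fs, lowcut=20.0, highcut=450.0, lp=6.0):
        """Band-pass, rectify and low-pass an EMG signal (filter settings
        are generic defaults, not those of the cited study)."""
        b, a = butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], "bandpass")
        filtered = filtfilt(b, a, emg - np.mean(emg))
        b, a = butter(4, lp / (fs / 2), "lowpass")
        return filtfilt(b, a, np.abs(filtered))

    def normalize_gait_emg(gait_emg, ref_emg, fs):
        """Express the gait envelope as a fraction of the mean envelope
        amplitude recorded during a reference task, whether an MVIC or an
        isoMMT3-like hold against gravity."""
        ref_level = np.mean(linear_envelope(ref_emg, fs))
        return linear_envelope(gait_emg, fs) / ref_level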
Interpretive criteria of antimicrobial disk susceptibility tests with flomoxef.
Grimm, H
1991-01-01
A total of 320 recently isolated pathogens, 20 strains from each of 16 species, were investigated using Mueller-Hinton agar and both DIN and NCCLS standards. The geometric mean of the agar dilution MICs of flomoxef was 0.44 mg/l for Staphylococcus aureus, 0.05 mg/l (Klebsiella oxytoca) to 12.6 mg/l (Enterobacter spp.) for Enterobacteriaceae, 33.1 mg/l for Acinetobacter anitratus, 64 mg/l for Enterococcus faecalis, and more than 256 mg/l for Pseudomonas aeruginosa. For disk susceptibility testing of flomoxef, a 30 micrograms disk loading and the following interpretation of inhibition zones using the DIN method were recommended: resistant, up to 22 mm (corresponding to MICs of 8 mg/l or more); moderately susceptible, 23 to 29 mm (corresponding to MICs from 1 to 4 mg/l); and susceptible, 30 mm or more (corresponding to MICs of 0.5 mg/l or less). The respective values for the NCCLS method, using the American high MIC breakpoints, are: resistant, up to 14 mm (corresponding to MICs of 32 mg/l or more); moderately susceptible, 15 to 17 mm (corresponding to MICs of 16 mg/l); and susceptible, 18 mm or more (corresponding to MICs of 8 mg/l or less).
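The DIN zone criteria quoted above translate directly into a lookup; a small sketch (assuming the 30 micrograms disk loading given in the abstract):

    def interpret_flomoxef_din(zone_mm):
        """Categorize a flomoxef inhibition zone (30 micrograms disk, DIN
        method) using the breakpoints proposed in the abstract above."""
        if zone_mm >= 30:
            return "susceptible"             # MIC 0.5 mg/l or less
        if zone_mm >= 23:
            return "moderately susceptible"  # MIC 1 to 4 mg/l
        return "resistant"                   # MIC 8 mg/l or more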
Vasikaran, Samuel
2008-08-01
* Clinical laboratories should be able to offer interpretation of the results they produce.
* At a minimum, contact details for interpretative advice should be available on laboratory reports. Interpretative comments may be verbal, or written and printed.
* Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferable to an inappropriate or dangerous comment.
* Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
* Standard tied comments ("canned" comments) can have some limited use. Individualised narrative comments may be particularly useful for tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available.
* Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
* Audit of comments and continued professional development of the personnel providing them are important for quality assurance.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zong, Jianfang; Guo, Huiting; Sun, Liang; Liu, Mei
2018-05-01
Standardization is playing an increasingly important role in reducing greenhouse gas emissions and in adapting to climate change, especially in the three key aspects of greenhouse gas emission management: measurement, reporting and verification. Standardization has become one of the most important means of mitigating global climate change. The Standardization Administration of China (SAC) has taken many productive measures to actively promote standardization work addressing climate change. In April 2014, SAC officially approved the establishment of the National Carbon Emission Management Standardization Technical Committee. In November 2015, SAC officially issued the first 11 national standards on carbon management, including ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... and Secondary National Ambient Air Quality Standards for Ozone I Appendix I to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General. This appendix explains the data... secondary ambient air quality standards for ozone specified in § 50.10 are met at an ambient ozone air...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Secondary National Ambient Air Quality Standards for Ozone P Appendix P to Part 50 Protection of Environment... Air Quality Standards for Ozone 1. General (a) This appendix explains the data handling conventions... air quality standards for ozone (O3) specified in § 50.15 are met at an ambient O3 air quality...
Code of Federal Regulations, 2012 CFR
2012-07-01
... and Secondary National Ambient Air Quality Standards for Ozone I Appendix I to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General. This appendix explains the data... secondary ambient air quality standards for ozone specified in § 50.10 are met at an ambient ozone air...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Secondary National Ambient Air Quality Standards for Ozone P Appendix P to Part 50 Protection of Environment... Air Quality Standards for Ozone 1. General (a) This appendix explains the data handling conventions... air quality standards for ozone (O3) specified in § 50.15 are met at an ambient O3 air quality...
Code of Federal Regulations, 2011 CFR
2011-07-01
... and Secondary National Ambient Air Quality Standards for Ozone I Appendix I to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General. This appendix explains the data... secondary ambient air quality standards for ozone specified in § 50.10 are met at an ambient ozone air...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Secondary National Ambient Air Quality Standards for Ozone P Appendix P to Part 50 Protection of Environment... Air Quality Standards for Ozone 1. General (a) This appendix explains the data handling conventions... air quality standards for ozone (O3) specified in § 50.15 are met at an ambient O3 air quality...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Secondary National Ambient Air Quality Standards for Ozone P Appendix P to Part 50 Protection of Environment... Air Quality Standards for Ozone 1. General (a) This appendix explains the data handling conventions... air quality standards for ozone (O3) specified in § 50.15 are met at an ambient O3 air quality...
Code of Federal Regulations, 2010 CFR
2010-07-01
... and Secondary National Ambient Air Quality Standards for Ozone I Appendix I to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General. This appendix explains the data... secondary ambient air quality standards for ozone specified in § 50.10 are met at an ambient ozone air...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Secondary National Ambient Air Quality Standards for Ozone P Appendix P to Part 50 Protection of Environment... Air Quality Standards for Ozone 1. General (a) This appendix explains the data handling conventions... air quality standards for ozone (O3) specified in § 50.15 are met at an ambient O3 air quality...
Code of Federal Regulations, 2014 CFR
2014-07-01
... and Secondary National Ambient Air Quality Standards for Ozone I Appendix I to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General. This appendix explains the data... secondary ambient air quality standards for ozone specified in § 50.10 are met at an ambient ozone air...
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
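For a fitted standard curve y = f(c; theta) inverted to predict a concentration from a measured intensity, the delta-method form of propagation of error is, schematically,

\[
\operatorname{Var}(\hat{c}) \;\approx\;
\left(\frac{\partial c}{\partial y}\right)^{2} \operatorname{Var}(y)
\;+\;
\nabla_{\theta} c^{\top}\, \operatorname{Cov}(\hat{\theta})\, \nabla_{\theta} c,
\]

where the first term propagates spot-intensity noise through the inverted curve and the second carries the uncertainty of the fitted curve parameters. The exact variance model and curve family used in the paper may differ; this is only the generic shape of the calculation.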
Rezaie, Ali; Buresi, Michelle; Lembo, Anthony; Lin, Henry; McCallum, Richard; Rao, Satish; Schmulson, Max; Valdovinos, Miguel; Zakko, Salam; Pimentel, Mark
2017-01-01
Objectives: Breath tests (BTs) are important for the diagnosis of carbohydrate maldigestion syndromes and small intestinal bacterial overgrowth (SIBO). However, standardization is lacking regarding indications for testing, test methodology and interpretation of results. A consensus meeting of experts was convened to develop guidelines for clinicians and research. Methods: Pre-meeting survey questions encompassing five domains: indications, preparation, performance, interpretation of results, and knowledge gaps, were sent to 17 clinician-scientists, and 10 attended a live meeting. Using an evidence-based approach, 28 statements were finalized and voted on anonymously by a working group of specialists. Results: Consensus was reached on 26 statements encompassing all five domains. Consensus doses for the lactulose, glucose, fructose and lactose BTs were 10, 75, 25 and 25 g, respectively. Glucose and lactulose BTs remain the least invasive alternatives for diagnosing SIBO. BT is useful in the diagnosis of carbohydrate maldigestion, methane-associated constipation, and evaluation of bloating/gas, but not in the assessment of oro-cecal transit. A rise in hydrogen of ≥20 p.p.m. by 90 min during glucose or lactulose BT for SIBO was considered positive. Methane levels ≥10 p.p.m. were considered methane-positive. SIBO should be excluded prior to BT for carbohydrate malabsorption to avoid false positives. A rise in hydrogen of ≥20 p.p.m. from baseline during BT was considered positive for maldigestion. Conclusions: BT is a useful, inexpensive, simple and safe diagnostic test in the evaluation of common gastroenterology problems. These consensus statements should help to standardize the indications, preparation, performance and interpretation of BT in clinical practice and research. PMID:28323273
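The positivity rules in these statements are explicit enough to encode directly. The sketch below applies only the quoted thresholds and is a reading aid, not part of the consensus document; it assumes the first sample is the baseline.

    def interpret_breath_test(times_min, h2_ppm, ch4_ppm):
        """Apply the consensus thresholds quoted above: a rise in H2 of
        >= 20 ppm over baseline within 90 min is positive (SIBO protocol),
        and any CH4 reading >= 10 ppm is methane-positive."""
        baseline = h2_ppm[0]
        h2_positive = any(h - baseline >= 20
                          for t, h in zip(times_min, h2_ppm) if t <= 90)
        ch4_positive = max(ch4_ppm) >= 10
        return {"hydrogen_positive": h2_positive,
                "methane_positive": ch4_positive}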
NASA Technical Reports Server (NTRS)
Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.;
2002-01-01
BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation and therefore improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance the accuracy and efficiency of an inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously as a bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader with the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for the fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use of harmonic imaging reduces the frequency of nondiagnostic wall segments.
How to use… lymph node biopsy in paediatrics.
Farndon, Sarah; Behjati, Sam; Jonas, Nico; Messahel, Boo
2017-10-01
Lymphadenopathy is a common finding in children. It often causes anxiety among parents and healthcare professionals because it can be a sign of cancer. There is limited high-quality evidence to guide clinicians as to which children should be referred for lymph node biopsy. The gold standard method for evaluating lymphadenopathy of unknown cause is an excision biopsy. In this Interpretation, we discuss the use of lymph node biopsy in children. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Sergievskii, V. V.; Rudakov, A. M.
2006-11-01
An analysis of the accepted methods for calculating the activity coefficients for the components of binary aqueous solutions was performed. It was demonstrated that the use of the osmotic coefficients in auxiliary calculations decreases the accuracy of estimates of the activity coefficients. The possibility of calculating the activity coefficient of the solute from the concentration dependence of the water activity was examined. It was established that, for weak electrolytes, the interpretation of data on heterogeneous equilibria within the framework of the standard assumption that the dissociation is complete encounters serious difficulties.
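The standard route from water activity to the solute activity coefficient runs through the Gibbs-Duhem equation; for an electrolyte of molality m the familiar chain is

\[
x_1\, d\ln a_1 + x_2\, d\ln a_2 = 0,
\qquad
\phi = -\frac{\ln a_w}{\nu\, m\, M_w},
\qquad
\ln \gamma_{\pm} = \phi - 1 + \int_0^{m} \frac{\phi - 1}{m'}\, dm',
\]

which makes explicit how errors in the measured water activity, or in an intermediate osmotic coefficient, propagate into the estimated activity coefficient, which is the accuracy issue raised above.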
Next-generation genotype imputation service and methods.
Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian
2016-10-01
Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.
GTCBio's Precision Medicine Conference (July 7-8, 2016 - Boston, Massachusetts, USA).
Cole, P
2016-09-01
GTCBio's Precision Medicine Conference met this year to outline the many steps forward that precision medicine and individualized genomics has made and the challenges it still faces in technological, modeling, and standards development, interoperability and compatibility advancements, and methods of economic and societal adoption. The conference was split into four sections, 'Overcoming Challenges in the Commercialization of Precision Medicine', 'Implementation of Precision Medicine: Strategies & Technologies', 'Integrating & Interpreting Personal Genomics, Big Data, & Bioinformatics' and 'Incentivizing Precision Medicine: Regulation & Reimbursement', with this report focusing on the final two subjects. Copyright 2016 Prous Science, S.A.U. or its licensors. All rights reserved.
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator); Lock, B. F.
1976-01-01
The author has identified the following significant results. The scope of the preprocessing techniques was restricted to standard material from the EROS Data Center, accompanied by some enlarging procedures and use of the diazo process. Investigation showed that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure is presented, together with a method for determining the minimum number of sample points needed to test the results of any interpretation.
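A common formula for the minimum number of sample points in interpretation accuracy testing, offered here as context rather than as the report's own derivation, is

\[
n \;=\; \frac{Z^{2}\, p\,(1-p)}{E^{2}},
\]

where p is the expected interpretation accuracy, E the tolerable half-width of the estimate, and Z the normal deviate for the chosen confidence level.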
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
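In the single-mediator setup that underlies much of this work, the mediated effect is the product of the program-to-mediator path a and the mediator-to-outcome path b, with a first-order (Sobel) standard error:

\[
\widehat{ab} = \hat{a}\,\hat{b},
\qquad
s_{\hat{a}\hat{b}} = \sqrt{\hat{a}^{2} s_{\hat{b}}^{2} + \hat{b}^{2} s_{\hat{a}}^{2}},
\]

which supports the confidence-interval reporting advocated above; more accurate intervals, for example via resampling, are discussed in this literature.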
CCSSM Challenge: Graphing Ratio and Proportion
ERIC Educational Resources Information Center
Kastberg, Signe E.; D'Ambrosio, Beatriz S.; Lynch-Davis, Kathleen; Mintos, Alexia; Krawczyk, Kathryn
2013-01-01
A renewed emphasis was placed on ratio and proportional reasoning in the middle grades in the Common Core State Standards for Mathematics (CCSSM). The expectation for students includes the ability to not only compute and then compare and interpret the results of computations in context but also interpret ratios and proportions as they are…
ERIC Educational Resources Information Center
Sklar, Jeffrey C.; Zwick, Rebecca
2009-01-01
Proper interpretation of standardized test scores is a crucial skill for K-12 teachers and school personnel; however, many do not have sufficient knowledge of measurement concepts to appropriately interpret and communicate test results. In a recent four-year project funded by the National Science Foundation, three web-based instructional…
ERIC Educational Resources Information Center
O'Dell, Robin S.
2012-01-01
There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…
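The balance-point interpretation rests on the identity

\[
\sum_{i=1}^{n} \bigl(x_i - \bar{x}\bigr) = 0,
\]

that is, deviations below the mean exactly offset deviations above it, which is the property lost when instruction reduces the mean to the standard algorithm.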
29 CFR 471.21 - Who will make rulings and interpretations under Executive Order 13496 and this part?
Code of Federal Regulations, 2010 CFR
2010-07-01
...-MANAGEMENT STANDARDS, DEPARTMENT OF LABOR NOTIFICATION OF EMPLOYEE RIGHTS UNDER FEDERAL LABOR LAWS OBLIGATIONS OF FEDERAL CONTRACTORS AND SUBCONTRACTORS; NOTIFICATION OF EMPLOYEE RIGHTS UNDER FEDERAL LABOR... 29 Labor 2 2010-07-01 2010-07-01 false Who will make rulings and interpretations under Executive...
ERIC Educational Resources Information Center
Walker, William S., III
2016-01-01
In this research, I investigated teachers' interpretations of the goals of professional development and factors that contributed to enacted instructional practices. A multiple-case study design was used to examine the interpretations of four high school teachers participating in a year-long professional development program with a standards-based…
Trail Orienteering: An Effective Way To Practice Map Interpretation.
ERIC Educational Resources Information Center
Horizons, 1999
1999-01-01
Discusses a type of orienteering developed in Great Britain to allow people with physical disabilities to compete on equal terms. Sites are viewed from a wheelchair-accessible main route. The main skill is interpreting the maps at each site, not finding the sites. Describes differences from standard orienteering, how sites work, and essential…
Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael; Elemento, Olivier
2017-05-01
This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A Representational State Transfer (REST) application programming interface (API) was implemented to query the PMKB programmatically. At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB's interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of automated reporting pipelines for clinical cancer genomics via an API. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
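Programmatic access of the kind described might look like the sketch below. The endpoint path and parameter names are hypothetical placeholders; the actual PMKB API routes and response schema should be taken from the project's documentation.

    import requests

    # Hypothetical route and parameter; consult the PMKB documentation
    # for the real REST API before using this.
    BASE = "https://pmkb.weill.cornell.edu/api"

    def fetch_interpretations(gene):
        """Query a PMKB-style REST API for interpretations of a gene."""
        resp = requests.get(f"{BASE}/interpretations",
                            params={"gene": gene}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    # e.g. fetch_interpretations("EGFR")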
Miller, Cynthia L; Coopey, Suzanne B; Rafferty, Elizabeth; Gadd, Michele; Smith, Barbara L; Specht, Michelle C
2016-02-01
Standard specimen mammography (SSM) is performed in the radiology department after wire-localized excision of non-palpable breast lesions to confirm the presence of the target and evaluate margins. Alternatively, intra-operative specimen mammography (ISM) allows surgeons to view images in the operating room (OR). We conducted a randomized study comparing ISM and SSM. Women undergoing wire-localized excision for breast malignancy or imaging abnormality were randomized to SSM or ISM. For SSM, the specimen was transported to the radiology department for imaging and interpretation. For ISM, the specimen was imaged in the OR for interpretation by the surgeon and then sent for SSM. Interpretation time ran from the specimen leaving the OR until radiologist interpretation for SSM, and from placement in the ISM device until surgeon interpretation for ISM. Procedure and interpretation times were compared. Concordance between ISM and SSM for target and margins was evaluated. 72 patients were randomized, 36 to ISM and 36 to SSM. Median procedure times were similar: 48.5 (17-138) min for ISM and 54 (17-140) min for SSM (p = 0.72), likely because specimens in both groups traveled to radiology for SSM. Median interpretation time was significantly shorter with ISM: 1 (0.5-2.0) min versus 9 (4-16) min for ISM and SSM, respectively (p < 0.0001). Among specimens with both ISM and SSM, concordance was 100% (35/35) for the target and 93% (14/15) for margins. In this randomized trial, use of ISM compared with SSM significantly reduced interpretation times while accurately identifying the target. This could reduce operative costs through shorter OR times with the use of ISM.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing a curved planar reformation (CPR) technique combined with an optimal path-finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path-finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard improved from 32.9 ± 10.2% with the MHES method to 9.9 ± 7.9% with the MHES-CROP method. The accuracy of vessel segmentation improved significantly (p < 0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated using Bland-Altman plots. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
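For readers unfamiliar with Hessian-eigenvalue vessel enhancement, the Python sketch below shows a single-scale version of the analysis the MHES step relies on. The response function is a generic, simplified bright-tube measure under assumed conventions (bright vessels on a dark background); it is not the authors' exact filter.

```python
# Sketch of single-scale Hessian-eigenvalue vessel enhancement.
# Generic tubularity measure, not the published MHES response function.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(volume, sigma):
    """Eigenvalues (ascending) of the Gaussian-scale Hessian at each voxel."""
    H = np.zeros(volume.shape + (volume.ndim, volume.ndim))
    for i in range(volume.ndim):
        for j in range(volume.ndim):
            order = [0] * volume.ndim
            order[i] += 1
            order[j] += 1  # second derivative along axes i and j
            H[..., i, j] = ndimage.gaussian_filter(volume, sigma, order=order)
    return np.linalg.eigvalsh(H)

def tubularity(volume, sigma):
    """Bright-tube response: two strongly negative eigenvalues, one near zero."""
    lam = hessian_eigenvalues(volume, sigma)
    lam1, lam2 = lam[..., 0], lam[..., 1]  # the two most negative eigenvalues
    return np.where((lam1 < 0) & (lam2 < 0), np.abs(lam2), 0.0)

# A multiscale response, as in MHES, is typically the voxel-wise maximum over
# scales, e.g.: np.max([tubularity(vol, s) for s in (1.0, 2.0, 4.0)], axis=0)
```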
40 CFR 121.30 - Review and advice.
Code of Federal Regulations, 2011 CFR
2011-07-01
... determinations, definitions and interpretations with respect to the meaning and content of water quality... the application of all applicable water quality standards in particular cases and in specific... by dischargers with the conditions and requirements of applicable water quality standards. In cases...
40 CFR 121.30 - Review and advice.
Code of Federal Regulations, 2010 CFR
2010-07-01
... determinations, definitions and interpretations with respect to the meaning and content of water quality... the application of all applicable water quality standards in particular cases and in specific... by dischargers with the conditions and requirements of applicable water quality standards. In cases...
7 CFR 868.306 - Milling requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Principles Governing Application of Standards § 868.306 Milling requirements. The degree of milling for milled rice; i.e., “hard... interpretive line samples for such rice. [67 FR 61250, Sept. 30, 2002] ...
7 CFR 868.306 - Milling requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Principles Governing Application of Standards § 868.306 Milling requirements. The degree of milling for milled rice; i.e., “hard... interpretive line samples for such rice. [67 FR 61250, Sept. 30, 2002] ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... and emissions input data preparation, model performance evaluation, interpreting modeling results, and... standard based on ambient ozone monitoring data for the 2006- 2008 period. EPA has not yet acted on this... ppm) and years thereafter were at or below the standard. See EPA Air Quality System (AQS) data...
How Standardized Tests Shape--and Limit--Student Learning. A Policy Research Brief
ERIC Educational Resources Information Center
National Council of Teachers of English, 2014
2014-01-01
The term "standardized" tests is often heard along with "high-stakes." Standardized tests are administered, scored, and interpreted in a consistent way, so that the performances of large groups of students can be compared. They are not in themselves high-stakes, but they are often used for high-stakes purposes such as…
Mobini, Sirous; Mackintosh, Bundy; Illingworth, Jo; Gega, Lina; Langdon, Peter; Hoppitt, Laura
2014-06-01
This study examines the effects of a single session of Cognitive Bias Modification to induce positive Interpretative bias (CBM-I), using standard or explicit instructions, and an analogue of a computer-administered CBT (c-CBT) program on modifying cognitive biases and social anxiety. A sample of 76 volunteers with social anxiety attended a research site. At both pre- and post-test, participants completed two computer-administered tests of interpretative and attentional biases and a self-report measure of social anxiety. Participants in the training conditions completed a single session of either standard or explicit CBM-I positive training and a c-CBT program. Participants in the Control (no training) condition completed a neutral CBM-I task that matched the active CBM-I intervention in format and duration but did not encourage positive disambiguation of socially ambiguous or threatening scenarios. Participants in both CBM-I programs (either standard or explicit instructions) and the c-CBT condition exhibited more positive interpretations of ambiguous social scenarios at post-test and one-week follow-up compared to the Control condition. Moreover, the results showed that CBM-I and c-CBT, to some extent, changed negative attention biases in a positive direction. Furthermore, both CBM-I training conditions and c-CBT reduced social anxiety symptoms at one-week follow-up. This study used a single session of CBM-I training; a multi-session intervention might produce more durable positive changes. A computerised single session of CBM-I and an analogue c-CBT program reduced negative interpretative biases and social anxiety. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Davies, Louise; Donnelly, Kyla Z; Goodman, Daisy J; Ogrinc, Greg
2016-01-01
Background The Standards for Quality Improvement Reporting Excellence (SQUIRE) Guideline was published in 2008 (SQUIRE 1.0) and was the first publication guideline specifically designed to advance the science of healthcare improvement. Advances in the discipline of improvement prompted us to revise it. We adopted a novel approach to the revision by asking end-users to ‘road test’ a draft version of SQUIRE 2.0. The aim was to determine whether they understood and implemented the guidelines as intended by the developers. Methods Forty-four participants were assigned a manuscript section (ie, introduction, methods, results, discussion) and asked to use the draft Guidelines to guide their writing process. They indicated the text that corresponded to each SQUIRE item used and submitted it along with a confidential survey. The survey examined usability of the Guidelines using Likert-scaled questions and participants’ interpretation of key concepts in SQUIRE using open-ended questions. On the submitted text, we evaluated concordance between participants’ item usage/interpretation and the developers’ intended application. For the survey, the Likert-scaled responses were summarised using descriptive statistics and the open-ended questions were analysed by content analysis. Results Consistent with the SQUIRE Guidelines’ recommendation that not every item be included, less than one-third (n=14) of participants applied every item in their section in full. Of the 85 instances when an item was partially used or was omitted, only 7 (8.2%) of these instances were due to participants not understanding the item. Usage of Guideline items was highest for items most similar to standard scientific reporting (ie, ‘Specific aim of the improvement’ (introduction), ‘Description of the improvement’ (methods) and ‘Implications for further studies’ (discussion)) and lowest (<20% of the time) for those unique to healthcare improvement (ie, ‘Assessment methods for context factors that contributed to success or failure’ and ‘Costs and strategic trade-offs’). Items unique to healthcare improvement, specifically ‘Evolution of the improvement’, ‘Context elements that influenced the improvement’, ‘The logic on which the improvement was based’, ‘Process and outcome measures’, demonstrated poor concordance between participants’ interpretation and developers’ intended application. Conclusions User testing of a draft version of SQUIRE 2.0 revealed which items have poor concordance between developer intent and author usage, which will inform final editing of the Guideline and development of supporting supplementary materials. It also identified the items that require special attention when teaching about scholarly writing in healthcare improvement. PMID:26263916
[Carl Friedrich von Weizsäcker and the interpretations of quantum theory].
Stöckler, Manfred
2014-01-01
What are 'interpretations' of quantum theory? What are the differences between Carl Friedrich von Weizsäcker's approach and contemporary views? The various interpretations of quantum mechanics give diverse answers to questions concerning the relation between the measuring process and standard time development, the embedding of quantum objects in space ('wave-particle dualism'), and the reference of state vectors. Does the wave function describe states in the real world, or does it refer to our knowledge about nature? First, some relevant conceptions in Weizsäcker's book The Structure of Physics (Der Aufbau der Physik, 1985) are introduced. In a second step I point out why his approach is no longer present in contemporary debates. One reason is that Weizsäcker was mainly influenced by classical philosophy (Plato, Aristotle, Kant). He had little regard for the philosophy of science developed in the spirit of logical empiricism, and so he lost interest in engaging with Anglo-Saxon philosophy of quantum mechanics. In particular, his interpretation of probability and his analysis of the collapse of the state function as a change in knowledge differ from contemporary standard views. In recent years, however, epistemic interpretations of quantum mechanics have been proposed that share some of Weizsäcker's intuitions.
Gorbett, Gregory E; Morris, Sarah M; Meacham, Brian J; Wood, Christopher B
2015-01-01
A new method to characterize the degree of fire damage to gypsum wallboard is introduced, implemented, and tested to determine the efficacy of its application among novices. The method was evaluated by comparing novices' degree-of-fire-damage assessments with and without the method. Thirty-nine "novice" raters assessed damage to a gypsum wallboard surface, completing 66 ratings, first without the method and then again using it. Inter-rater reliability was evaluated for ratings made without and with the method. For novice fire investigators rating degree of damage without the aid of the method, ICC(1,2) = 0.277 with 95% CI (0.211, 0.365); with the method, ICC(2,1) = 0.593 with 95% CI (0.509, 0.684). Results indicate that the raters were more reliable in their analysis of the degree of fire damage when using the method, a finding that supports the use of standardized processes to decrease variability in data collection and interpretation. © 2014 American Academy of Forensic Sciences.
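The reliability statistic reported with the method, ICC(2,1), is the Shrout-Fleiss two-way random-effects, single-rater, absolute-agreement coefficient. A minimal sketch of its computation from an n-targets-by-k-raters rating matrix:

```python
# Sketch: ICC(2,1) per Shrout & Fleiss (two-way random effects,
# single rater, absolute agreement).
import numpy as np

def icc_2_1(ratings):
    """ratings: n_targets x k_raters array of scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-target means
    col_means = Y.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-targets mean square
    msc = ss_cols / (k - 1)               # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```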
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Vasikaran, Samuel
2008-01-01
Summary: Clinical laboratories should be able to offer interpretation of the results they produce. At a minimum, contact details for interpretative advice should be available on laboratory reports. Interpretative comments may be verbal or written and printed. Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferable to an inappropriate or dangerous comment. Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available. Standard tied comments ("canned" comments) can have some limited use. Individualised narrative comments may be particularly useful in the case of tests that are new, complex, or unfamiliar to the requesting clinicians, and where clinical details are available. Interpretative commenting should only be provided by appropriately trained and credentialed personnel. Audit of comments and continued professional development of the personnel providing them are important for quality assurance. PMID:18852867
Polarimetric imaging of biological tissues based on the indices of polarimetric purity.
Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C; Moreno, Ignacio; Campos, Juan
2018-04-01
We highlight the value of using the indices of polarimetric purity (IPPs) for the inspection of biological tissues. The IPPs were recently proposed in the literature and provide a further synthesis of a sample's depolarizing properties. Compared with standard polarimetric images of biological samples, IPP-based images yield larger image contrast for some biological structures and a further physical interpretation of the depolarizing mechanisms inherent to the samples. In addition, unlike other methods, their calculation does not require advanced algebraic operations (as is the case for polar decompositions), and they yield three easily implemented indicators. We also propose a pseudo-colored encoding of the IPP information that leads to an improved visualization of samples. This last technique opens the possibility of tailored adjustment of tissue contrast by using customized pseudo-colored images. The potential of the IPP approach is experimentally highlighted throughout the manuscript by studying three different ex vivo samples. A significant image-contrast enhancement is obtained by using the IPP-based methods compared to standard polarimetric images. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
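For reference, the IPPs are typically computed from the ordered eigenvalues of the 4x4 coherency matrix associated with a measured Mueller matrix. The sketch below follows the Gil formulation as we understand it; verify the definitions against the original IPP literature before reuse.

```python
# Sketch: indices of polarimetric purity (P1, P2, P3) from the eigenvalues
# of the 4x4 Hermitian coherency matrix H of a Mueller matrix.
# Definitions assumed from the Gil formulation; verify before reuse.
import numpy as np

def ipps(H):
    """Return (P1, P2, P3) for a 4x4 Hermitian coherency matrix H."""
    lam = np.linalg.eigvalsh(H)[::-1]   # descending: l1 >= l2 >= l3 >= l4
    lam = lam / lam.sum()               # normalize by the trace
    p1 = lam[0] - lam[1]
    p2 = lam[0] + lam[1] - 2 * lam[2]
    p3 = lam[0] + lam[1] + lam[2] - 3 * lam[3]
    return p1, p2, p3
```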
Similarity-based modeling in large-scale prediction of drug-drug interactions.
Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P
2014-09-01
Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented.
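The core similarity idea can be sketched compactly: a candidate pair is scored by how closely each of its drugs resembles one member of an established interacting pair. The fingerprint representation and known-DDI set below are placeholders, and the product-of-similarities score is one plausible choice, not necessarily the authors' exact measure.

```python
# Sketch of similarity-based DDI candidate scoring.
# Fingerprints (bit sets) and the known-DDI list are placeholders.
def tanimoto(fp1, fp2):
    """Tanimoto similarity between two fingerprint bit sets."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

def ddi_score(a, b, fingerprints, known_ddis):
    """Score that (a, b) behaves like an established interacting pair."""
    score = 0.0
    for x, y in known_ddis:
        score = max(score,
                    tanimoto(fingerprints[a], fingerprints[x]) *
                    tanimoto(fingerprints[b], fingerprints[y]))
    return score
```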
Basu, Gaurab; Costa, Vonessa Phillips; Jain, Priyank
2017-03-01
Access to language services is a required and foundational component of care for patients with limited English proficiency (LEP). National standards for medical interpreting set by the US Department of Health and Human Services and by the National Council on Interpreting in Health Care establish the role of qualified medical interpreters in the provision of care in the United States. In the vignette, the attending physician infringes upon the patient's right to appropriate language services and renders unethical care. Clinicians are obliged to create systems and a culture that ensure quality care for patients with LEP. © 2017 American Medical Association. All Rights Reserved.
Platelet Function Analyzed by Light Transmission Aggregometry.
Hvas, Anne-Mette; Favaloro, Emmanuel J
2017-01-01
Analysis of platelet function is widely used for diagnostic work-up in patients with increased bleeding tendency. During the last decades, platelet function testing has also been introduced for evaluation of antiplatelet therapy, but this is still recommended for research purposes only. Platelet function can also be assessed for hyper-aggregability, but this is less often evaluated. Light transmission aggregometry (LTA) was introduced in the early 1960s and has since been considered the gold standard. This optical detection system is based on changes in turbidity, measured as a change in light transmission that is proportional to the extent of platelet aggregation induced by addition of an agonist. LTA is a flexible method, as different agonists can be used in varying concentrations, but performance of the test requires large blood volumes and experienced laboratory technicians, as well as specialized personnel to interpret the results. In the present chapter, a protocol for LTA is described, including all steps from pre-analytical preparation to interpretation of results.
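The optical principle lends itself to a simple two-point calibration: transmission through unstimulated platelet-rich plasma (PRP) defines 0% aggregation and transmission through platelet-poor plasma (PPP) defines 100%. A minimal sketch, assuming exactly this calibration:

```python
# Sketch of the LTA calibration principle: a transmission reading is mapped
# onto the PRP (0%) to PPP (100%) scale. Instrument specifics omitted.
def percent_aggregation(t_sample, t_prp, t_ppp):
    """Map a light-transmission reading onto the PRP..PPP scale."""
    return 100.0 * (t_sample - t_prp) / (t_ppp - t_prp)

# A reading halfway between the PRP and PPP baselines is 50% aggregation.
print(percent_aggregation(t_sample=55.0, t_prp=10.0, t_ppp=100.0))  # 50.0
```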
Experimental Protein Structure Verification by Scoring with a Single, Unassigned NMR Spectrum.
Courtney, Joseph M; Ye, Qing; Nesbitt, Anna E; Tang, Ming; Tuttle, Marcus D; Watt, Eric D; Nuzzio, Kristin M; Sperling, Lindsay J; Comellas, Gemma; Peterson, Joseph R; Morrissey, James H; Rienstra, Chad M
2015-10-06
Standard methods for de novo protein structure determination by nuclear magnetic resonance (NMR) require time-consuming data collection and interpretation efforts. Here we present a qualitatively distinct and novel approach, called Comparative, Objective Measurement of Protein Architectures by Scoring Shifts (COMPASS), which identifies the best structures from a set of structural models by numerical comparison with a single, unassigned 2D (13)C-(13)C NMR spectrum containing backbone and side-chain aliphatic signals. COMPASS does not require resonance assignments. It is particularly well suited for interpretation of magic-angle spinning solid-state NMR spectra, but also applicable to solution NMR spectra. We demonstrate COMPASS with experimental data from four proteins--GB1, ubiquitin, DsbA, and the extracellular domain of human tissue factor--and with reconstructed spectra from 11 additional proteins. For all these proteins, with molecular mass up to 25 kDa, COMPASS distinguished the correct fold, most often within 1.5 Å root-mean-square deviation of the reference structure. Copyright © 2015 Elsevier Ltd. All rights reserved.
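The scoring idea behind COMPASS can be caricatured in a few lines: predict cross-peak positions for each candidate structural model and reward models whose predicted peaks land on observed spectral intensity. The peak predictor and the spectrum grid below are placeholder assumptions, not the published scoring function.

```python
# Sketch of structure scoring against an unassigned 2D spectrum.
# predicted_peaks would come from a chemical-shift predictor (placeholder);
# spectrum is a 2D intensity array on a shared ppm grid (assumed square).
import numpy as np

def score_model(predicted_peaks, spectrum, ppm_axis):
    """Mean spectrum intensity at predicted (w1, w2) cross-peak positions."""
    total = 0.0
    for w1, w2 in predicted_peaks:
        i = np.abs(ppm_axis - w1).argmin()   # nearest grid point, axis 1
        j = np.abs(ppm_axis - w2).argmin()   # nearest grid point, axis 2
        total += spectrum[i, j]
    return total / max(len(predicted_peaks), 1)

# The best-scoring model of the candidate set is then selected.
```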
Olsen, Lisa D.
2003-01-01
One of the roles of the U.S. Geological Survey (USGS) is to provide reliable water data and unbiased water science needed to describe and understand the Nation's water resources. This fact sheet describes selected techniques that were used by the USGS to collect, transmit, evaluate, or interpret data, in support of investigations that describe the quantity and quality of water resources in Maryland (MD), Delaware (DE), and the District of Columbia (D.C.). These hydrologic investigations generally were performed in cooperation with universities, research centers, and other Federal, State, and local Government agencies. The applications of hydrologic science and research that were selected for this fact sheet were used or tested in the MD-DE-DC District from 2001 through 2003, and include established methods, new approaches, and preliminary research. The USGS usually relies on standard methods or protocols when conducting water-resources research. Occasionally, traditional methods must be modified to address difficult environmental questions or challenging sampling conditions. Technologies developed for other purposes can sometimes be successfully applied to the collection or dissemination of water-resources data. The USGS is continually exploring new ways to collect, transmit, evaluate, and interpret data. The following applications of hydrologic science and research illustrate a few of the recent advances made by scientists working for and with the USGS.
Land, Sally; Cunningham, Philip; Zhou, Jialun; Frost, Kevin; Katzenstein, David; Kantor, Rami; Chen, Yi-Ming Arthur; Oka, Shinichi; DeLong, Allison; Sayer, David; Smith, Jeffery; Dax, Elizabeth M.; Law, Matthew
2010-01-01
The TREAT Asia (Therapeutics, Research, Education, and AIDS Training in Asia) Network is building capacity for Human Immunodeficiency Virus Type-1 (HIV-1) drug resistance testing in the region. The objective of the TREAT Asia Quality Assessment Scheme, designated TAQAS, is to standardize HIV-1 genotypic resistance testing (HIV genotyping) among laboratories to permit rigorous comparison of results from different clinics and testing centres. TAQAS has evaluated three panels of HIV-1-positive plasma from clinical material or low-passage culture supernatant for up to 10 Asian laboratories. Laboratory participants used their standard protocols to perform HIV genotyping. Assessment was in comparison to a target genotype derived from all participants and the reference laboratory's result. Agreement between most participants at the edited nucleotide sequence level was high (>98%). Most participants performed to the reference laboratory standard in the detection of drug resistance mutations (DRMs). However, there was variation in the detection of nucleotide mixtures (0–83%), which correlated significantly with the detection of DRMs (p < 0.01). Interpretation of antiretroviral resistance showed ~70% agreement among participants when different interpretation systems were used, but >90% agreement with a common interpretation system within the Stanford University Drug Resistance Database. Using the principles of external quality assessment and a reference laboratory, TAQAS has demonstrated high-quality HIV genotyping results from Asian laboratories. PMID:19490972
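As a small illustration of the sequence-level comparisons described above, the sketch below computes percent agreement between two aligned nucleotide sequences and counts IUPAC mixture codes; the actual TAQAS assessment procedure is more involved than this.

```python
# Sketch: pairwise agreement between two aligned sequences, with IUPAC
# ambiguity (mixture) codes in the test sequence counted separately.
MIXTURE_CODES = set("RYSWKMBDHVN")  # IUPAC codes for base mixtures

def agreement(seq_test, seq_target):
    """Return (% identical positions, number of mixture calls in seq_test)."""
    assert len(seq_test) == len(seq_target), "sequences must be aligned"
    matches = sum(a == b for a, b in zip(seq_test, seq_target))
    mixtures = sum(base in MIXTURE_CODES for base in seq_test)
    return 100.0 * matches / len(seq_test), mixtures

print(agreement("ACGTRACGT", "ACGTAACGT"))  # (88.9% identity, 1 mixture)
```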
Teachers' Interpretations of Exit Exam Scores and College Readiness
ERIC Educational Resources Information Center
McIntosh, Shelby
2013-01-01
This study examined teachers' interpretations of Virginia's high school exit exam policy through the teachers' responses to a survey. The survey was administered to teachers from one school district in Northern Virginia. The teachers selected for the survey taught a subject in which students must pass a Standards of Learning (SOL) test in order to…
Simplifying Nanowire Hall Effect Characterization by Using a Three-Probe Device Design.
Hultin, Olof; Otnes, Gaute; Samuelson, Lars; Storm, Kristian
2017-02-08
Electrical characterization of nanowires is a time-consuming and challenging task due to the complexity of single nanowire device fabrication and the difficulty in interpreting the measurements. We present a method to measure Hall effect in nanowires using a three-probe device that is simpler to fabricate than previous four-probe nanowire Hall devices and allows characterization of nanowires with smaller diameter. Extraction of charge carrier concentration from the three-probe measurements using an analytical model is discussed and compared to simulations. The validity of the method is experimentally verified by a comparison between results obtained with the three-probe method and results obtained using four-probe nanowire Hall measurements. In addition, a nanowire with a diameter of only 65 nm is characterized to demonstrate the capabilities of the method. The three-probe Hall effect method offers a relatively fast and simple, yet accurate way to quantify the charge carrier concentration in nanowires and has the potential to become a standard characterization technique for nanowires.
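For orientation, the textbook slab-geometry Hall relation behind such measurements is n = I·B / (q·t·V_H). The sketch below applies it with the nanowire diameter standing in for the thickness t; the paper's analytical model refines this for the three-probe nanowire geometry, and that refinement is not reproduced here. All numbers in the usage line are illustrative.

```python
# Sketch: slab-geometry Hall carrier concentration, n = I*B / (q * t * V_H),
# with the nanowire diameter used as a stand-in for the thickness t.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs

def carrier_concentration(current_a, field_t, hall_voltage_v, thickness_m):
    """Carrier concentration per m^3 from a Hall measurement (slab formula)."""
    return (current_a * field_t) / (
        ELEMENTARY_CHARGE * thickness_m * hall_voltage_v
    )

# Illustrative values: I = 1 uA, B = 1 T, V_H = 100 uV, d = 65 nm
print(carrier_concentration(1e-6, 1.0, 100e-6, 65e-9))  # ~9.6e23 m^-3
```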
Instrumental variable methods in comparative safety and effectiveness research†
Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian
2010-01-01
Summary: Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968
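A compact way to see the IV logic is two-stage least squares (2SLS) on simulated data with an unmeasured confounder: naive regression of outcome on exposure is biased, while the instrument recovers the true effect. The simulation below is purely illustrative and not from the article.

```python
# Sketch: two-stage least squares (2SLS) versus naive OLS on simulated data
# with an unmeasured confounder u. True causal effect of x on y is 1.0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.binomial(1, 0.5, n).astype(float)    # instrument (e.g., preference)
u = rng.normal(size=n)                       # unmeasured confounder
x = 0.5 * z + 0.8 * u + rng.normal(size=n)   # exposure
y = 1.0 * x + 1.5 * u + rng.normal(size=n)   # outcome

def ols(design, target):
    """Least-squares coefficients for a design matrix."""
    return np.linalg.lstsq(design, target, rcond=None)[0]

Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)                                    # stage 1: fitted exposure
beta_iv = ols(np.column_stack([np.ones(n), x_hat]), y)[1]  # stage 2
beta_naive = ols(np.column_stack([np.ones(n), x]), y)[1]
print(f"2SLS: {beta_iv:.2f} vs naive OLS: {beta_naive:.2f} (true 1.0)")
```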