Sample records for standard interpretive methods

  1. Court Interpreters and Translators: Developing Ethical and Professional Standards.

    ERIC Educational Resources Information Center

    Funston, Richard

    Changing needs in the courtroom have raised questions about standards for court interpreter qualifications. In California, no formal training or familiarity with the legal system is required for certification, which is done entirely by language testing. The fact that court interpreters are often officers of the court may be…

  2. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true Standard Flood Insurance...

  3. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false Standard Flood Insurance...

  4. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false Standard Flood Insurance...

  5. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2011-10-01 2011-10-01 false Standard Flood Insurance...

  6. 44 CFR 61.14 - Standard Flood Insurance Policy Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program INSURANCE COVERAGE AND RATES § 61.14 Standard Flood Insurance Policy Interpretations. (a... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Standard Flood Insurance...

  7. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied. There is thus a growing need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques for determining the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in other studies using the same drug combinations and the same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpreting synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, such as imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activities of the two combinations were tested in the dilution range of 4 x MIC to 0.03 x MIC in
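
    For readers unfamiliar with how checkerboard results are scored: the usual summary statistic is the fractional inhibitory concentration index (FICI), and differing interpretive cut-offs are one source of the discrepancies this abstract describes. A minimal Python sketch, assuming the commonly cited thresholds (synergy at FICI <= 0.5, antagonism above 4), which are not necessarily those used in this study:

        def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
            """Fractional inhibitory concentration index for one checkerboard well."""
            return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

        def interpret_fici(fici):
            """Commonly cited cut-offs; studies differ here, which is exactly
            the interpretive discrepancy this abstract discusses."""
            if fici <= 0.5:
                return "synergy"
            if fici <= 4.0:
                return "no interaction"
            return "antagonism"

        # Illustrative MICs (mg/L) for drug A alone, drug B alone, and in combination
        print(interpret_fici(fic_index(32, 64, 8, 16)))  # 8/32 + 16/64 = 0.5 -> synergy

    Moving these thresholds, or reading the index at a different well of the checkerboard, changes the reported synergy rate without any change in the raw data.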

  8. Standardizing Interpretive Training to Create a More Meaningful Visitor Experience

    ERIC Educational Resources Information Center

    Carr, Rob

    2016-01-01

    Implementing a standardized interpretive training and mentoring program across multiple departments has helped create a shared language that staff and volunteers use to collaborate on and evaluate interpretive programs and products. This has led to more efficient and effective training and measurable improvements in the quality of the visitor's…

  9. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  10. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  11. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  12. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  13. 48 CFR 9901.305 - Requirements for standards and interpretive rulings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... promulgation of cost accounting standards and interpretations thereof, the Board shall: (a) Take into account, after consultation and discussion with the Comptroller General, professional accounting organizations... ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET...

  14. Performance of a Method to Standardize Breast Ultrasound Interpretation Using Image Processing and Case-Based Reasoning

    NASA Astrophysics Data System (ADS)

    André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.

    Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of radiologists' interpretation of breast ultrasound. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.

  15. A new IRT-based standard setting method: application to eCat-listening.

    PubMed

    García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David

    2013-01-01

    Criterion-referenced interpretations of tests are highly necessary, which usually involves the difficult task of establishing cut scores. In contrast with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretations.
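
    The abstract does not reproduce the ICC transformation itself; as background, here is a minimal sketch of the generic ingredients, a two-parameter logistic ICC and the usual bridge from an ability (theta) cut score to an expected observed score. The item parameters and cut value are illustrative assumptions, not the paper's:

        import math

        def icc_2pl(theta, a, b):
            """Two-parameter logistic ICC: probability of a correct response at
            ability theta for an item with discrimination a and difficulty b."""
            return 1.0 / (1.0 + math.exp(-a * (theta - b)))

        def expected_score(theta, items):
            """Expected observed score at theta (the test characteristic curve),
            the usual route from a theta cut score to a reported cut score."""
            return sum(icc_2pl(theta, a, b) for a, b in items)

        items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]  # illustrative (a, b) pairs
        theta_cut = 0.4  # hypothetical boundary between two CEFR levels
        print(round(expected_score(theta_cut, items), 2))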

  16. 30 CFR 784.200 - Interpretive rules related to General Performance Standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RECLAMATION AND OPERATION PLAN § 784.200 Interpretive rules related to General Performance Standards. The... ENFORCEMENT, DEPARTMENT OF THE INTERIOR SURFACE COAL MINING AND RECLAMATION OPERATIONS PERMITS AND COAL... Surface Mining Reclamation and Enforcement. (a) Interpretation of § 784.15: Reclamation plan: Postmining...

  17. Interpretation of IEEE-854 floating-point standard and definition in the HOL system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    1995-01-01

    The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided into two parts: the first covers the interpretation, and the second the description in HOL.
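
    As background to what such a formalization pins down: the heart of IEEE-854 is a radix-independent value function for a finite floating-point datum, (-1)^s * b^E * d0.d1...d(p-1). A minimal sketch in Python rather than the paper's HOL, with exact rational arithmetic standing in for the standard's mathematical reals; the digit-vector encoding chosen here is an assumption for illustration:

        from fractions import Fraction

        def fp_value(sign, exponent, digits, base):
            """Exact value of a finite IEEE-854-style floating-point datum:
            (-1)**sign * base**exponent * d0.d1...d(p-1), radix-independent."""
            significand = sum(Fraction(d, base ** k) for k, d in enumerate(digits))
            return (-1) ** sign * Fraction(base) ** exponent * significand

        # binary 1.01 x 2**3 = 10; the same function covers base-10 systems
        print(fp_value(0, 3, [1, 0, 1], 2))  # prints 10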

  18. How Engineering Standards Are Interpreted and Translated for Middle School

    ERIC Educational Resources Information Center

    Judson, Eugene; Ernzen, John; Krause, Stephen; Middleton, James A.; Culbertson, Robert J.

    2016-01-01

    In this exploratory study we examined the alignment of Next Generation Science Standards (NGSS) middle school engineering design standards with lesson ideas from middle school teachers, science education faculty, and engineering faculty (4-6 members per group). Respondents were prompted to provide plain language interpretations of two middle…

  19. DOE interpretations Guide to OSH standards. Update to the Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-31

    Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four volume reference set that you presently have in your possession.

  20. Interpreting international governance standards for health IT use within general medical practice.

    PubMed

    Mahncke, Rachel J; Williams, Patricia A H

    2014-01-01

    General practices in Australia recognise the importance of comprehensive protective security measures. Some elements of information security governance are incorporated into recommended standards; however, the governance component of information security is still insufficiently addressed in practice. The International Organization for Standardization (ISO) released a new global standard in May 2013 entitled ISO/IEC 27014:2013 Information technology - Security techniques - Governance of information security. This standard, applicable to organisations of all sizes, offers a framework against which to assess and implement the governance components of information security. The standard demonstrates the relationship between governance and the management of information security, provides strategic principles and processes, and forms the basis for establishing a positive information security culture. An interpretive analysis of this standard for use in Australian general practice was performed. This work is unique, as such an interpretation for the Australian healthcare environment has not been undertaken before. It demonstrates an application of the standard at a strategic level to inform the existing development of an information security governance framework.

  1. Standardization of Laboratory Methods for the PERCH Study

    PubMed Central

    Karron, Ruth A.; Morpeth, Susan C.; Bhat, Niranjan; Levine, Orin S.; Baggett, Henry C.; Brooks, W. Abdullah; Feikin, Daniel R.; Hammitt, Laura L.; Howie, Stephen R. C.; Knoll, Maria Deloria; Kotloff, Karen L.; Madhi, Shabir A.; Scott, J. Anthony G.; Thea, Donald M.; Adrian, Peter V.; Ahmed, Dilruba; Alam, Muntasir; Anderson, Trevor P.; Antonio, Martin; Baillie, Vicky L.; Dione, Michel; Endtz, Hubert P.; Gitahi, Caroline; Karani, Angela; Kwenda, Geoffrey; Maiga, Abdoul Aziz; McClellan, Jessica; Mitchell, Joanne L.; Morailane, Palesa; Mugo, Daisy; Mwaba, John; Mwansa, James; Mwarumba, Salim; Nyongesa, Sammy; Panchalingam, Sandra; Rahman, Mustafizur; Sawatwong, Pongpun; Tamboura, Boubou; Toure, Aliou; Whistler, Toni; O’Brien, Katherine L.; Murdoch, David R.

    2017-01-01

    The Pneumonia Etiology Research for Child Health study was conducted across 7 diverse research sites and relied on standardized clinical and laboratory methods for the accurate and meaningful interpretation of pneumonia etiology data. Blood, respiratory specimens, and urine were collected from children aged 1–59 months hospitalized with severe or very severe pneumonia and community controls of the same age without severe pneumonia and were tested with an extensive array of laboratory diagnostic tests. A standardized testing algorithm and standard operating procedures were applied across all study sites. Site laboratories received uniform training, equipment, and reagents for core testing methods. Standardization was further assured by routine teleconferences, in-person meetings, site monitoring visits, and internal and external quality assurance testing. Targeted confirmatory testing and testing by specialized assays were done at a central reference laboratory. PMID:28575358

  2. Interpretive model for ''A Concurrency Method''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, C.L.

    1987-01-01

    This paper describes an interpreter for ''A Concurrency Method,'' in which concurrency is the inherent mode of operation and not an appendage to sequentiality. This method is based on the notions of data-driven execution and single-assignment while preserving a natural manner of programming. The interpreter is designed for and implemented on a network of Corvus Concept Personal Workstations, which are based on the Motorola MC68000 super-microcomputer. The interpreter utilizes the MC68000 processors in each workstation by communicating across OMNINET, the local area network designed for the workstations. The interpreter is a complete system, containing an editor, a compiler, an operating system with load balancer, and a communication facility. The system includes the basic arithmetic and trigonometric primitive operations for mathematical computations as well as the ability to construct more complex operations from these. 9 refs., 5 figs.

  3. DOE interpretations Guide to OSH standards. Update to the Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-31

    Reflecting Secretary O'Leary's focus on occupational safety and health, the Office of Occupational Safety is pleased to provide you with the latest update to the DOE Interpretations Guide to OSH Standards. This Guide was developed in cooperation with the Occupational Safety and Health Administration, which continued its support during this last revision by facilitating access to the interpretations found on the OSHA Computerized Information System (OCIS). This March 31, 1994 update contains 123 formal interpretation letters written by OSHA. As a result of the unique requests received by the 1-800 Response Line, this update also contains 38 interpretations developed by DOE. This new occupational safety and health information adds still more important guidance to the four volume reference set that you presently have in your possession.

  4. Standards and guidelines for HIV prevention research: considerations for local context in the interpretation of global ethical standards.

    PubMed

    Haire, Bridget G; Folayan, Morenike Oluwatoyin; Brown, Brandon

    2014-09-01

    While international standards are important for conducting clinical research, they may require interpretation in particular contexts. Standard of care in HIV prevention research is now complicated, given that there are two new biomedical prevention interventions, 'treatment-as-prevention' and pre-exposure prophylaxis, in addition to barrier protection, counselling, male circumcision and treatment of sexually transmissible infections. Proper standards of care must be considered with regard to both normative guidance and the circumstances of the particular stakeholders: the community, trial population, researchers and sponsors. In addition, the special circumstances of the lives of participants need to be acknowledged in designing trial protocols and study procedures. When researchers are faced with the dilemma of interpreting international ethics guidelines against the realities of the daily lives of persons and their practices, the decisions of the local ethics committee become crucial. The challenge then becomes how familiar ethics committee members in these local settings are with these guidelines, and how their interpretation and use in the local context ensures respect for persons and communities. It also includes justice and the fair selection of study participants without compromising data quality, and ensuring that the risks for study participants and their community do not outweigh the potential benefits.

  5. An Interpretative Phenomenological Analysis of the Common Core Standards Program in the State of South Dakota

    ERIC Educational Resources Information Center

    Alase, Abayomi

    2017-01-01

    This interpretative phenomenological analysis (IPA) study investigated and interpreted the Common Core State Standards program (the phenomenon), which has been the dominant topic of discussion amongst educators across the country since the program's inauguration in the 2014/2015 school session. Common Core State Standards (CCSS) was a…

  6. 76 FR 62 - Interpretive Standards for Systemic Compensation Discrimination and Voluntary Guidelines for Self...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-03

    ... 1250-ZA00 Interpretive Standards for Systemic Compensation Discrimination and Voluntary Guidelines for... Order 11246 with respect to Systemic Compensation Discrimination (Standards) and Voluntary Guidelines... to Systemic Compensation Discrimination (Voluntary Guidelines). OFCCP is proposing to rescind the...

  7. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  8. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  9. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  10. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Interpretation of the National Ambient Air Quality Standards for Particulate Matter K Appendix K to Part 50 Protection of Environment... STANDARDS Pt. 50, App. K Appendix K to Part 50—Interpretation of the National Ambient Air Quality Standards...

  11. Deriving allowable properties of lumber : a practical guide for interpretation of ASTM standards

    Treesearch

    Alan Bendtsen; William L. Galligan

    1978-01-01

    The ASTM standards for establishing clear wood mechanical properties and for deriving structural grades and related allowable properties for visually graded lumber can be confusing and difficult for the uninitiated to interpret. This report provides a practical guide to using these standards for individuals not familiar with their application. Sample stress...

  12. Calibration method helps in seismic velocity interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, C.E.; Davenport, H.A.; Wilhelm, R.

    1997-11-03

    Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make pure velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity. The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.
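
    The article does not publish its calibration gauge, but the textbook route from stacking velocities to the interval velocities and interval travel times mentioned above is the Dix equation. A minimal sketch under that assumption, with illustrative (not the article's) picks:

        import math

        def dix_interval_velocity(t1, v1, t2, v2):
            """Dix equation: interval velocity between two reflectors from
            two-way times (s) and stacking/RMS velocities (ft/s) above each."""
            return math.sqrt((t2 * v2 ** 2 - t1 * v1 ** 2) / (t2 - t1))

        v_int = dix_interval_velocity(1.0, 6000.0, 1.4, 6500.0)  # illustrative picks
        itt = 1e6 / v_int  # interval travel time, microseconds per foot
        print(round(v_int), round(itt, 1))  # ~7608 ft/s, ~131.4 us/ft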

  13. How concept images affect students' interpretations of Newton's method

    NASA Astrophysics Data System (ADS)

    Engelke Infante, Nicole; Murphy, Kristen; Glenn, Celeste; Sealey, Vicki

    2018-07-01

    Knowing when students have the prerequisite knowledge to be able to read and understand a mathematical text is a perennial concern for instructors. Using text describing Newton's method and Vinner's notion of concept image, we exemplify how prerequisite knowledge influences understanding. Through clinical interviews with first-semester calculus students, we determined how evoked concept images of tangent lines and roots contributed to students' interpretation and application of Newton's method. Results show that some students' concept images of root and tangent line developed throughout the interview process, and most students were able to adequately interpret the text on Newton's method. However, students with insufficient concept images of tangent line and students who were unwilling or unable to modify their concept images of tangent line after reading the text were not successful in interpreting Newton's method.

  14. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  15. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  16. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  17. 20 CFR Appendix A to Part 718 - Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Interpretation of Chest Roentgenograms (X-Rays) A Appendix A to Part 718 Employees' Benefits OFFICE OF WORKERS... Appendix A to Part 718—Standards for Administration and Interpretation of Chest Roentgenograms (X-Rays) The... procedures are used in administering and interpreting X-rays and that the best available medical evidence...

  18. Web-based comparison of historical vs contemporary methods of fetal heart rate interpretation.

    PubMed

    Epstein, Aaron J; Iriye, Brian K; Hancock, Lyle; Quilligan, Edward J; Rumney, Pamela J; Hancock, Judy; Ghamsary, Mark; Eakin, Cortney M; Smith, Cheryl; Wing, Deborah A

    2016-10-01

    Contemporary interpretation of fetal heart rate patterns is based largely on the tenets of Drs Quilligan and Hon. This method differs from an older method that was championed by Dr Caldeyro-Barcia in recording speed and classification of decelerations. The latter uses a paper speed of 1 cm/min and classifies decelerations referent to uterine contractions as type I or II dips, compared with conventional classification as early, late, or variable with paper speed of 3 cm/min. We hypothesized that 3 cm/min speed may lead to over-analysis of fetal heart rate and that 1 cm/min may provide adequate information without compromising accuracy or efficiency. The purpose of this study was to compare the Hon-Quilligan method of fetal heart rate interpretation with the Caldeyro-Barcia method among groups of obstetrics care providers with the use of an online interactive testing tool. We deidentified 40 fetal heart rate tracings from the terminal 30 minutes before delivery. A website was created to view these tracings with the use of the standard Hon-Quilligan method and adjusted the same tracings to the 1 cm/min monitoring speed for the Caldeyro-Barcia method. We invited 2-4 caregivers to participate: maternal-fetal medicine experts, practicing maternal-fetal medicine specialists, maternal-fetal medicine fellows, obstetrics nurses, and certified nurse midwives. After completing an introductory tutorial and quiz, they were asked to interpret the fetal heart rate tracings (the order was scrambled) to manage and predict maternal and neonatal outcomes using both methods. Their results were compared with those of our expert, Edward Quilligan, and were compared among groups. Analysis was performed with the use of 3 measures: percent classification, Kappa, and adjusted Gwet-Kappa (P < .05 was considered significant). Overall, our results show from moderate to almost perfect agreement with the expert and both between and within examiners (Gwet-Kappa 0.4-0.8). The agreement at each

  19. Interpretive focus groups: a participatory method for interpreting and extending secondary analysis of qualitative data.

    PubMed

    Redman-MacLaren, Michelle; Mills, Jane; Tommbe, Rachael

    2014-01-01

    Participatory approaches to qualitative research practice constantly change in response to evolving research environments. Researchers are increasingly encouraged to undertake secondary analysis of qualitative data, despite epistemological and ethical challenges. Interpretive focus groups can be described as a more participative method for groups to analyse qualitative data. To facilitate interpretive focus groups with women in Papua New Guinea to extend analysis of existing qualitative data and co-create new primary data. The purpose of this was to inform a transformational grounded theory and subsequent health promoting action. A two-step approach was used in a grounded theory study about how women experience male circumcision in Papua New Guinea. Participants analysed portions or 'chunks' of existing qualitative data in story circles and built upon this analysis by using the visual research method of storyboarding. New understandings of the data were evoked when women in interpretive focus groups analysed the data 'chunks'. Interpretive focus groups encouraged women to share their personal experiences about male circumcision. The visual method of storyboarding enabled women to draw pictures to represent their experiences. This provided an additional focus for whole-of-group discussions about the research topic. Interpretive focus groups offer opportunity to enhance trustworthiness of findings when researchers undertake secondary analysis of qualitative data. The co-analysis of existing data and co-generation of new data between research participants and researchers informed an emergent transformational grounded theory and subsequent health promoting action.

  20. Interpretive focus groups: a participatory method for interpreting and extending secondary analysis of qualitative data

    PubMed Central

    Redman-MacLaren, Michelle; Mills, Jane; Tommbe, Rachael

    2014-01-01

    Background Participatory approaches to qualitative research practice constantly change in response to evolving research environments. Researchers are increasingly encouraged to undertake secondary analysis of qualitative data, despite epistemological and ethical challenges. Interpretive focus groups can be described as a more participative method for groups to analyse qualitative data. Objective To facilitate interpretive focus groups with women in Papua New Guinea to extend analysis of existing qualitative data and co-create new primary data. The purpose of this was to inform a transformational grounded theory and subsequent health promoting action. Design A two-step approach was used in a grounded theory study about how women experience male circumcision in Papua New Guinea. Participants analysed portions or ‘chunks’ of existing qualitative data in story circles and built upon this analysis by using the visual research method of storyboarding. Results New understandings of the data were evoked when women in interpretive focus groups analysed the data ‘chunks’. Interpretive focus groups encouraged women to share their personal experiences about male circumcision. The visual method of storyboarding enabled women to draw pictures to represent their experiences. This provided an additional focus for whole-of-group discussions about the research topic. Conclusions Interpretive focus groups offer opportunity to enhance trustworthiness of findings when researchers undertake secondary analysis of qualitative data. The co-analysis of existing data and co-generation of new data between research participants and researchers informed an emergent transformational grounded theory and subsequent health promoting action. PMID:25138532

  1. Standardized Interpretation of Chest Radiographs in Cases of Pediatric Pneumonia From the PERCH Study.

    PubMed

    Fancourt, Nicholas; Deloria Knoll, Maria; Barger-Kamate, Breanna; de Campo, John; de Campo, Margaret; Diallo, Mahamadou; Ebruke, Bernard E; Feikin, Daniel R; Gleeson, Fergus; Gong, Wenfeng; Hammitt, Laura L; Izadnegahdar, Rasa; Kruatrachue, Anchalee; Madhi, Shabir A; Manduku, Veronica; Matin, Fariha Bushra; Mahomed, Nasreen; Moore, David P; Mwenechanya, Musaku; Nahar, Kamrun; Oluwalana, Claire; Ominde, Micah Silaba; Prosperi, Christine; Sande, Joyce; Suntarattiwong, Piyarat; O'Brien, Katherine L

    2017-06-15

    Chest radiographs (CXRs) are a valuable diagnostic tool in epidemiologic studies of pneumonia. The World Health Organization (WHO) methodology for the interpretation of pediatric CXRs has not been evaluated beyond its intended application as an endpoint measure for bacterial vaccine trials. The Pneumonia Etiology Research for Child Health (PERCH) study enrolled children aged 1-59 months hospitalized with WHO-defined severe and very severe pneumonia from 7 low- and middle-income countries. An interpretation process categorized each CXR into 1 of 5 conclusions: consolidation, other infiltrate, both consolidation and other infiltrate, normal, or uninterpretable. Two members of a 14-person reading panel, who had undertaken training and standardization in CXR interpretation, interpreted each CXR. Two members of an arbitration panel provided additional independent reviews of CXRs with discordant interpretations at the primary reading, blinded to previous reports. Further discordance was resolved with consensus discussion. A total of 4172 CXRs were obtained from 4232 cases. Observed agreement for detecting consolidation (with or without other infiltrate) between primary readers was 78% (κ = 0.50) and between arbitrators was 84% (κ = 0.61); agreement for primary readers and arbitrators across 5 conclusion categories was 43.5% (κ = 0.25) and 48.5% (κ = 0.32), respectively. Disagreement was most frequent between conclusions of other infiltrate and normal for both the reading panel and the arbitration panel (32% and 30% of discordant CXRs, respectively). Agreement was similar to that of previous evaluations using the WHO methodology for detecting consolidation, but poor for other infiltrates despite attempts at a rigorous standardization process. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
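
    Agreement figures like those quoted above can be reproduced from an inter-reader contingency table with the standard Cohen's kappa formula. A minimal sketch with an illustrative (not PERCH) 2x2 table for consolidation versus no consolidation:

        def cohens_kappa(table):
            """Cohen's kappa from a square inter-reader contingency table
            (rows: reader 1's categories, columns: reader 2's)."""
            n = sum(sum(row) for row in table)
            p_obs = sum(table[i][i] for i in range(len(table))) / n
            row_tot = [sum(row) for row in table]
            col_tot = [sum(col) for col in zip(*table)]
            p_exp = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
            return (p_obs - p_exp) / (1 - p_exp)

        # Illustrative 2x2 table: consolidation vs. no consolidation, two readers
        print(round(cohens_kappa([[300, 60], [55, 220]]), 2))  # -> 0.63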

  2. Using the Halstead-Reitan Battery to diagnose brain damage: a comparison of the predictive power of traditional techniques to Rohling's Interpretive Method.

    PubMed

    Rohling, Martin L; Williamson, David J; Miller, L Stephen; Adams, Russell L

    2003-11-01

    The aim of this project was to validate an alternative global measure of neurocognitive impairment (Rohling Interpretive Method, or RIM) that could be generated from data gathered from a flexible battery approach. A critical step in this process is to establish the utility of the technique against current standards in the field. In this paper, we compared results from the Rohling Interpretive Method to those obtained from the General Neuropsychological Deficit Scale (GNDS; Reitan & Wolfson, 1988) and the Halstead-Russell Average Impairment Rating (AIR; Russell, Neuringer & Goldstein, 1970) on a large previously published sample of patients assessed with the Halstead-Reitan Battery (HRB). Findings support the use of the Rohling Interpretive Method in producing summary statistics similar in diagnostic sensitivity and specificity to the traditional HRB indices.
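
    The published RIM involves more steps than the abstract states (demographic corrections, domain-level summaries); its core move, however, is to place scores from a flexible battery on a common metric and average them into an overall test battery mean. A minimal sketch of that idea, with illustrative normative values:

        def to_t_score(raw, norm_mean, norm_sd):
            """Place a raw test score on the T metric (mean 50, SD 10)
            using its normative mean and standard deviation."""
            return 50.0 + 10.0 * (raw - norm_mean) / norm_sd

        def overall_test_battery_mean(scores):
            """Mean T-score across every test administered; a global summary
            in the spirit of the RIM (the published method adds further steps)."""
            return sum(to_t_score(*s) for s in scores) / len(scores)

        # (raw score, normative mean, normative SD) triples, illustrative values
        battery = [(35.0, 50.0, 10.0), (88.0, 100.0, 15.0), (12.0, 14.0, 3.0)]
        print(round(overall_test_battery_mean(battery), 1))  # -> 40.1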

  3. 48 CFR 9904.406-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.406-61 Interpretation. (a) Questions have arisen as to... categories of costs that have been included in the past and may be considered in the future as restructuring... restructuring costs shall not exceed five years. The straight-line method of amortization should normally be...

  4. Comparison of interpretation methods of thermocouple psychrometer readouts

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Majerek, Dariusz; Sobczuk, Henryk; Guz, Ewa; Połednik, Bernard

    2017-07-01

    Thermocouple psychrometers allow determination of water potential, which can easily be recalculated into the relative humidity of air in the cavities of porous materials. The typical measuring range of a probe is very narrow: the lower limit of water potential measurements is about -200 kPa, while the upper limit is approximately -7000 kPa and depends on many factors. This paper presents a comparison of two methods of interpreting the thermocouple microvolt output: i) the amplitude of the voltage during wet-bulb temperature depression, and ii) the area under the microvolt output curve. Previous experimental results indicate a robust correlation between water potential and the area under the microvolt output curve. To obtain correct water potential results, each probe should be calibrated. A range of NaCl solutions with molalities from 0.75 M to 2.25 M was used for calibration, yielding osmotic potentials from -3377 kPa to -10865 kPa. During measurements, a 5 mA heating current was applied over a span of 5 s and a 5 mA cooling current over a span of 30 s. The study shows that simply switching to the interpretation method based on the area under the microvolt output curve widens the measurable range of water potential by about 1000 kPa. The average relative mean square error (RMSE) of this interpretation method is 1199 kPa, while the voltage-amplitude-based method yields an average RMSE of 1378 kPa during calibration under conditions without temperature stabilization.
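
    A minimal sketch of the area-based readout interpretation compared above, together with the per-probe linear calibration against NaCl solutions of known osmotic potential. All numbers are illustrative, not the study's data:

        def curve_area(times, microvolts):
            """Trapezoidal area under the probe's microvolt output curve."""
            return sum((microvolts[i] + microvolts[i + 1]) / 2 * (times[i + 1] - times[i])
                       for i in range(len(times) - 1))

        def fit_line(x, y):
            """Least-squares slope and intercept for the per-probe calibration."""
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
                     / sum((xi - mx) ** 2 for xi in x))
            return slope, my - slope * mx

        # Calibration readings against NaCl solutions of known osmotic potential (kPa);
        # the areas are made-up numbers, the potentials span the range quoted above
        areas = [120.0, 260.0, 395.0]
        potentials = [-3377.0, -7000.0, -10865.0]
        slope, intercept = fit_line(areas, potentials)
        print(round(slope * 300.0 + intercept))  # potential inferred from a new reading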

  5. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience with the difference between performing an external standardization and a standard addition.
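
    A minimal sketch of the calculation behind the standard addition option discussed above, using the common multiple-point variant: the signal is regressed on the concentration of added standard and the analyte concentration is recovered from the x-intercept. Volumes and signals below are illustrative assumptions:

        def standard_addition(volumes_added, signals, c_std, v_sample, v_total):
            """Multiple-point standard addition: regress signal on the concentration
            of added standard (as diluted in the final flask) and recover the
            analyte concentration from the magnitude of the x-intercept."""
            x = [v * c_std / v_total for v in volumes_added]
            n = len(x)
            mx, my = sum(x) / n, sum(signals) / n
            slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, signals))
                     / sum((xi - mx) ** 2 for xi in x))
            intercept = my - slope * mx
            c_in_flask = intercept / slope          # |x-intercept|
            return c_in_flask * v_total / v_sample  # undo the sample's dilution

        # 10.00 mL sample aliquots spiked with 0-3 mL of a 100 ppm standard,
        # each diluted to 50.00 mL; signals are illustrative absorbances
        print(standard_addition([0.0, 1.0, 2.0, 3.0],
                                [0.200, 0.300, 0.400, 0.500],
                                100.0, 10.0, 50.0))  # -> 20.0 ppm

    An external standardization, by contrast, calibrates in a clean matrix, which is exactly where matrix effects can bias the result.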

  6. A method for the geometric and densitometric standardization of intraoral radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duckworth, J.E.; Judy, P.F.; Goodson, J.M.

    1983-07-01

    The interpretation of dental radiographs for the diagnosis of periodontal disease conditions poses several difficulties. These include the inability to adequately reproduce the projection geometry and optical density of the exposures. In order to improve the ability to extract accurate quantitative information from a radiographic survey of periodontal status, a method was developed which provided for consistent reproduction of both geometric and densitometric exposure parameters. This technique employed vertical bitewing projections in holders customized to individual segments of the dentition. A copper stepwedge was designed to provide densitometric standardization, and wire markers were included to permit measurement of angular variation. In a series of 53 paired radiographs, measurement of alveolar crest heights was found to be reproducible within approximately 0.1 mm. This method provided a full mouth radiographic survey using seven films, each complete with internal standards suitable for computer-based image processing.

  7. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...

  8. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...

  9. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...

  10. 20 CFR 640.3 - Interpretation of Federal law requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Interpretation of Federal law requirements... STANDARD FOR BENEFIT PAYMENT PROMPTNESS-UNEMPLOYMENT COMPENSATION § 640.3 Interpretation of Federal law... require that a State law include provision for such methods of administration as will reasonably insure...

  11. Standardization in gully erosion studies: methodology and interpretation of magnitudes from a global review

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Gomez, Jose Alfonso

    2016-04-01

    Standardization is the process of developing common conventions or procedures to facilitate the communication, use, comparison and exchange of products or information among different parties. It has been a useful tool in fields from industry to statistics, for technical, economic and social reasons. In science the need for standardization has been recognised in the definition of methods as well as in publication formats. With respect to gully erosion, a number of initiatives have been carried out to propose common methodologies, for instance, for gully delineation (Castillo et al., 2014) and geometrical measurements (Casalí et al., 2015). The main aims of this work are: 1) to examine previous proposals in the gully erosion literature involving standardization processes; 2) to contribute new approaches to improve the homogeneity of methodologies and the presentation of results, for better communication within the gully erosion community. For this purpose, we evaluated the basic information provided on environmental factors, discussed the delineation and measurement procedures proposed in previous works and, finally, analysed statistically the severity of degradation levels derived from different indicators at the world scale. As a result, we present suggestions intended to serve as guidance for survey design as well as for the interpretation of vulnerability levels and degradation rates in future gully erosion studies. References Casalí, J., Giménez, R., and Campo-Bescós, M. A.: Gully geometry: what are we measuring?, SOIL, 1, 509-513, doi:10.5194/soil-1-509-2015, 2015. Castillo C., Taguas E. V., Zarco-Tejada P., James M. R., and Gómez J. A. (2014), The normalized topographic method: an automated procedure for gully mapping using GIS, Earth Surf. Process. Landforms, 39, 2002-2015, doi: 10.1002/esp.3595

  12. ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya; Rosenberg, Michael I.

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document was created independently from ASHRAE and SSPC 90.1 and is neither sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers, and implementers of “beyond code” energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond-code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers building tools for automated creation of the baseline model can use this reference manual as a guideline for developing the rules for the baseline model.

  13. Analysis of Indonesian educational system standard with KSIM cross-impact method

    NASA Astrophysics Data System (ADS)

    Arridjal, F.; Aldila, D.; Bustamam, A.

    2017-07-01

    The results of the 2012 Programme for International Student Assessment (PISA) show that Indonesia ranked 64th of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is among the 10 countries with the lowest performance on the cognitive skills aspect, ranking 37th of 40 countries. Competency is built from three aspects, one of which is the cognitive aspect. The low mapping result on the cognitive aspect reflects the low graduate competences produced by the Indonesian National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, which is connected directly to Indonesia's students. This research aims to model INES using the KSIM cross-impact method. Linear regression models of the EESS were constructed using national accreditation data for senior high schools in Indonesia. The results were then interpreted as impact values in the construction of the KSIM cross-impact model of INES. The construction is used to analyze the interactions of the EESS and to run numerical simulations of possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process, and infrastructure standards. All public policy simulations were performed with two methods: a multiplier impact method and a constant intervention method. The numerical simulation results show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option for maximizing the growth of the graduate competency standard.
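
    KSIM here refers to Kane's cross-impact simulation, in which each standard is a state variable bounded in (0, 1) and an impact matrix drives growth or decay through an exponent. A minimal sketch of one update step under the usual KSIM formulation; the two-variable system and impact values are illustrative, not the paper's calibrated model:

        def ksim_step(x, alpha, dt=0.1):
            """One step of Kane's KSIM: each state variable stays in (0, 1) and is
            updated as x_i ** p_i, with p_i < 1 (growth) when net impacts are positive."""
            new_x = []
            for i in range(len(x)):
                neg = sum((abs(a) - a) * xj for a, xj in zip(alpha[i], x))
                pos = sum((abs(a) + a) * xj for a, xj in zip(alpha[i], x))
                p = (1.0 + dt / 2.0 * neg) / (1.0 + dt / 2.0 * pos)
                new_x.append(x[i] ** p)
            return new_x

        # Toy two-variable system: variable 0 (say, the content standard) positively
        # impacts variable 1 (say, graduate competency); impact values are illustrative
        alpha = [[0.0, 0.0],
                 [0.6, 0.0]]
        x = [0.7, 0.4]
        for _ in range(3):
            x = ksim_step(x, alpha)
        print([round(v, 3) for v in x])  # variable 1 creeps upward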

  14. Patient Satisfaction with Different Interpreting Methods: A Randomized Controlled Trial

    PubMed Central

    Leng, Jennifer; Shapiro, Ephraim; Abramson, David; Motola, Ivette; Shield, David C.; Changrani, Jyotsna

    2007-01-01

    Background Growth of the foreign-born population in the U.S. has led to increasing numbers of limited-English-proficient (LEP) patients. Innovative medical interpreting strategies, including remote simultaneous medical interpreting (RSMI), have arisen to address the language barrier. This study evaluates the impact of interpreting method on patient satisfaction. Methods 1,276 English-, Spanish-, Mandarin-, and Cantonese-speaking patients attending the primary care clinic and emergency department of a large New York City municipal hospital were screened for enrollment in a randomized controlled trial. Language-discordant patients were randomized to RSMI or usual and customary (U&C) interpreting. Patients with language-concordant providers received usual care. Demographic and patient satisfaction questionnaires were administered to all participants. Results 541 patients were language-concordant with their providers and not randomized; 371 were randomized to RSMI, 167 of whom were exposed to RSMI; and 364 were randomized to U&C, 198 of whom were exposed to U&C. Patients randomized to RSMI were more likely than those with U&C to think doctors treated them with respect (RSMI 71%, U&C 64%, p < 0.05), but they did not differ in other measures of physician communication/care. In a linear regression analysis, exposure to RSMI was significantly associated with an increase in overall satisfaction with physician communication/care (β 0.10, 95% CI 0.02–0.18, scale 0–1.0). Patients randomized to RSMI were more likely to think the interpreting method protected their privacy (RSMI 51%, U&C 38%, p < 0.05). Patients randomized to either arm of interpretation reported less comprehension and satisfaction than patients in language-concordant encounters. Conclusions While not a substitute for language-concordant providers, RSMI can improve patient satisfaction and privacy among LEP patients. Implementing RSMI should be considered an important component of a multipronged

  15. Cutibacterium acnes molecular typing: time to standardize the method.

    PubMed

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
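
    The consensus algorithm proposed above amounts to a mapping from the required phylogenetic resolution to a typing method. A minimal sketch of that decision logic; the level names are paraphrases of the abstract, not a published API:

        def choose_typing_method(resolution):
            """Map the needed phylogenetic resolution to a C. acnes typing method,
            following the consensus proposed in this review: multiplex PCR for
            phylotypes, MLST9 for clonal complexes, SLST for fine phylogeny."""
            methods = {
                "phylotype": "multiplex PCR",
                "clonal complex": "MLST9",
                "fine phylogeny": "SLST",
            }
            if resolution not in methods:
                raise ValueError("unknown resolution level: %r" % resolution)
            return methods[resolution]

        print(choose_typing_method("phylotype"))  # -> multiplex PCR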

  16. [Interpretative method as a synthesis of explicative, teleologic and analogic models].

    PubMed

    Yáñez Cortés, R

    1980-06-01

    Establishing the basis of the interpretative method is congruous with finding a solid basis, epistemologically speaking, for analytic theory. This basis would be the means to transform this theory into a real science, with the necessary adequation among method, act and object of knowledge. It is only from a scientific standpoint that psychoanalytic theory will be able to successfully face the reductionisms that menace it, be it biologist naturalism, with its explanations of psychic phenomena by means of mechanisms and biologic models, or the speculative ideologies, with their nucleus of technical praxis, which make it impossible for the social-factic sciences to become real sciences. We propose as the interpretative method the union of two models: on one side the teleologic one, which makes possible intelligible, contingent and variable explanations between an antecedent and a consequent, and on the other the analogic model, with its two moments, the comparative and the symbolic. These moments make possible the comparison and the union between antecedent and consequent, bearing in mind the "natural" ambiguity of the subject-object in question. The principal objective of the method, as a regulative idea in the Kantian sense, would be the search for univocity as regards the choice of one and only one sense, from all the possible senses, that "explains" the motive relationship or motive-end relationship, in order to make the interpretation scientific. This status of scientificity should obey the rules of explanation: that the interpretations be derived effectively from the presupposed theory, that they really explain what they claim to explain, and that they are not contradictory or contrary at the same ontologic level. We postulate that the synthesis of the two mentioned models, the teleologic-explanative and the analogic, allows us to clarify the "dark" sense of the noun interpretation and in this way the feasibility of

  17. An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge.

    PubMed

    Brownstein, Catherine A; Beggs, Alan H; Homer, Nils; Merriman, Barry; Yu, Timothy W; Flannery, Katherine C; DeChene, Elizabeth T; Towne, Meghan C; Savage, Sarah K; Price, Emily N; Holm, Ingrid A; Luquette, Lovelace J; Lyon, Elaine; Majzoub, Joseph; Neupert, Peter; McCallie, David; Szolovits, Peter; Willard, Huntington F; Mendelsohn, Nancy J; Temme, Renee; Finkel, Richard S; Yum, Sabrina W; Medne, Livija; Sunyaev, Shamil R; Adzhubey, Ivan; Cassa, Christopher A; de Bakker, Paul I W; Duzkale, Hatice; Dworzyński, Piotr; Fairbrother, William; Francioli, Laurent; Funke, Birgit H; Giovanni, Monica A; Handsaker, Robert E; Lage, Kasper; Lebo, Matthew S; Lek, Monkol; Leshchiner, Ignaty; MacArthur, Daniel G; McLaughlin, Heather M; Murray, Michael F; Pers, Tune H; Polak, Paz P; Raychaudhuri, Soumya; Rehm, Heidi L; Soemedi, Rachel; Stitziel, Nathan O; Vestecka, Sara; Supper, Jochen; Gugenmus, Claudia; Klocke, Bernward; Hahn, Alexander; Schubach, Max; Menzel, Mortiz; Biskup, Saskia; Freisinger, Peter; Deng, Mario; Braun, Martin; Perner, Sven; Smith, Richard J H; Andorf, Janeen L; Huang, Jian; Ryckman, Kelli; Sheffield, Val C; Stone, Edwin M; Bair, Thomas; Black-Ziegelbein, E Ann; Braun, Terry A; Darbro, Benjamin; DeLuca, Adam P; Kolbe, Diana L; Scheetz, Todd E; Shearer, Aiden E; Sompallae, Rama; Wang, Kai; Bassuk, Alexander G; Edens, Erik; Mathews, Katherine; Moore, Steven A; Shchelochkov, Oleg A; Trapane, Pamela; Bossler, Aaron; Campbell, Colleen A; Heusel, Jonathan W; Kwitek, Anne; Maga, Tara; Panzer, Karin; Wassink, Thomas; Van Daele, Douglas; Azaiez, Hela; Booth, Kevin; Meyer, Nic; Segal, Michael M; Williams, Marc S; Tromp, Gerard; White, Peter; Corsmeier, Donald; Fitzgerald-Butt, Sara; Herman, Gail; Lamb-Thrush, Devon; McBride, Kim L; Newsom, David; Pierson, Christopher R; Rakowsky, Alexander T; Maver, Aleš; Lovrečić, Luca; Palandačić, Anja; Peterlin, Borut; Torkamani, Ali; Wedell, Anna; Huss, Mikael; Alexeyenko, Andrey; Lindvall, Jessica M; Magnusson, Måns; Nilsson, Daniel; Stranneheim, Henrik; Taylan, Fulya; Gilissen, Christian; Hoischen, Alexander; van Bon, Bregje; Yntema, Helger; Nelen, Marcel; Zhang, Weidong; Sager, Jason; Zhang, Lu; Blair, Kathryn; Kural, Deniz; Cariaso, Michael; Lennon, Greg G; Javed, Asif; Agrawal, Saloni; Ng, Pauline C; Sandhu, Komal S; Krishna, Shuba; Veeramachaneni, Vamsi; Isakov, Ofer; Halperin, Eran; Friedman, Eitan; Shomron, Noam; Glusman, Gustavo; Roach, Jared C; Caballero, Juan; Cox, Hannah C; Mauldin, Denise; Ament, Seth A; Rowen, Lee; Richards, Daniel R; San Lucas, F Anthony; Gonzalez-Garay, Manuel L; Caskey, C Thomas; Bai, Yu; Huang, Ying; Fang, Fang; Zhang, Yan; Wang, Zhengyuan; Barrera, Jorge; Garcia-Lobo, Juan M; González-Lamuño, Domingo; Llorca, Javier; Rodriguez, Maria C; Varela, Ignacio; Reese, Martin G; De La Vega, Francisco M; Kiruluta, Edward; Cargill, Michele; Hart, Reece K; Sorenson, Jon M; Lyon, Gholson J; Stevenson, David A; Bray, Bruce E; Moore, Barry M; Eilbeck, Karen; Yandell, Mark; Zhao, Hongyu; Hou, Lin; Chen, Xiaowei; Yan, Xiting; Chen, Mengjie; Li, Cong; Yang, Can; Gunel, Murat; Li, Peining; Kong, Yong; Alexander, Austin C; Albertyn, Zayed I; Boycott, Kym M; Bulman, Dennis E; Gordon, Paul M K; Innes, A Micheil; Knoppers, Bartha M; Majewski, Jacek; Marshall, Christian R; Parboosingh, Jillian S; Sawyer, Sarah L; Samuels, Mark E; Schwartzentruber, Jeremy; Kohane, Isaac S; Margulies, David M

    2014-03-25

    There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity of the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.

  18. Patient satisfaction with different interpreting methods: a randomized controlled trial.

    PubMed

    Gany, Francesca; Leng, Jennifer; Shapiro, Ephraim; Abramson, David; Motola, Ivette; Shield, David C; Changrani, Jyotsna

    2007-11-01

    Growth of the foreign-born population in the U.S. has led to increasing numbers of limited-English-proficient (LEP) patients. Innovative medical interpreting strategies, including remote simultaneous medical interpreting (RSMI), have arisen to address the language barrier. This study evaluates the impact of interpreting method on patient satisfaction. 1,276 English-, Spanish-, Mandarin-, and Cantonese-speaking patients attending the primary care clinic and emergency department of a large New York City municipal hospital were screened for enrollment in a randomized controlled trial. Language-discordant patients were randomized to RSMI or usual and customary (U&C) interpreting. Patients with language-concordant providers received usual care. Demographic and patient satisfaction questionnaires were administered to all participants. 541 patients were language-concordant with their providers and not randomized; 371 were randomized to RSMI, 167 of whom were exposed to RSMI; and 364 were randomized to U&C, 198 of whom were exposed to U&C. Patients randomized to RSMI were more likely than those with U&C to think doctors treated them with respect (RSMI 71%, U&C 64%, p < 0.05), but they did not differ in other measures of physician communication/care. In a linear regression analysis, exposure to RSMI was significantly associated with an increase in overall satisfaction with physician communication/care (beta 0.10, 95% CI 0.02-0.18, scale 0-1.0). Patients randomized to RSMI were more likely to think the interpreting method protected their privacy (RSMI 51%, U&C 38%, p < 0.05). Patients randomized to either arm of interpretation reported less comprehension and satisfaction than patients in language-concordant encounters. While not a substitute for language-concordant providers, RSMI can improve patient satisfaction and privacy among LEP patients. Implementing RSMI should be considered an important component of a multipronged approach to addressing language barriers in health care.

  19. Methods for interpreting change over time in patient-reported outcome measures.

    PubMed

    Wyrwich, K W; Norquist, J M; Lenderking, W R; Acaster, S

    2013-04-01

    Interpretation guidelines are needed for patient-reported outcome (PRO) measures' change scores to evaluate efficacy of an intervention and to communicate PRO results to regulators, patients, physicians, and providers. The 2009 Food and Drug Administration (FDA) Guidance for Industry Patient-Reported Outcomes (PRO) Measures: Use in Medical Product Development to Support Labeling Claims (hereafter referred to as the final FDA PRO Guidance) provides some recommendations for the interpretation of change in PRO scores as evidence of treatment efficacy. This article reviews the evolution of the methods and the terminology used to describe and aid in the communication of meaningful PRO change score thresholds. Anchor- and distribution-based methods have played important roles, and the FDA has recently stressed the importance of cross-sectional patient global assessments of concept as anchor-based methods for estimation of the responder definition, which describes an individual-level treatment benefit. The final FDA PRO Guidance proposes the cumulative distribution function (CDF) of responses as a useful method to depict the effect of treatments across the study population. While CDFs serve an important role, they should not be a replacement for the careful investigation of a PRO's relevant responder definition using anchor-based methods and providing stakeholders with a relevant threshold for the interpretation of change over time.
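
    To make the CDF display concrete, here is a minimal Python sketch in which all change scores are simulated and the 10-point responder threshold is a purely hypothetical anchor-based estimate; it plots empirical CDFs for two trial arms:

    ```python
    # Minimal sketch: empirical CDFs of PRO change scores by arm, with a
    # hypothetical anchor-based responder threshold marked for reference.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    arms = {"treatment": rng.normal(8, 12, 200),   # simulated change scores
            "placebo": rng.normal(3, 12, 200)}

    for label, scores in arms.items():
        xs = np.sort(scores)
        ps = np.arange(1, xs.size + 1) / xs.size   # empirical CDF
        plt.step(xs, ps, where="post", label=label)

    plt.axvline(10, linestyle="--", label="responder threshold (assumed)")
    plt.xlabel("PRO change score")
    plt.ylabel("cumulative proportion of patients")
    plt.legend()
    plt.show()
    ```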

  20. Standard setting: comparison of two methods.

    PubMed

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods, and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that with the modified Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff and norm-reference methods was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
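
    As a rough illustration of how the two cut scores arise, the following Python sketch uses simulated candidate scores and judge ratings rather than the study's data; the modified Angoff cut is taken here as the mean, across judges, of each judge's summed per-item probability estimates:

    ```python
    # Minimal sketch (assumed data): norm-referenced cut score (mean minus
    # 1 SD of candidate scores) vs a modified Angoff cut score.
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(70, 10, 78)              # simulated MCQ marks (0-100)

    norm_ref_cut = scores.mean() - scores.std(ddof=1)
    pass_rate = np.mean(scores >= norm_ref_cut)

    # Each judge estimates, per item, the probability that a minimally
    # competent candidate answers correctly (5 judges x 100 one-mark items).
    angoff = rng.uniform(0.4, 0.8, size=(5, 100))
    angoff_cut = angoff.sum(axis=1).mean()       # cut score in raw marks

    print(f"norm-reference cut {norm_ref_cut:.1f}, pass rate {pass_rate:.0%}")
    print(f"modified Angoff cut {angoff_cut:.1f} / 100")
    ```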

  1. Methods of collecting and interpreting ground-water data

    USGS Publications Warehouse

    Bentall, Ray

    1963-01-01

    Because ground water is hidden from view, ancient man could only theorize as to its sources of replenishment and its behavior. His theories held sway until the latter part of the 17th century, which marked the first experimental work to determine the source and movement of ground water. Thus founded, the science of ground-water hydrology grew slowly, and not until the 19th century is there substantial evidence of conclusions having been based on observational data. The 20th century has witnessed tremendous advances in the science: in the methods of field investigation and interpretation of collected data, in the methods of determining the hydrologic characteristics of water-bearing material, and in the methods of inventorying ground-water supplies. Now, as is true of many other disciplines, the science of ground-water hydrology is characterized by frequent advancement of new ideas and techniques, refinement of old techniques, and an increasing wealth of data awaiting interpretation. So that its widely scattered staff of professional hydrologists could keep abreast of new ideas and advances in the techniques of ground-water investigation, it has been the practice in the U.S. Geological Survey to distribute such information for immediate internal use. As the methods become better established and developed, they are described in formal publications. Six papers pertaining to widely different phases of ground-water investigation comprise this particular contribution. For the sake of clarity and conformity, the original papers have been revised and edited by the compiler.

  2. An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge

    PubMed Central

    2014-01-01

    Background There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. Results A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity of the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. Conclusions The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups. PMID:24667040

  3. Comparison of two teaching methods for cardiac arrhythmia interpretation among nursing students.

    PubMed

    Varvaroussis, Dimitrios P; Kalafati, Maria; Pliatsika, Paraskevi; Castrén, Maaret; Lott, Carsten; Xanthos, Theodoros

    2014-02-01

    The aim of this study was to compare the six-stage method (SSM) for teaching primary cardiac arrhythmia interpretation to students without basic electrocardiogram (ECG) knowledge with a descriptive teaching method in a single educational intervention. This is a randomized trial. Following a brief instructional session, undergraduate nursing students, assigned to group A (SSM) and group B (descriptive teaching method), undertook a written test in cardiac rhythm recognition immediately after the educational intervention (initial exam). Participants were also examined with an unannounced retention test (final exam) one month after instruction. Altogether 134 students completed the study. Interpretation accuracy for each cardiac arrhythmia was assessed. Mean score at the initial exam was 8.71±1.285 for group A and 8.74±1.303 for group B. Mean score at the final exam was 8.25±1.46 for group A vs 7.84±1.44 for group B. Overall, the SSM was as effective as the descriptive teaching method. The study showed that in each group bradyarrhythmias were identified correctly by more students than tachyarrhythmias. No significant difference between the two teaching methods was seen for any specific cardiac arrhythmia. The SSM effectively develops competency in interpreting common cardiac arrhythmias among students without ECG knowledge. More research is needed to support this conclusion, and the method's effectiveness must be evaluated before it is implemented with trainee groups that have preexisting basic ECG interpretation knowledge. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Reflective topical autobiography: an under utilised interpretive research method in nursing.

    PubMed

    Johnstone, M J

    1999-01-01

    Reflective topical autobiography (an autobiographical method) belongs to the genre of testimonial research and is located within the postpositivist interpretive research paradigm. Despite the (reflective) topical autobiographical method enjoying a 'rebirth' in recent years and being utilised by a range of researchers in the human and literary disciplines, it remains largely unknown and under utilised in nursing research domains. In this article it is proposed that reflective topical autobiography is an important research method in its own right, and one which promises to make a substantive contribution to the overall project of advancing nursing inquiry and knowledge. This is particularly so where nursing research shares in the affirming projects of interpretive research generally and the relatively new sociology of the emotions in particular, apropos: (i) increasing understanding of subjectivity and making subjective experiences more visible and intelligible, (ii) the search for meaning and increasing understanding of the commonality of existential human experience, and (iii) decentring the detached observer and his/her privileging of the objectivist illusion in the hierarchy of research discourses, paving the way for the admission of multiple realities and interpretations of lived experience. In this article, a coherent reflective topical autobiographical research method is advanced for use in nursing education and research contexts.

  5. Graphic methods for interpreting longitudinal dyadic patterns from repeated-measures actor-partner interdependence models.

    PubMed

    Perry, Nicholas S; Baucom, Katherine J W; Bourne, Stacia; Butner, Jonathan; Crenshaw, Alexander O; Hogan, Jasara N; Imel, Zac E; Wiltshire, Travis J; Baucom, Brian R W

    2017-08-01

    Researchers commonly use repeated-measures actor-partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce an interdependent pattern that emerges over time. Additionally, positive within-person, or actor, effects are commonly misinterpreted as indicating growth from one time point to the next when they actually represent decline. We suggest that change-as-outcome RM-APIMs and vector field diagrams (VFDs) can be used to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial in making VFDs using multilevel modeling. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
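
    A vector field diagram of this kind is straightforward to draw once change-as-outcome coefficients are in hand; the following Python sketch uses made-up actor and partner effects, not estimates from the article:

    ```python
    # Minimal sketch: vector field diagram (VFD) for a change-as-outcome
    # dyadic model, where each partner's predicted change depends on both
    # partners' current levels. Coefficients are illustrative only.
    import numpy as np
    import matplotlib.pyplot as plt

    a1, p1 = -0.30, 0.15    # partner 1: actor and partner effects on change
    a2, p2 = -0.25, 0.20    # partner 2: actor and partner effects on change

    x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
    dx = a1 * x + p1 * y    # predicted change in partner 1's variable
    dy = p2 * x + a2 * y    # predicted change in partner 2's variable

    plt.quiver(x, y, dx, dy)
    plt.xlabel("partner 1 level")
    plt.ylabel("partner 2 level")
    plt.title("Dyadic change vector field (illustrative)")
    plt.show()
    ```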

  6. 42 CFR 37.52 - Method of obtaining definitive interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of obtaining definitive interpretations. 37.52 Section 37.52 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICAL... obtained until a consensus involving two or more readings in the same major category is obtained. [43 FR...

  7. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.

  8. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  9. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  10. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  11. 40 CFR Appendix H to Part 50 - Interpretation of the 1-Hour Primary and Secondary National Ambient Air Quality Standards for Ozone

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and Secondary National Ambient Air Quality Standards for Ozone H Appendix H to Part 50 Protection of... Secondary National Ambient Air Quality Standards for Ozone 1. General This appendix explains how to... associated examples are contained in the “Guideline for Interpretation of Ozone Air Quality Standards.” For...

  12. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
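
    Schematically, MR-Egger is a weighted regression of variant-outcome associations on variant-exposure associations in which the unconstrained intercept tests directional pleiotropy and the slope estimates the causal effect; below is a minimal Python sketch on simulated summary statistics:

    ```python
    # Minimal sketch (simulated summary data, not a real analysis):
    # MR-Egger regression with an unconstrained intercept.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    bx = rng.uniform(0.05, 0.30, 25)               # SNP-exposure associations
    by = 0.4 * bx + rng.normal(0.02, 0.02, 25)     # SNP-outcome, with pleiotropy
    se_by = np.full(25, 0.02)                      # outcome standard errors

    fit = sm.WLS(by, sm.add_constant(bx), weights=1 / se_by**2).fit()
    intercept, slope = fit.params
    print(f"pleiotropy intercept {intercept:.3f}, causal slope {slope:.3f}")
    ```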

  13. [Case-non case studies: Principles, methods, bias and interpretation].

    PubMed

    Faillie, Jean-Luc

    2017-10-31

    Case-non case studies belong to the methods assessing drug safety by analyzing the disproportionality of notifications of adverse drug reactions in pharmacovigilance databases. First used in the 1980s, this design has seen a significant increase in use over the last few decades. The principle of the case-non case study is to compare drug exposure in cases of a studied adverse reaction with that in cases of other reported adverse reactions, called "non cases". Results are presented in the form of a reporting odds ratio (ROR), the interpretation of which makes it possible to identify drug safety signals. This article describes the principle of the case-non case study, the method of calculating the ROR and its confidence interval, the different modalities of analysis, and how to interpret its results with regard to the advantages and limitations of this design. Copyright © 2017 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
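
    The ROR computation itself is elementary; the sketch below uses illustrative report counts and the standard large-sample confidence interval on the log scale:

    ```python
    # Minimal sketch (illustrative counts): reporting odds ratio (ROR) with
    # a 95% confidence interval from a 2x2 table of spontaneous reports.
    import math

    a = 28     # studied reaction reported with the studied drug (cases)
    b = 112    # other reactions with the studied drug (non cases)
    c = 530    # studied reaction with all other drugs
    d = 9800   # other reactions with all other drugs

    ror = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se_log)
    hi = math.exp(math.log(ror) + 1.96 * se_log)
    print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```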

  14. A Method for the Interpretation of Flow Cytometry Data Using Genetic Algorithms.

    PubMed

    Angeletti, Cesar

    2018-01-01

    Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed by introducing FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. The present results suggest that machine learning systems hold great promise for the interpretation of hematological flow cytometry data.
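
    As a generic illustration of the technique, and not a reconstruction of the study's algorithms 018330 or 025886, the sketch below runs a mutation-only genetic algorithm that evolves a linear discriminant on toy two-class feature vectors:

    ```python
    # Minimal sketch: a simple genetic algorithm (selection + mutation)
    # evolving a linear classifier on simulated two-class data.
    import numpy as np

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0.0, 1, (100, 4)),   # class 0 "events"
                   rng.normal(1.5, 1, (100, 4))])  # class 1 "events"
    y = np.array([0] * 100 + [1] * 100)

    def fitness(w):
        pred = (X @ w[:-1] + w[-1]) > 0            # last gene is the bias
        return np.mean(pred == y)                  # training accuracy

    pop = rng.normal(0, 1, (40, 5))                # population of genomes
    for _ in range(60):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-20:]]    # keep the fittest half
        children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.1, (20, 5))
        pop = np.vstack([parents, children])       # elitism plus mutation

    best = max(pop, key=fitness)
    print(f"best training accuracy: {fitness(best):.2%}")
    ```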

  15. Standard methods for open hole tension testing of textile composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.; Masters, J. E.

    1995-01-01

    Sizing effects have been investigated by comparing the open hole failure strengths of each of the four different braided architectures as a function of specimen thickness, hole diameter, and the ratio of specimen width to hole diameter. The data used to make these comparisons were primarily generated by Boeing. Direct comparisons of Boeing's results were made with experiments conducted at West Virginia University whenever possible. Indirect comparisons were made with test results for other 2-D braids and 3-D weaves tested by Boeing and Lockheed. In general, failure strength was found to decrease with increasing plate thickness, increase with decreasing hole size, and decrease with decreasing width-to-diameter ratio. The interpretation of the sensitivity to each of these geometrical parameters was complicated by scatter in the test data. For open hole tension testing of textile composites, the use of standard testing practices employed by industry, such as ASTM D5766 - Standard Test Method for Open Hole Tensile Strength of Polymer Matrix Composite Laminates, should provide adequate results for material comparison studies.

  16. Interpretative commenting.

    PubMed

    Vasikaran, Samuel

    2008-08-01

    * Clinical laboratories should be able to offer interpretation of the results they produce.
    * At a minimum, contact details for interpretative advice should be available on laboratory reports.
    * Interpretative comments may be verbal or written and printed.
    * Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferred to an inappropriate or dangerous comment.
    * Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
    * Standard tied comments ("canned" comments) can have some limited use.
    * Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available.
    * Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
    * Audit of comments and continued professional development of personnel providing them are important for quality assurance.

  17. Interpretative Commenting

    PubMed Central

    Vasikaran, Samuel

    2008-01-01

    Summary
    * Clinical laboratories should be able to offer interpretation of the results they produce.
    * At a minimum, contact details for interpretative advice should be available on laboratory reports.
    * Interpretative comments may be verbal or written and printed.
    * Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferred to an inappropriate or dangerous comment.
    * Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
    * Standard tied comments (“canned” comments) can have some limited use.
    * Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available.
    * Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
    * Audit of comments and continued professional development of personnel providing them are important for quality assurance. PMID:18852867

  18. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  19. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  20. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  1. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  2. 10 CFR 20.1006 - Interpretations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Interpretations. 20.1006 Section 20.1006 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION General Provisions § 20.1006 Interpretations. Except as specifically authorized by the Commission in writing, no interpretation of the meaning of the...

  3. Parity among interpretation methods of MLEE patterns and disparity among clustering methods in epidemiological typing of Candida albicans.

    PubMed

    Boriollo, Marcelo Fabiano Gomes; Rosa, Edvaldo Antonio Ribeiro; Gonçalves, Reginaldo Bruno; Höfling, José Francisco

    2006-03-01

    The typing of C. albicans by MLEE (multilocus enzyme electrophoresis) is dependent on the interpretation of enzyme electrophoretic patterns, and the study of the epidemiological relationships of these yeasts can be conducted by cluster analysis. Therefore, the aims of the present study were to first determine the discriminatory power of genetic interpretation (deduction of the allelic composition of diploid organisms) and numerical interpretation (mere determination of the presence and absence of bands) of MLEE patterns, and then to determine the concordance (Pearson product-moment correlation coefficient) and similarity (Jaccard similarity coefficient) of the groups of strains generated by three cluster analysis models, and the discriminatory power of such models as well [model A: genetic interpretation, genetic distance matrix of Nei (d(ij)) and UPGMA dendrogram; model B: genetic interpretation, Dice similarity matrix (S(D1)) and UPGMA dendrogram; model C: numerical interpretation, Dice similarity matrix (S(D2)) and UPGMA dendrogram]. MLEE was found to be a powerful and reliable tool for the typing of C. albicans due to its high discriminatory power (>0.9). Discriminatory power indicated that numerical interpretation is a method capable of discriminating a greater number of strains (47 versus 43 subtypes), but also pointed to model B as a method capable of providing a greater number of groups, suggesting its use for the typing of C. albicans by MLEE and cluster analysis. Very good agreement was only observed between the elements of the matrices S(D1) and S(D2), but a large majority of the groups generated in the three UPGMA dendrograms showed similarity S(J) between 4.8% and 75%, suggesting disparities in the conclusions obtained by the cluster assays.
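
    Model C above (numerical interpretation, Dice similarity, UPGMA) maps directly onto standard tooling; here is a minimal Python sketch using random presence/absence band patterns in place of real gels:

    ```python
    # Minimal sketch: numerical interpretation of MLEE patterns as
    # presence/absence vectors, Dice distances, and a UPGMA dendrogram.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(4)
    bands = rng.integers(0, 2, size=(12, 30)).astype(bool)  # 12 strains x 30 bands

    dice_dist = pdist(bands, metric="dice")        # 1 - Dice similarity
    tree = linkage(dice_dist, method="average")    # average linkage = UPGMA
    dendrogram(tree, labels=[f"strain{i}" for i in range(12)])
    plt.show()
    ```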

  4. Electrocardiographic interpretation skills of cardiology residents: are they competent?

    PubMed

    Sibbald, Matthew; Davies, Edward G; Dorian, Paul; Yu, Eric H C

    2014-12-01

    Achieving competency at electrocardiogram (ECG) interpretation among cardiology subspecialty residents has traditionally focused on interpreting a target number of ECGs during training. However, there is little evidence to support this approach. Further, there are no data documenting the competency of ECG interpretation skills among cardiology residents, who become de facto the gold standard in their practice communities. We tested 29 Cardiology residents from all 3 years in a large training program using a set of 20 ECGs collected from a community cardiology practice over a 1-month period. Residents interpreted half of the ECGs using a standard analytic framework, and half using their own approach. Residents were scored on the number of correct and incorrect diagnoses listed. Overall diagnostic accuracy was 58%. Of 6 potentially life-threatening diagnoses, residents missed 36% (123 of 348) including hyperkalemia (81%), long QT (52%), complete heart block (35%), and ventricular tachycardia (19%). Residents provided additional inappropriate diagnoses on 238 ECGs (41%). Diagnostic accuracy was similar between ECGs interpreted using an analytic framework vs ECGs interpreted without an analytic framework (59% vs 58%; F(1,1333) = 0.26; P = 0.61). Cardiology resident proficiency at ECG interpretation is suboptimal. Despite the use of an analytic framework, there remain significant deficiencies in ECG interpretation among Cardiology residents. A more systematic method of addressing these important learning gaps is urgently needed. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  5. Method Designed to Respect Molecular Heterogeneity Can Profoundly Correct Present Data Interpretations for Genome-Wide Expression Analysis

    PubMed Central

    Chen, Chih-Hao; Hsu, Chueh-Lin; Huang, Shih-Hao; Chen, Shih-Yuan; Hung, Yi-Lin; Chen, Hsiao-Rong; Wu, Yu-Chung

    2015-01-01

    Although genome-wide expression analysis has become a routine tool for gaining insight into molecular mechanisms, extraction of information remains a major challenge. It has been unclear why standard statistical methods, such as the t-test and ANOVA, often lead to low levels of reproducibility, how likely applying fold-change cutoffs to enhance reproducibility is to miss key signals, and how adversely using such methods has affected data interpretations. We broadly examined expression data to investigate the reproducibility problem and discovered that molecular heterogeneity, a biological property of genetically different samples, has been improperly handled by the statistical methods. Here we give a mathematical description of the discovery and report the development of a statistical method, named HTA, for better handling molecular heterogeneity. We broadly demonstrate the improved sensitivity and specificity of HTA over the conventional methods and show that using fold-change cutoffs has lost much information. We illustrate the especial usefulness of HTA for heterogeneous diseases, by applying it to existing data sets of schizophrenia, bipolar disorder and Parkinson’s disease, and show it can abundantly and reproducibly uncover disease signatures not previously detectable. Based on 156 biological data sets, we estimate that the methodological issue has affected over 96% of expression studies and that HTA can profoundly correct 86% of the affected data interpretations. The methodological advancement can better facilitate systems understandings of biological processes, render biological inferences that are more reliable than they have hitherto been and engender translational medical applications, such as identifying diagnostic biomarkers and drug prediction, which are more robust. PMID:25793610

  6. Methods proposed to achieve air quality standards for mobile sources and technology surveillance.

    PubMed Central

    Piver, W T

    1975-01-01

    The methods proposed to meet the 1975 Standards of the Clean Air Act for mobile sources are alternative antiknocks, exhaust emission control devices, and alternative engine designs. Technology surveillance analysis applied to this situation is an attempt to anticipate potential public and environmental health problems from these methods before they happen. Components of this analysis are exhaust emission characterization, environmental transport and transformation, levels of public and environmental exposure, and the influence of economics on the selection of alternative methods. The purpose of this presentation is to show trends resulting from the interaction of these different components. These trends cannot be interpreted as explicit predictions of what will actually happen. Such an analysis is necessary so that public and environmental health officials have the opportunity to act on potential problems before they become manifest. PMID:50944

  7. Issues and Methods for Standard-Setting.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    Issues involved in standard setting along with methods for standard setting are reviewed, with specific reference to their relevance for criterion referenced testing. Definitions are given of continuum and state models, and traditional and normative standard setting procedures. Since continuum models are considered more appropriate for criterion…

  8. ICADx: interpretable computer aided diagnosis of breast masses

    NASA Astrophysics Data System (ADS)

    Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man

    2018-02-01

    In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability for classifying breast masses. Recently, deep learning technology has been successfully applied to medical image analysis, including CADx. Existing deep learning based CADx approaches, however, are limited in their ability to explain the diagnostic decision. In real clinical practice, clinical decisions should be accompanied by a reasonable explanation, so current deep learning approaches to CADx are limited for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised with a generative adversarial network, which consists of an interpretable diagnosis network and a synthetic lesion generative network that learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that both networks improve. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the proposed ICADx framework could provide interpretability of masses as well as mass classification. This was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the proposed ICADx framework could be a promising approach to developing CADx systems.

  9. Application of ISO 9000 Standards to Education and Training. Interpretation and Guidelines in a European Perspective. CEDEFOP Document.

    ERIC Educational Resources Information Center

    Van den Berghe, Wouter

    This report brings together European experience on the interpretation and implementation of ISO 9000 in education and training (ET) environments. Chapter 1 discusses the importance of quality concepts in ET and summarizes key concepts of total quality management (TQM) and its relevance for ET. Chapter 2 introduces the ISO 9000 standards. It…

  10. Informatics and Standards for Nanomedicine Technology

    PubMed Central

    Thomas, Dennis G.; Klaessig, Fred; Harper, Stacey L.; Fritts, Martin; Hoover, Mark D.; Gaheen, Sharon; Stokes, Todd H.; Reznik-Zellen, Rebecca; Freund, Elaine T.; Klemm, Juli D.; Paik, David S.; Baker, Nathan A.

    2011-01-01

    There are several issues to be addressed concerning the management and effective use of information (or data), generated from nanotechnology studies in biomedical research and medicine. These data are large in volume, diverse in content, and are beset with gaps and ambiguities in the description and characterization of nanomaterials. In this work, we have reviewed three areas of nanomedicine informatics: information resources; taxonomies, controlled vocabularies, and ontologies; and information standards. Informatics methods and standards in each of these areas are critical for enabling collaboration, data sharing, unambiguous representation and interpretation of data, semantic (meaningful) search and integration of data; and for ensuring data quality, reliability, and reproducibility. In particular, we have considered four types of information standards in this review, which are standard characterization protocols, common terminology standards, minimum information standards, and standard data communication (exchange) formats. Currently, due to gaps and ambiguities in the data, it is also difficult to apply computational methods and machine learning techniques to analyze, interpret and recognize patterns in data that are high dimensional in nature, and also to relate variations in nanomaterial properties to variations in their chemical composition, synthesis, characterization protocols, etc. Progress towards resolving the issues of information management in nanomedicine using informatics methods and standards discussed in this review will be essential to the rapidly growing field of nanomedicine informatics. PMID:21721140

  11. Interpretive computer simulator for the NASA Standard Spacecraft Computer-2 (NSSC-2)

    NASA Technical Reports Server (NTRS)

    Smith, R. S.; Noland, M. S.

    1979-01-01

    An Interpretive Computer Simulator (ICS) for the NASA Standard Spacecraft Computer-II (NSSC-II) was developed as a code verification and testing tool for the Annular Suspension and Pointing System (ASPS) project. The simulator is written in the higher level language PASCAL and implemented on the CDC CYBER series computer system. It is supported by a meta-assembler, a linkage loader for the NSSC-II, and a utility library to meet the application requirements. The architectural design of the NSSC-II is that of an IBM System/360 (S/360), and it supports all but four instructions of the S/360 standard instruction set. The structural design of the ICS is described with emphasis on the design differences between it and the NSSC-II hardware. The program flow is diagrammed, with the function of each procedure being defined; the instruction implementation is discussed in broad terms; and the instruction timings used in the ICS are listed. An example of the steps required to process an assembly level language program on the ICS is included. The example illustrates the control cards necessary to assemble, load, and execute assembly language code; the sample program to be executed; the executable load module produced by the loader; and the resulting output produced by the ICS.
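
    As a generic illustration of what interpretive simulation means, rather than of the NSSC-II/S-360 instruction set itself, the sketch below runs a fetch-decode-execute loop over a toy accumulator machine while tallying assumed instruction timings, much as an ICS accumulates cycle counts:

    ```python
    # Minimal sketch: a toy interpretive simulator with a cycle tally.
    # The instruction set and timings are invented for illustration.
    CYCLES = {"LOAD": 2, "ADD": 2, "STORE": 3, "HALT": 1}

    def interpret(program):
        acc, memory, pc, cycles = 0, {}, 0, 0
        while True:                      # fetch-decode-execute loop
            op, *args = program[pc]
            cycles += CYCLES[op]         # charge this instruction's timing
            if op == "LOAD":
                acc = args[0]
            elif op == "ADD":
                acc += args[0]
            elif op == "STORE":
                memory[args[0]] = acc
            elif op == "HALT":
                return memory, cycles
            pc += 1

    mem, cycles = interpret([("LOAD", 40), ("ADD", 2), ("STORE", "x"), ("HALT",)])
    print(mem, cycles)                   # {'x': 42} 8
    ```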

  12. Interpretation of Series National Standards of China on “Greenhouse Gas Emissions Accounting and Reporting for Enterprises”

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Zong, Jianfang; Guo, Huiting; Sun, Liang; Liu, Mei

    2018-05-01

    Standardization is playing an increasingly important role in reducing greenhouse gas emissions and in climate change adaptation, especially in the “three” aspects of greenhouse gas emission management (measurement, reporting, verification). Standardization has become one of the most important ways of mitigating global climate change. The Standardization Administration of China (SAC) has taken many productive measures to actively promote standardization work addressing climate change. In April 2014, SAC officially approved the establishment of the “National Carbon Emission Management Standardization Technical Committee”. In November 2015, SAC officially issued the first 11 national standards on carbon management, comprising a general guideline together with the requirements for greenhouse gas emissions accounting and reporting in 10 sectors (power generation, power grid, iron and steel, chemical engineering, electrolytic aluminum, magnesium smelting, plate glass, cement, ceramics and civil aviation), which together propose unified requirements on “what to calculate and how to calculate” for enterprise greenhouse gas emissions. This paper focuses on a detailed interpretation of the main contents of these first 11 national standards, so as to provide technical support for users of the standards and to comprehensively promote greenhouse gas emission reduction at the enterprise level.

  13. [How to Interpret and Use Routine Laboratory Data--Our Methods to Interpret Routine Laboratory Data--Chairmen's Introductory Remarks].

    PubMed

    Honda, Takayuki; Tozuka, Minoru

    2015-09-01

    In the reversed clinicopathological conference (R-CPC), three specialists in laboratory medicine interpreted routine laboratory data independently in order to understand the detailed state of a patient. R-CPC is an educational method for using laboratory data appropriately; interpreting such data is also important for selecting differential diagnoses in the process of clinical reasoning, alongside the present illness and physical examination. Routine laboratory tests can be performed repeatedly at a relatively low cost, and time-series analysis of their results is possible. Interpretation of routine laboratory data proceeds much like a physical examination: general findings are checked first, and then the state of each organ is examined. Although routine laboratory tests cost little, we can gain much more information about the patient from them than from physical examinations.

  14. Interpretation and classification of microvolt T wave alternans tests

    NASA Technical Reports Server (NTRS)

    Bloomfield, Daniel M.; Hohnloser, Stefan H.; Cohen, Richard J.

    2002-01-01

    Measurement of microvolt-level T wave alternans (TWA) during routine exercise stress testing now is possible as a result of sophisticated noise reduction techniques and analytic methods that have become commercially available. Even though this technology is new, the available data suggest that microvolt TWA is a potent predictor of arrhythmia risk in diverse disease states. As this technology becomes more widely available, physicians will be called upon to interpret microvolt TWA tracings. This review seeks to establish uniform standards for the clinical interpretation of microvolt TWA tracings.

  15. Standard methods for sampling North American freshwater fishes

    USGS Publications Warehouse

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.

  16. A Method for Improved Interpretation of "Spot" Biomarker Data ...

    EPA Pesticide Factsheets

    The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  17. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include various graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also, metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated, and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality of the NDE thermal images and to quantify them when necessary.

  18. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also, metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated, and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality of the NDE thermal images and to quantify them when necessary.

  19. Blinded interpretation of study results can feasibly and effectively diminish interpretation bias.

    PubMed

    Järvinen, Teppo L N; Sihvonen, Raine; Bhandari, Mohit; Sprague, Sheila; Malmivaara, Antti; Paavola, Mika; Schünemann, Holger J; Guyatt, Gordon H

    2014-07-01

    Controversial and misleading interpretation of data from randomized trials is common. How to avoid misleading interpretation has received little attention. Herein, we describe two applications of an approach that involves blinded interpretation of the results by study investigators. The approach involves developing two interpretations of the results on the basis of a blinded review of the primary outcome data (experimental treatment A compared with control treatment B). One interpretation assumes that A is the experimental intervention and another assumes that A is the control. After agreeing that there will be no further changes, the investigators record their decisions and sign the resulting document. The randomization code is then broken, the correct interpretation chosen, and the manuscript finalized. Review of the document by an external authority before finalization can provide another safeguard against interpretation bias. We found the blinded preparation of a summary of data interpretation described in this article practical, efficient, and useful. Blinded data interpretation may decrease the frequency of misleading data interpretation. Widespread adoption of blinded data interpretation would be greatly facilitated were it added to the minimum set of recommendations outlining proper conduct of randomized controlled trials (eg, the Consolidated Standards of Reporting Trials statement). Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Towards a physical interpretation of the entropic lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Malaspinas, Orestis; Deville, Michel; Chopard, Bastien

    2008-12-01

    The entropic lattice Boltzmann method (ELBM) is one among several different versions of the lattice Boltzmann method for the simulation of hydrodynamics. The collision term of the ELBM is characterized by a nonincreasing H function, guaranteed by a variable relaxation time. We propose here an analysis of the ELBM using the Chapman-Enskog expansion. We show that it can be interpreted as some kind of subgrid model, where viscosity correction scales like the strain rate tensor. We confirm our analytical results by the numerical computations of the relaxation time modifications on the two-dimensional dipole-wall interaction benchmark.
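
    The core entropic step reduces to one-dimensional root finding: locate the nontrivial over-relaxation alpha at which the H function returns to its pre-collision value. In the sketch below, the D1Q3 weights are standard, while the populations and bracketing interval are illustrative:

    ```python
    # Minimal sketch: solving H(f + alpha*(feq - f)) = H(f) for the
    # nontrivial root alpha, which bounds the ELBM relaxation.
    import numpy as np
    from scipy.optimize import brentq

    w = np.array([1/6, 2/3, 1/6])          # D1Q3 lattice weights

    def H(f):
        return np.sum(f * np.log(f / w))   # discrete H function

    f = np.array([0.20, 0.62, 0.18])       # post-streaming populations (toy)
    feq = np.array([0.17, 0.66, 0.17])     # local equilibrium, same density

    g = lambda a: H(f + a * (feq - f)) - H(f)
    alpha = brentq(g, 1.1, 4.0)            # alpha = 0 is the trivial root
    print(f"entropic over-relaxation bound alpha = {alpha:.3f}")
    ```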

  1. 48 CFR 9904.402-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.402-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.402-61 Interpretation. (a) 9904.402, Cost Accounting...

  2. Direct toxicity assessment - Methods, evaluation, interpretation.

    PubMed

    Gruiz, Katalin; Fekete-Kertész, Ildikó; Kunglné-Nagy, Zsuzsanna; Hajdu, Csilla; Feigl, Viktória; Vaszita, Emese; Molnár, Mónika

    2016-09-01

    Direct toxicity assessment (DTA) results provide the scale of the actual adverse effect of contaminated environmental samples. DTA results are used in environmental risk management of contaminated water, soil and waste, without explicitly translating the results into chemical concentration. The end points are the same as in environmental toxicology in general, i.e. inhibition rate, decrease in the growth rate or in yield, and the 'no effect' or the 'lowest effect' measurement points of the sample dilution-response curve. The measurement unit cannot be a concentration, since the contaminants and their content in the sample are unknown. Thus toxicity is expressed as the sample proportion causing a certain scale of inhibition or no inhibition. Another option for characterizing the scale of toxicity of an environmental sample is equivalencing. Toxicity equivalencing is an interpretation tool which enables the toxicity of unknown mixtures of chemicals to be converted into the concentration of an equivalently toxic reference substance. Toxicity equivalencing (i.e. expressing the toxicity of unknown contaminants as the concentration of the reference) makes DTA results better understandable for non-ecotoxicologists and other professionals educated and thinking based on the chemical model. This paper describes and discusses the role, the principles, the methodology and the interpretation of direct toxicity assessment (DTA) with the aim to contribute to the understanding of the necessity to integrate DTA results into environmental management of contaminated soil and water. The paper also introduces the benefits of the toxicity equivalency method. The use of DTA is illustrated through two case studies. The first case study focuses on DTA of treated wastewater with the aim to characterize the treatment efficacy of a biological wastewater treatment plant by frequent bioassaying. The second case study applied DTA to investigate the cover layers of two bauxite residue (red mud
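
    A minimal sketch of the toxicity equivalencing idea, with a made-up reference calibration and sample measurement: interpolate the reference dose-response at the sample's measured inhibition, then scale by the tested dilution:

    ```python
    # Minimal sketch: converting a sample's measured inhibition into an
    # equivalent concentration of a reference toxicant. All values invented.
    import numpy as np

    ref_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # reference, mg/L
    ref_inhib = np.array([5, 18, 42, 71, 93])         # inhibition, %

    sample_inhibition = 55.0   # measured for a 50% dilution of the sample
    sample_fraction = 0.5

    # Interpolate the reference curve (linear in log concentration), then
    # scale the equally toxic concentration up to the undiluted sample.
    equiv = np.exp(np.interp(sample_inhibition, ref_inhib, np.log(ref_conc)))
    equiv_undiluted = equiv / sample_fraction
    print(f"sample toxicity ~ {equiv_undiluted:.2f} mg/L reference equivalents")
    ```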

  3. 48 CFR 9904.403-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.403-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.403-61 Interpretation. (a) Questions have arisen as to...

  4. 48 CFR 9904.401-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-61 Section 9904.401-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.401-61 Interpretation. (a) 9904.401, Cost Accounting... accounting practices used in accumulating and reporting costs.” (b) In estimating the cost of direct material...

  5. A method for operative quantitative interpretation of multispectral images of biological tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.

    2013-10-01

    A method has been developed for the rapid retrieval of spatial distributions of biophysical parameters of a biological tissue from a multispectral image of the tissue. The method is based on multiple regressions between linearly independent components of the diffuse reflection spectrum of the tissue and the unknown parameters. The possibilities of the method are illustrated by the example of determining biophysical parameters of the skin (concentrations of melanin, hemoglobin and bilirubin, blood oxygenation, and the scattering coefficient of the tissue). Examples of quantitative interpretation of experimental data are presented.
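
    A minimal sketch of this kind of retrieval scheme, assuming scikit-learn and a training set of spectra with known parameters (all names hypothetical): it pairs a linear decomposition of the reflectance spectra with multiple regression onto the biophysical parameters, in the spirit of, though not identical to, the authors' method.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def fit_retrieval_model(spectra, params, n_components=5):
            # spectra: (n_samples, n_wavelengths) diffuse reflectance spectra;
            # params: (n_samples, n_parameters), e.g. columns for melanin,
            # hemoglobin, bilirubin, oxygenation and scattering coefficient.
            pca = PCA(n_components=n_components).fit(spectra)
            reg = LinearRegression().fit(pca.transform(spectra), params)
            # Return a predictor that maps a spectrum (or a stack of pixel
            # spectra) to the biophysical parameters.
            return lambda s: reg.predict(pca.transform(np.atleast_2d(s)))

        # Applying the returned predictor pixel by pixel to a multispectral
        # image yields spatial maps of each parameter.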

  6. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
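
    A small, hedged example of the kind of calculation the report describes: a t-based confidence interval on mean spike recovery (bias), using SciPy; the data values are invented.

        import numpy as np
        from scipy import stats

        def mean_recovery_ci(recoveries_percent, confidence=0.95):
            # Two-sided t-based confidence interval on mean spike recovery.
            x = np.asarray(recoveries_percent, dtype=float)
            n = x.size
            sem = x.std(ddof=1) / np.sqrt(n)
            half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
            return x.mean() - half, x.mean() + half

        # If the interval excludes 100% recovery, the related environmental
        # data may need to be qualified for bias before interpretation.
        lo, hi = mean_recovery_ci([92.0, 97.5, 101.0, 88.5, 95.0])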

  7. Interpretations

    NASA Astrophysics Data System (ADS)

    Bellac, Michel Le

    2014-11-01

    Although nobody can question the practical efficiency of quantum mechanics, there remains the serious question of its interpretation. As Valerio Scarani puts it, "We do not feel at ease with the indistinguishability principle (that is, the superposition principle) and some of its consequences." Indeed, this principle which pervades the quantum world is in stark contradiction with our everyday experience. From the very beginning of quantum mechanics, a number of physicists--but not the majority of them!--have asked the question of its "interpretation". One may simply deny that there is a problem: according to proponents of the minimalist interpretation, quantum mechanics is self-sufficient and needs no interpretation. The point of view held by a majority of physicists, that of the Copenhagen interpretation, will be examined in Section 10.1. The crux of the problem lies in the status of the state vector introduced in the preceding chapter to describe a quantum system, which is no more than a symbolic representation for the Copenhagen school of thought. Conversely, one may try to attribute some "external reality" to this state vector, that is, a correspondence between the mathematical description and the physical reality. In this latter case, it is the measurement problem which is brought to the fore. In 1932, von Neumann was first to propose a global approach, in an attempt to build a purely quantum theory of measurement examined in Section 10.2. This theory still underlies modern approaches, among them those grounded on decoherence theory, or on the macroscopic character of the measuring apparatus: see Section 10.3. Finally, there are non-standard interpretations such as Everett's many worlds theory or the hidden variables theory of de Broglie and Bohm (Section 10.4). Note, however, that this variety of interpretations has no bearing whatsoever on the practical use of quantum mechanics. There is no controversy on the way we should use quantum mechanics!

  8. 48 CFR 9904.409-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.409-61 Section 9904.409-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.409-61 Interpretation. [Reserved] ...

  9. 48 CFR 9904.407-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.407-61 Section 9904.407-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.407-61 Interpretation. [Reserved] ...

  10. 48 CFR 9904.405-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.405-61 Section 9904.405-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.405-61 Interpretation. [Reserved] ...

  11. 48 CFR 9904.410-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.410-61 Section 9904.410-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.410-61 Interpretation. [Reserved] ...

  12. 48 CFR 9904.404-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.404-61 Section 9904.404-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.404-61 Interpretation. [Reserved] ...

  13. 48 CFR 9904.408-61 - Interpretation. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...] 9904.408-61 Section 9904.408-61 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.408-61 Interpretation. [Reserved] ...

  14. Standardized Methods for Electronic Shearography

    NASA Technical Reports Server (NTRS)

    Lansing, Matthew D.

    1997-01-01

    Research was conducted in the development of operating procedures and standard methods to evaluate fiber reinforced composite materials, bonded or sprayed insulation, coatings, and laminated structures with MSFC electronic shearography systems. Optimal operating procedures were developed for the Pratt and Whitney Electronic Holography/Shearography Inspection System (EH/SIS) operating in shearography mode, as well as the Laser Technology, Inc. (LTI) SC-4000 and Ettemeyer SHS-94 ISTRA shearography systems. Operating practices for exciting the components being inspected were studied, including optimal methods for transient heating with heat lamps and other methods as appropriate to enhance inspection capability.

  15. Middle Grade Students' Interpretations of Contour Maps

    ERIC Educational Resources Information Center

    Carter, Glenda; Cook, Michelle; Park, John C.; Wiebe, Eric N.; Butler, Susan M.

    2008-01-01

    This study examined eighth graders' approach to three tasks implemented to assist students with learning to interpret contour maps. Students' approach to and interpretation of these three tasks were analyzed qualitatively. When students were rank ordered according to their scores on a standardized test of spatial ability, the Minnesota Paper Form…

  16. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals

    PubMed Central

    Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M.; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M.

    2017-01-01

    There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers’ interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff’s α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3–7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers. PMID:29113044
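
    Krippendorff's α for a raters-by-units ratings matrix can be computed with the third-party krippendorff package (assumed here; pip install krippendorff). The ratings below are invented for illustration.

        import numpy as np
        import krippendorff  # third-party package: pip install krippendorff

        # Rows = raters (providers), columns = units (patient journals);
        # values = ordinal likelihood rating for one food group, nan = missing.
        ratings = np.array([
            [3, 2, 4, 1, 3],
            [2, 2, 4, 1, 2],
            [3, 1, 3, np.nan, 3],
        ])

        alpha = krippendorff.alpha(reliability_data=ratings,
                                   level_of_measurement='ordinal')
        print(f"Krippendorff's alpha = {alpha:.2f}")  # near 0: poor agreement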

  17. Standardized radiographic interpretation of thoracic tuberculosis in children.

    PubMed

    Concepcion, Nathan David P; Laya, Bernard F; Andronikou, Savvas; Daltro, Pedro A N; Sanchez, Marion O; Uy, Jacqueline Austine U; Lim, Timothy Reynold U

    2017-09-01

    There is a lack of standardized approach and terminology to classify the diverse spectrum of manifestations in tuberculosis. It is important to recognize the different clinical and radiographic patterns to guide treatment. As a result of changing epidemiology, there is considerable overlap in the radiologic presentations of primary tuberculosis and post-primary tuberculosis. In this article we promote a standardized approach in clinical and radiographic classification for children suspected of having or diagnosed with childhood tuberculosis. We propose standardized terms to diminish confusion and miscommunication, which can affect management. In addition, we present pitfalls and limitations of imaging.

  18. Does periodic lung screening of films meet standards?

    PubMed Central

    Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye

    2016-01-01

    Objective: To determine whether workers' periodic chest X-ray screenings are performed in accordance with quality standards, a responsibility of physicians; to evaluate differences in interpretation among physicians at different levels of training; and to underline the importance of standardized interpretation. Methods: Previously taken chest radiographs of 400 workers at a factory producing glass run channels were evaluated against technical and quality standards by three observers (pulmonologist, radiologist, pulmonologist assistant). There was perfect concordance between the radiologist and the pulmonologist for underpenetrated films, and between the pulmonologist and the pulmonologist assistant for overpenetrated films. Results: The pulmonologist (52%) interpreted the dose of the films as adequate more often than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%). The pulmonologist (81.7%) interpreted films as taken in the inspiratory phase less often than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%). The pulmonologist (53.5%) assessed patient positioning as symmetrical more often than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant (15.3%) most commonly reported parenchymal findings (radiologist, 2.2%; pulmonologist, 12.9%). Conclusion: Technical standards and exposure procedures need to be reorganized to improve the quality of chest radiographs. Reappraisal of all interpreters and continuous training of technicians are required. PMID:28083054

  19. A Comparative Study of Standard-Setting Methods.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Zieky, Michael J.

    1989-01-01

    The borderline group standard-setting method (BGSM), Nedelsky method (NM), and Angoff method (AM) were compared, using reading scores for 1,948 and mathematics scores for 2,191 sixth through ninth graders. The NM and AM were inconsistent with the BGSM. Passing scores were higher where students were more able. (SLD)

  20. International recommendations for electrocardiographic interpretation in athletes.

    PubMed

    Sharma, Sanjay; Drezner, Jonathan A; Baggish, Aaron; Papadakis, Michael; Wilson, Mathew G; Prutkin, Jordan M; La Gerche, Andre; Ackerman, Michael J; Borjesson, Mats; Salerno, Jack C; Asif, Irfan M; Owens, David S; Chung, Eugene H; Emery, Michael S; Froelicher, Victor F; Heidbuchel, Hein; Adamuz, Carmen; Asplund, Chad A; Cohen, Gordon; Harmon, Kimberly G; Marek, Joseph C; Molossi, Silvana; Niebauer, Josef; Pelto, Hank F; Perez, Marco V; Riding, Nathan R; Saarel, Tess; Schmied, Christian M; Shipon, David M; Stein, Ricardo; Vetter, Victoria L; Pelliccia, Antonio; Corrado, Domenico

    2018-04-21

    Sudden cardiac death (SCD) is the leading cause of mortality in athletes during sport. A variety of mostly hereditary, structural, or electrical cardiac disorders are associated with SCD in young athletes, the majority of which can be identified or suggested by abnormalities on a resting 12-lead electrocardiogram (ECG). Whether used for diagnostic or screening purposes, physicians responsible for the cardiovascular care of athletes should be knowledgeable and competent in ECG interpretation in athletes. However, in most countries a shortage of physician expertise limits wider application of the ECG in the care of the athlete. A critical need exists for physician education in modern ECG interpretation that distinguishes normal physiological adaptations in athletes from distinctly abnormal findings suggestive of underlying pathology. Since the original 2010 European Society of Cardiology recommendations for ECG interpretation in athletes, ECG standards have evolved quickly over the last decade, pushed by a growing body of scientific data that both tests proposed criteria sets and establishes new evidence to guide refinements. On 26-27 February 2015, an international group of experts in sports cardiology, inherited cardiac disease, and sports medicine convened in Seattle, Washington, to update contemporary standards for ECG interpretation in athletes. The objective of the meeting was to define and revise ECG interpretation standards based on new and emerging research and to develop a clear guide to the proper evaluation of ECG abnormalities in athletes. This statement represents an international consensus for ECG interpretation in athletes and provides expert opinion-based recommendations linking specific ECG abnormalities and the secondary evaluation for conditions associated with SCD.

  1. Developing a 'personalome' for precision medicine: emerging methods that compute interpretable effect sizes from single-subject transcriptomes.

    PubMed

    Vitali, Francesca; Li, Qike; Schissler, A Grant; Berghout, Joanne; Kenost, Colleen; Lussier, Yves A

    2017-12-18

    The development of computational methods capable of analyzing -omics data at the individual level is critical for the success of precision medicine. Although unprecedented opportunities now exist to gather data on an individual's -omics profile ('personalome'), interpreting and extracting meaningful information from single-subject -omics remain underdeveloped, particularly for quantitative non-sequence measurements, including complete transcriptome or proteome expression and metabolite abundance. Conventional bioinformatics approaches have largely been designed for making population-level inferences about 'average' disease processes; thus, they may not adequately capture and describe individual variability. Novel approaches intended to exploit a variety of -omics data are required for identifying individualized signals for meaningful interpretation. In this review-intended for biomedical researchers, computational biologists and bioinformaticians-we survey emerging computational and translational informatics methods capable of constructing a single subject's 'personalome' for predicting clinical outcomes or therapeutic responses, with an emphasis on methods that provide interpretable readouts. We find that (i) single-subject analytics of the transcriptome shows the greatest development to date, and (ii) the methods have all been validated in simulations, cross-validations or independent retrospective data sets. This survey uncovers a growing field that offers numerous opportunities for the development of novel validation methods and opens the door for future studies focusing on the interpretation of comprehensive 'personalomes' through the integration of multiple -omics, providing valuable insights into individual patient outcomes and treatments. © The Author 2017. Published by Oxford University Press.

  2. 42 CFR 37.52 - Method of obtaining definitive interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Roentgenographic Examinations Specifications for Interpretation, Classification, and Submission of Chest... described in § 37.51. If there is agreement between the two interpreters as defined in paragraph (b) of this... (with one exception noted below) are within one minor category (ILO Classification 12-point scale) of...

  3. A new method for reporting and interpreting textural composition of spawning gravel.

    Treesearch

    Fredrick B. Lotspeich; Fred H. Everest

    1981-01-01

    A new method has been developed for collecting and sorting gravel samples and interpreting gravel quality. Samples are collected with a tri-tube freeze-core device and dry-sorted by using sieves based on the Wentworth scale. An index to the quality of gravel is obtained by dividing geometric mean particle size by the sorting coefficient (a measure of the distribution of grain sizes) of...

  4. Standard Setting: A Systematic Approach to Interpreting Student Learning.

    ERIC Educational Resources Information Center

    DeMars, Christine E.; Sundre, Donna L.; Wise, Steven L.

    2002-01-01

    Describes workshops designed to set standards for freshman technological literacy at James Madison University (Virginia). Results indicated that about 30% of incoming freshmen could meet the standards set initially; by the end of the year, an additional 50-60% could meet them. Provides recommendations for standard setting in a general education…

  5. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2010-05-01

    The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s^2). The relative deviations between the amplitude measurement results of the different interferometer methods, applied simultaneously, were less than 1% in all cases.

  6. Melanins and melanogenesis: methods, standards, protocols.

    PubMed

    d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke

    2013-09-01

    Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Behind the mask of method: political orientation and constitutional interpretive preferences.

    PubMed

    Furgeson, Joshua R; Babcock, Linda; Shane, Peter M

    2008-12-01

    Debate about how to best interpret the Constitution often revolves around interpretive methodologies (e.g., originalism or expansive interpretation). This article examines whether individuals' political orientation influences the methodologies they prefer to use to interpret the Constitution. We study this proposed relationship using a survey of federal law clerks and an experimental study with college students. The survey results indicate that, compared to conservatives, liberal clerks prefer the current meaning or the most plausible appealing meaning of the constitutional text, while conservatives prefer the original meaning of the text. Liberal clerks also prefer to interpret the Constitution much more expansively. The second study manipulates the policy implications of expansive interpretation and finds this manipulation differentially affects liberals' and conservatives' expansiveness preferences.

  8. Criteria to Evaluate Interpretive Guides for Criterion-Referenced Tests

    ERIC Educational Resources Information Center

    Trapp, William J.

    2007-01-01

    This project provides a list of criteria for which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…

  9. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    PubMed

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
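
    A minimal sketch of the recalibration step, assuming a simple linear mapping between a method's measured calibrator values and the assigned values (names and numbers hypothetical):

        import numpy as np

        def recalibrate(measured_cal, assigned_cal, measured_samples):
            # Fit a line mapping a method's raw glyHb percentages for the
            # calibrators onto the assigned values, then apply it to samples.
            slope, intercept = np.polyfit(measured_cal, assigned_cal, deg=1)
            return slope * np.asarray(measured_samples) + intercept

        # Hypothetical example: a method reading low on both calibrators.
        print(recalibrate([4.2, 8.1], [5.0, 9.0], [4.5, 7.8]))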

  10. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    USGS Publications Warehouse

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.

  11. Using Standardized Interpretation of Chest Radiographs to Identify Adults with Bacterial Pneumonia--Guatemala, 2007-2012.

    PubMed

    Wortham, Jonathan M; Gray, Jennifer; Verani, Jennifer; Contreras, Carmen Lucia; Bernart, Chris; Moscoso, Fabiola; Moir, Juan Carlos; Reyes Marroquin, Emma Lissette; Castellan, Rigoberto; Arvelo, Wences; Lindblade, Kim; McCracken, John P

    2015-01-01

    Bacterial pneumonia is a leading cause of illness and death worldwide, but quantifying its burden is difficult due to insensitive diagnostics. Although the World Health Organization (WHO) protocol standardizes pediatric chest radiograph (CXR) interpretation for epidemiologic studies of bacterial pneumonia, its validity in adults is unknown. Patients (age ≥ 15 years) admitted with respiratory infections to two Guatemalan hospitals between November 2007 and March 2012 had urine and nasopharyngeal/oropharyngeal (NP/OP) swabs collected; blood cultures and CXR were also performed at physician clinical discretion. 'Any bacterial infection' was defined as a positive urine pneumococcal antigen test, isolation of a bacterial pneumonia pathogen from blood culture, or detection of an atypical bacterial pathogen by polymerase chain reaction (PCR) of NP/OP specimens. 'Viral infection' was defined as detection of viral pathogens by PCR of NP/OP specimens. CXRs were interpreted according to the WHO protocol as having 'endpoint consolidation', 'other infiltrate', or 'normal' findings. We examined associations between bacterial and viral infections and endpoint consolidation. Urine antigen and/or blood culture results were available for 721 patients with CXR interpretations; of these, 385 (53%) had endpoint consolidation and 253 (35%) had other infiltrate. Any bacterial infection was detected in 119 (17%) patients, including 106 (89%) pneumococcal infections. Any bacterial infection (diagnostic odds ratio [DOR] = 2.9; 95% confidence interval [CI]: 1.3-7.9) and pneumococcal infection (DOR = 3.4; 95% CI: 1.5-10.0) were associated with 'endpoint consolidation', but not 'other infiltrate' (DOR = 1.7; 95% CI: 0.7-4.9, and 1.7; 95% CI: 0.7-4.9, respectively). Viral infection was not significantly associated with 'endpoint consolidation', 'other infiltrate,' or 'normal' findings. 'Endpoint consolidation' was associated with 'any bacterial infection
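
    The diagnostic odds ratio used above can be computed from a 2x2 table as follows; this is a generic sketch with invented cell counts, not the study's data.

        import math

        def diagnostic_odds_ratio(tp, fp, fn, tn, z=1.96):
            # 2x2 table: rows = bacterial infection present/absent,
            # columns = endpoint consolidation present/absent on CXR.
            dor = (tp * tn) / (fp * fn)
            se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
            lo = math.exp(math.log(dor) - z * se_log)
            hi = math.exp(math.log(dor) + z * se_log)
            return dor, (lo, hi)

        # Invented cell counts, consistent only with the abstract's margins:
        dor, ci = diagnostic_odds_ratio(tp=90, fp=295, fn=29, tn=307)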

  12. Human Fecal Source Identification: Real-Time Quantitative PCR Method Standardization

    EPA Science Inventory

    Method standardization or the formal development of a protocol that establishes uniform performance benchmarks and practices is necessary for widespread adoption of a fecal source identification approach. Standardization of a human-associated fecal identification method has been...

  13. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
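
    A compact illustration of the distinction the commentary draws (values invented): SD characterizes the spread of individual measurements and suits the question "is this single result within the normal range?", while SEM characterizes the uncertainty of the estimated mean and suits comparisons of group means.

        import numpy as np

        x = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.0])  # invented biomarker values

        sd = x.std(ddof=1)          # spread of individual measurements: use
                                    # when asking whether a single new value
                                    # falls within the "normal range"
        sem = sd / np.sqrt(x.size)  # uncertainty of the estimated mean: use
                                    # when comparing the group mean against
                                    # a reference mean

        print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")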

  14. Use of the dynamic stiffness method to interpret experimental data from a nonlinear system

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Brennan, M. J.; Gatti, G.

    2018-05-01

    The interpretation of experimental data from nonlinear structures is challenging, primarily because of dependency on the type and level of excitation and coupling issues with test equipment. In this paper, the dynamic stiffness method, which is commonly used in the analysis of linear systems, is applied to interpret the data from a vibration test of a controllable compressed beam structure coupled to a test shaker. For a single mode of the system, this method facilitates the separation of mass, stiffness and damping effects, including nonlinear stiffness effects. It also allows the dynamics of the shaker to be separated from those of the structure under test. The approach needs to be used with care, and is only suitable if the nonlinear system has a response that is predominantly at the excitation frequency. For the structure under test, the raw experimental data revealed little about the underlying causes of the dynamic behaviour. However, the dynamic stiffness approach allowed the effects due to the nonlinear stiffness to be easily determined.
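
    For a single linear mode, the dynamic stiffness is K(ω) = F(ω)/X(ω) = (k − mω²) + icω, so simple line fits separate the three effects. The sketch below is a hedged illustration of that bookkeeping (not the paper's processing chain); departures of Re(K) from the fitted line as response amplitude grows would signal nonlinear stiffness.

        import numpy as np

        def fit_dynamic_stiffness(omega, F, X):
            # K(w) = F(w) / X(w); for a linear single mode
            # K = (k - m*w**2) + 1j*c*w, so Re(K) is linear in w**2
            # (intercept k, slope -m) and Im(K) is linear in w (slope c).
            K = np.asarray(F) / np.asarray(X)
            slope, k = np.polyfit(omega**2, K.real, 1)
            c, _ = np.polyfit(omega, K.imag, 1)
            return k, -slope, c  # stiffness, mass, damping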

  15. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  16. The Interpretation of "in Context" Verbal Probability Expressions Used in International Accounting Standards: A Comparison of English and Chinese Students Studying at English Speaking Universities

    ERIC Educational Resources Information Center

    Salleh, Safrul Izani Mohd; Gardner, John C.; Sulong, Zunaidah; McGowan, Carl B., Jr.

    2011-01-01

    This study examines the differences in the interpretation of ten "in context" verbal probability expressions used in accounting standards between native Chinese speaking and native English speaking accounting students in United Kingdom universities. The study assesses the degree of grouping factors consensus on the numerical…

  17. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations

    PubMed Central

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael

    2017-01-01

    Objective: This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. Materials and Methods: PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. Results: At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB’s interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. Discussion: An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. Conclusion: The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. PMID:27789569

  18. 24 CFR 242.8 - Standards for licensure and methods of operation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Standards for licensure and methods of operation. The Secretary shall require satisfactory evidence that the... of licensure and methods of operation for hospitals, and satisfactory assurance that such standards...

  19. The importance of proper administration and interpretation of neuropsychological baseline and postconcussion computerized testing.

    PubMed

    Moser, Rosemarie Scolaro; Schatz, Philip; Lichtenstein, Jonathan D

    2015-01-01

    Media coverage, litigation, and new legislation have resulted in a heightened awareness of the prevalence of sports concussion in both adult and youth athletes. Baseline and postconcussion testing is now commonly used for the assessment and management of sports-related concussion in schools and in youth sports leagues. With increased use of computerized neurocognitive sports concussion testing, there is a need for standards for proper administration and interpretation. To date, there has been a lack of standardized procedures by which assessments are administered. More specifically, individuals who are not properly trained often interpret test results, and their methods of interpretation vary considerably. The purpose of this article is to outline factors affecting the validity of test results, to provide examples of misuse and misinterpretation of test results, and to communicate the need to administer testing in the most effective and useful manner. An increase in the quality of test administration and application may serve to decrease the prevalence of invalid test results and increase the accuracy and utility of baseline test results if an athlete sustains a concussion. Standards for test use should model the American Psychological Association and Centers for Disease Control and Prevention guidelines, as well as the recent findings of the joint position paper on computerized neuropsychological assessment devices.

  20. Useful Effect Size Interpretations for Single Case Research

    ERIC Educational Resources Information Center

    Parker, Richard I.; Hagan-Burke, Shanna

    2007-01-01

    An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R[superscript 2] as "percent of variance accounted for" do not resound with most visual analysts. In fact, the only comparative analysis widely…

  1. Utilising Biographical Narrative Interpretive Methods: Rich Perspectives on Union Learning Journeys and Learner Motivations

    ERIC Educational Resources Information Center

    Ross, C.; Moore, S.

    2016-01-01

    This article explores the use of Biographical Narrative Interpretive Methods (BNIM) in research on motivations for trade union learning. Our use of BNIM--a new methodological approach for us--was intended to test our own research practice in an effort to get further inside the "felt world" and "lived life" of the union learner.…

  2. The transactional interpretation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Cramer, John G.

    2001-06-01

    The transactional interpretation of quantum mechanics [1] was originally published in 1986 and is now about 14 years old. It is an explicitly nonlocal and Lorentz invariant alternative to the Copenhagen interpretation. It interprets the formalism for a quantum interaction as describing a "handshake" between retarded waves (ψ) and advanced waves (ψ*) for each quantum event or "transaction" in which energy, momentum, angular momentum, and other conserved quantities are transferred. The transactional interpretation offers the advantages that (1) it is actually "visible" in the formalism of quantum mechanics, (2) it is economical, involving fewer independent assumptions than its rivals, (3) it is paradox-free, resolving all of the paradoxes of standard quantum theory including nonlocality and wave function collapse, (4) it does not give a privileged role to observers or measurements, and (5) it permits the visualization of quantum events. We will review the transactional interpretation and some of its applications to "quantum paradoxes."

  3. Unregulated Autonomy: Uncredentialed Educational Interpreters in Rural Schools.

    PubMed

    Fitzmaurice, Stephen

    2017-01-01

    Although many rural Deaf and Hard of Hearing students attend public schools most of the day and use the services of educational interpreters to gain access to the school environment, little information exists on what interpreters are doing in rural school systems in the absence of credentialing requirements. The researcher used ethnographic interviews and field observations of three educational interpreters with no certification or professional assessment to explore how uncredentialed interpreters were enacting their role in a rural high school. The findings indicate that uncredentialed interpreters in rural settings perform four major functions during their school day: preparing the environment, staff, and materials; interpreting a variety of content; interacting with numerous stakeholders; and directly instructing Deaf and Hard of Hearing students. Generally, educational interpreters in rural districts operate with unregulated autonomy, a situation that warrants further research and a national standard for all educational interpreters.

  4. Euroforgen-NoE collaborative exercise on LRmix to demonstrate standardization of the interpretation of complex DNA profiles.

    PubMed

    Prieto, L; Haned, H; Mosquera, A; Crespillo, M; Alemañ, M; Aler, M; Alvarez, F; Baeza-Richer, C; Dominguez, A; Doutremepuich, C; Farfán, M J; Fenger-Grøn, M; García-Ganivet, J M; González-Moya, E; Hombreiro, L; Lareu, M V; Martínez-Jarreta, B; Merigioli, S; Milans Del Bosch, P; Morling, N; Muñoz-Nieto, M; Ortega-González, E; Pedrosa, S; Pérez, R; Solís, C; Yurrebaso, I; Gill, P

    2014-03-01

    There has been very little work published on the variation of reporting practices of mixtures between laboratories, but it has been previously demonstrated that there is little consistency. This is because there is no current uniformity of practice, so different laboratories will operate using different rules. The interpretation of mixtures is not solely a matter of using some software to provide 'an answer'. An assessment of a case will usually begin with a consideration of the circumstances of a crime. Assumptions made about the numbers of contributors follow from an examination of the electropherogram(s)--and these may differ between the prosecution and the defence hypotheses. There may be a necessity to evaluate several sets of hypotheses for any given case if the circumstances are uncertain. Once the hypotheses are formulated, the mathematical analysis is complex and can only be accomplished by the use of specialist software. In order to obtain meaningful results, it is essential that scientists are trained, not only in the use of the software, but also in the methodology to understand the likelihood ratio concept that is used. The Euroforgen-NoE initiative has developed a training course that utilizes the LRmix program to carry out the calculations. This software encompasses the recommendations of the ISFG DNA commissions on mixture interpretation and is able to interpret samples that may come from two or more contributors and may also be partial profiles. Recently, eighteen different laboratories were trained in the methodology. Afterwards they were asked to independently analyze two different cases with partial mixture DNA evidence and to write a court-report statement. We show that by introducing a structured training programme, it is possible to demonstrate, for the first time, that a high degree of standardization, leading to uniformity of results, can be achieved by participating laboratories. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. [Modified Delphi method in the constitution of school sanitation standard].

    PubMed

    Yin, Xunqiang; Liang, Ying; Tan, Hongzhuan; Gong, Wenjie; Deng, Jing; Luo, Jiayou; Di, Xiaokang; Wu, Yue

    2012-11-01

    To constitute a school sanitation standard using the modified Delphi method, and to explore the feasibility and advantages of the Delphi method in the constitution of school sanitation standards. Two rounds of expert consultations were adopted in this study. The data were analyzed with SPSS 15.0 to screen indices for the school sanitation standard. Thirty-two experts completed the two rounds of consultations. The average length of expert service was (24.69 ± 8.53) years. The authority coefficient was 0.729 ± 0.172. The expert positive coefficient was 94.12% (32/34) in the first round and 100% (32/32) in the second round. The coefficients of concordance for importance, feasibility and rationality in the second round were 0.493 (P<0.05), 0.527 (P<0.01), and 0.535 (P<0.01), respectively, suggesting unanimous expert opinions. According to the second round of consultation, 38 indices were included in the framework. Theoretical analysis, literature review, investigation and the like are currently the usual approaches to health standard constitution. The Delphi method is a rapid, effective and feasible method in this field.
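
    The coefficient of concordance reported in Delphi studies of this kind is usually Kendall's W. A minimal sketch of its computation (assuming SciPy and ignoring the tie correction):

        import numpy as np
        from scipy.stats import rankdata

        def kendalls_w(ratings):
            # ratings: (m experts) x (n items) array of scores.
            m, n = ratings.shape
            ranks = np.apply_along_axis(rankdata, 1, ratings)
            R = ranks.sum(axis=0)              # rank sum per item
            S = ((R - R.mean()) ** 2).sum()
            return 12.0 * S / (m**2 * (n**3 - n))

        # W ranges from 0 (no agreement) to 1 (complete agreement); the
        # significance of W is usually tested with a chi-square statistic.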

  6. Localized Smart-Interpretation

    NASA Astrophysics Data System (ADS)

    Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom

    2014-05-01

    The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model the geologist needs to rely on the geophysical data. The problem is, however, that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that much of the information available from the geophysical surveys is unexploited, which is a problem because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that (1) allows all geophysical data to be considered when building the geological model, (2) is fast, and (3) allows quantification of the geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as, for example, the depth to the base of a groundwater reservoir. First, we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds with interpreting, the number of interpreted data points from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f
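
    A toy stand-in for the statistical model f(d, m), assuming a linear relation and hypothetical class and variable names: the model is refit as the expert adds interpretation points, mirroring the accumulation described above.

        import numpy as np

        class SmartInterpreter:
            # Toy f(d, m): a linear relation between quantified information
            # m (feature vector per point) and the expert's interpretation d.
            def __init__(self):
                self.M, self.d = [], []

            def add_interpretation(self, m, d):
                # Called each time the expert interprets a new point.
                self.M.append(m)
                self.d.append(d)

            def predict(self, m_new):
                # Refit on all interpretations gathered so far, then predict.
                M = np.asarray(self.M, dtype=float)
                if M.ndim == 1:
                    M = M[:, None]
                A = np.column_stack([M, np.ones(len(M))])
                coef, *_ = np.linalg.lstsq(A, np.asarray(self.d), rcond=None)
                Q = np.asarray(m_new, dtype=float).reshape(-1, M.shape[1])
                return np.column_stack([Q, np.ones(len(Q))]) @ coef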

  7. Coordination and standardization of federal sedimentation activities

    USGS Publications Warehouse

    Glysson, G. Douglas; Gray, John R.

    1997-01-01

    - precipitation information critical to water resources management. Memorandum M-92-01 covers primarily freshwater bodies and includes activities, such as "development and distribution of consensus standards, field-data collection and laboratory analytical methods, data processing and interpretation, data-base management, quality control and quality assurance, and water- resources appraisals, assessments, and investigations." Research activities are not included.

  8. HUMAN FECAL SOURCE IDENTIFICATION: REAL-TIME QUANTITATIVE PCR METHOD STANDARDIZATION - abstract

    EPA Science Inventory

    Method standardization or the formal development of a protocol that establishes uniform performance benchmarks and practices is necessary for widespread adoption of a fecal source identification approach. Standardization of a human-associated fecal identification method has been...

  9. Puerto Rican understandings of child disability: methods for the cultural validation of standardized measures of child health.

    PubMed

    Gannotti, Mary E; Handwerker, W Penn

    2002-12-01

    Validating the cultural context of health is important for obtaining accurate and useful information from standardized measures of child health adapted for cross-cultural applications. This paper describes the application of ethnographic triangulation for cultural validation of a measure of childhood disability, the Pediatric Evaluation of Disability Inventory (PEDI) for use with children living in Puerto Rico. The key concepts include macro-level forces such as geography, demography, and economics, specific activities children performed and their key social interactions, beliefs, attitudes, emotions, and patterns of behavior surrounding independence in children and childhood disability, as well as the definition of childhood disability. Methods utilize principal components analysis to establish the validity of cultural concepts and multiple regression analysis to identify intracultural variation. Findings suggest culturally specific modifications to the PEDI, provide contextual information for informed interpretation of test scores, and point to the need to re-standardize normative values for use with Puerto Rican children. Without this type of information, Puerto Rican children may appear more disabled than expected for their level of impairment or not to be making improvements in functional status. The methods also allow for cultural boundaries to be quantitatively established, rather than presupposed. Copyright 2002 Elsevier Science Ltd.

  10. The Objective Borderline Method: A Probabilistic Method for Standard Setting

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim

    2015-01-01

    A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…

  11. Standard method of test for grindability of coal by the Hardgrove-machine method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-01-01

    A procedure is described for sampling coal, grinding in a Hardgrove grinding machine, and passing through standard sieves to determine the degree of pulverization of coals. The grindability index of the coal tested is calculated from a calibration chart prepared by plotting weight of material passing a No. 200 sieve versus the Hardgrove Grindability Index for the standard reference samples. The Hardgrove machine is shown schematically. The method for preparing and determining grindability indexes of standard reference samples is given in the appendix. (BLM)
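
    A hedged sketch of the final chart-reading step: with the calibration pairs (weight passing the No. 200 sieve, assigned HGI) tabulated, the index of a test coal is read off by interpolation. All numbers below are invented placeholders, not values from the standard.

        import numpy as np

        # Invented calibration-chart points: weight (g) passing the No. 200
        # sieve for the standard reference samples vs. their assigned HGI.
        weight_passing = [8.0, 14.0, 21.0, 29.0]
        assigned_hgi = [40.0, 60.0, 80.0, 100.0]

        def grindability_index(sample_weight_passing):
            # Read the index off the calibration curve by interpolation.
            return float(np.interp(sample_weight_passing, weight_passing,
                                   assigned_hgi))

        print(grindability_index(17.5))  # falls between the 60 and 80 points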

  12. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of method parameters, the criteria for selecting a reference standard are not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority score, the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed comprehensive consideration of the benefits and risks of the alternatives. It is an effective and practical tool for the optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
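
    A minimal sketch of the standard AHP priority calculation (not the paper's exact model): weights come from the principal eigenvector of a reciprocal pairwise-comparison matrix, with a consistency ratio as a sanity check. The example matrix is invented.

        import numpy as np

        RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # Saaty's random indices

        def ahp_priorities(pairwise):
            # Weights from the principal eigenvector of a reciprocal
            # pairwise-comparison matrix; consistency ratio
            # CR = ((lambda_max - n) / (n - 1)) / RI, with CR < 0.1
            # the conventional acceptance threshold.
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            ci = (eigvals[k].real - n) / (n - 1)
            return w, ci / RI[n]

        # Invented 3-criterion comparison matrix for illustration:
        w, cr = ahp_priorities([[1, 3, 5],
                                [1/3, 1, 2],
                                [1/5, 1/2, 1]])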

  13. Method of interpretation of remotely sensed data and applications to land use

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dossantos, A. P.; Foresti, C.; Demoraesnovo, E. M. L.; Niero, M.; Lombardo, M. A.

    1981-01-01

    Instructional material describing a methodology for remote sensing data interpretation and examples of applications to land use surveys are presented. The image interpretation elements are discussed for different types of sensor systems: aerial photographs, radar, and MSS/LANDSAT. Visual and automatic LANDSAT image interpretation is emphasized.

  14. Condensed Mastery Profile Method for Setting Standards for Diagnostic Assessment Systems

    ERIC Educational Resources Information Center

    Clark, A. K.; Nash, B.; Karvonen, M.; Kingston, N.

    2017-01-01

    The purpose of this study was to develop a standard-setting method appropriate for use with a diagnostic assessment that produces profiles of student mastery rather than a single raw or scale score value. The condensed mastery profile method draws from established holistic standard-setting methods to use rounds of range finding and pinpointing to…

  15. A Proposed Interpretation of the ISO 10015 and Implications for HRD Theory and Research

    ERIC Educational Resources Information Center

    Jacobs, Ronald L.; Wang, Bryan

    2007-01-01

    While recent discussions of ISO 10015 (Guidelines for Training) have done much to promote the need for the standard, no interpretation of the standard has been presented that would guide its actual implementation. This paper proposes an interpretation of the ISO 10015 based on the specifications of the guideline and two other standards related to…

  16. Standard Test Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Masters, John E.; Portanova, Marc A.

    1996-01-01

    Standard testing methods for composite laminates reinforced with continuous networks of braided, woven, or stitched fibers have been evaluated. The microstructure of these 'textile' composite materials differs significantly from that of tape laminates. Consequently, specimen dimensions and loading methods developed for tape-type composites may not be applicable to textile composites. To this end, a series of evaluations were made comparing testing practices currently used in the composite industry. Information was gathered from a variety of sources and analyzed to establish a series of recommended test methods for textile composites. The current practices established for laminated composite materials by ASTM and the MIL-HDBK-17 Committee were considered. This document provides recommended test methods for determining both in-plane and out-of-plane properties. Specifically, test methods are suggested for: unnotched tension and compression; open and filled hole tension; open hole compression; bolt bearing; and interlaminar tension. A detailed description of the material architectures evaluated is also provided, as is a recommended instrumentation practice.

  17. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
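
    The paper's construction is specific to the CPS lambda calculus, but the underlying soundness idea can be illustrated with a deliberately tiny abstract domain. The sketch below is my own toy example, not the authors' two-step method: it abstracts integers to their signs and spot-checks that abstract arithmetic over-approximates the concrete results.

```python
def alpha(n):
    """Abstraction: map a concrete integer to the sign domain."""
    return 'neg' if n < 0 else 'zero' if n == 0 else 'pos'

def abs_add(a, b):
    """Abstract addition; 'top' means the sign cannot be determined."""
    if 'top' in (a, b):
        return 'top'
    if a == 'zero':
        return b
    if b == 'zero' or a == b:
        return a
    return 'top'        # pos + neg: result sign is unknown

def abs_mul(a, b):
    """Abstract multiplication on signs (exact, no precision loss)."""
    if 'zero' in (a, b):
        return 'zero'
    if 'top' in (a, b):
        return 'top'
    return 'pos' if a == b else 'neg'

# Soundness spot-check: the abstract result always covers the concrete one.
for x, y in [(3, 4), (-2, 5), (0, -7), (-1, -1)]:
    assert abs_add(alpha(x), alpha(y)) in (alpha(x + y), 'top')
    assert abs_mul(alpha(x), alpha(y)) == alpha(x * y)
print("sign abstraction sound on sampled inputs")
```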

  18. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    PubMed

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
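
    The decision rule for the Quantitative Read is simple enough to state as code. A minimal sketch, with illustrative region names rather than the study's exact MIMneuro ROI set:

```python
# "Elevated" when at least 2 of 6 regional SUV ratios exceed 1.1.
def quantitative_read(suvr_by_roi, threshold=1.1, min_regions=2):
    n_elevated = sum(1 for v in suvr_by_roi.values() if v > threshold)
    return "elevated" if n_elevated >= min_regions else "nonelevated"

# Hypothetical scan: two regions above the 1.1 threshold.
scan = {"precuneus": 1.18, "anterior_cingulate": 1.12, "frontal": 1.05,
        "parietal": 1.02, "temporal": 0.98, "posterior_cingulate": 1.04}
print(quantitative_read(scan))   # -> "elevated"
```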

  19. Comparison of two methods of standard setting: the performance of the three-level Angoff method.

    PubMed

    Jalili, Mohammad; Hejri, Sara M; Norcini, John J

    2011-12-01

    Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
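
    As a rough sketch of the arithmetic behind the two procedures (judge ratings invented; the discussion and reality-check rounds are not modeled): in the MA each judge supplies a probability per station, in the TLA a binary correct/incorrect judgment, and the cut-score is the grand mean expressed as a percentage.

```python
import numpy as np

rng = np.random.default_rng(0)
ma_ratings = rng.uniform(0.3, 0.7, size=(14, 14))   # 14 judges x 14 stations
tla_ratings = rng.integers(0, 2, size=(10, 14))     # 10 judges, yes/no per station

ma_cut = ma_ratings.mean() * 100     # percent cut-score for the whole OSCE
tla_cut = tla_ratings.mean() * 100
print(f"MA cut-score: {ma_cut:.1f}%, TLA cut-score: {tla_cut:.1f}%")
```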

  20. Using FTIR-ATR Spectroscopy to Teach the Internal Standard Method

    ERIC Educational Resources Information Center

    Bellamy, Michael K.

    2010-01-01

    The internal standard method is widely applied in quantitative analyses. However, most analytical chemistry textbooks either omit this topic or only provide examples of a single-point internal standardization. An experiment designed to teach students how to prepare an internal standard calibration curve is described. The experiment is a modified…
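
    For readers unfamiliar with the technique the experiment teaches, a minimal sketch of a multi-point internal standard calibration, with invented numbers:

```python
import numpy as np

# Calibration standards: response ratio regressed on concentration ratio.
conc_ratio = np.array([0.1, 0.25, 0.5, 1.0, 2.0])      # C_analyte / C_IS
resp_ratio = np.array([0.12, 0.28, 0.55, 1.08, 2.15])  # A_analyte / A_IS

slope, intercept = np.polyfit(conc_ratio, resp_ratio, 1)

# Read an unknown back off the fitted line.
unknown_resp = 0.80
unknown_conc_ratio = (unknown_resp - intercept) / slope
print(f"C_analyte/C_IS of unknown ≈ {unknown_conc_ratio:.3f}")
```

    Because both analyte and internal standard experience the same losses and instrument drift, the ratio-based curve is more robust than a plain external calibration.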

  1. Scale structure: Processing Minimum Standard and Maximum Standard Scalar Adjectives

    PubMed Central

    Frazier, Lyn; Clifton, Charles; Stolterfoht, Britta

    2008-01-01

    Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property (Kennedy, 1999). Scales, ordered sets of degrees, have begun to be studied systematically in semantics (Kennedy, to appear, Kennedy & McNally, 2005, Rotstein & Winter, 2004). We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a ‘standard value’ on a scale as part of the normal comprehension of a sentence containing the adjective (the “Obligatory Scale” hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. Further, a type of ‘quantificational variability effect’ may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like The dishes are mostly dirty, which may receive either a degree interpretation (e.g. 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence. PMID:17376422

  2. Interpreting and Reporting Radiological Water-Quality Data

    USGS Publications Warehouse

    McCurdy, David E.; Garbarino, John R.; Mullin, Ann H.

    2008-01-01

    This document provides information to U.S. Geological Survey (USGS) Water Science Centers on interpreting and reporting radiological results for samples of environmental matrices, most notably water. The information provided is intended to be broadly useful throughout the United States, but scientists who work at sites containing radioactive hazardous wastes are advised to consult additional sources for more detailed information. The document is largely based on recognized national standards and guidance documents for radioanalytical sample processing, most notably the Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP), and on documents published by the U.S. Environmental Protection Agency and the American National Standards Institute. It does not include discussion of standard USGS practices such as field quality-control sample analysis, interpretive report policies, and related issues, all of which shall always be included in any effort by the Water Science Centers. The use of 'shall' in this report signifies a policy requirement of the USGS Office of Water Quality.

  3. Developments in FT-ICR MS instrumentation, ionization techniques, and data interpretation methods for petroleomics.

    PubMed

    Cho, Yunju; Ahmed, Arif; Islam, Annana; Kim, Sunghwan

    2015-01-01

    Because of the increasing importance of heavy and unconventional crude oil as an energy source, there is a growing need for petroleomics: the pursuit of more complete and detailed knowledge of the chemical compositions of crude oil. Crude oil has an extremely complex nature; hence, techniques with ultra-high resolving capabilities, such as Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), are necessary. FT-ICR MS has been successfully applied to the study of heavy and unconventional crude oils such as bitumen and shale oil. However, the analysis of crude oil with FT-ICR MS is not trivial, and it has pushed analysis to the limits of instrumental and methodological capabilities. For example, high-resolution mass spectra of crude oils may contain over 100,000 peaks that require interpretation. To visualize large data sets more effectively, data processing methods such as Kendrick mass defect analysis and statistical analyses have been developed. The successful application of FT-ICR MS to the study of crude oil has been critically dependent on key developments in FT-ICR MS instrumentation and data processing methods. This review offers an introduction to the basic principles, FT-ICR MS instrumentation development, ionization techniques, and data interpretation methods for petroleomics and is intended for readers having no prior experience in this field of study. © 2014 Wiley Periodicals, Inc.
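
    As an illustration of the Kendrick mass defect rescaling mentioned above (peak masses invented; note that conventions differ on whether the nominal Kendrick mass is taken by rounding or by ceiling):

```python
import numpy as np

mz = np.array([300.2086, 314.2242, 328.2399, 341.2101])  # invented peak masses
kendrick_mass = mz * 14.00000 / 14.01565                 # rescale so CH2 = exactly 14
kmd = np.round(kendrick_mass) - kendrick_mass            # one common KMD convention

# Members of the same CH2 homologous series share (nearly) the same KMD,
# so sorting by KMD groups 100,000+ peaks into interpretable families.
for m, k in zip(mz, kmd):
    print(f"m/z {m:9.4f}  KMD {k:+.4f}")
```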

  4. On the validity of Freud's dream interpretations.

    PubMed

    Michael, Michael

    2008-03-01

    In this article I defend Freud's method of dream interpretation against those who criticize it as involving a fallacy-namely, the reverse causal fallacy-and those who criticize it as permitting many interpretations, indeed any that the interpreter wants to put on the dream. The first criticism misconstrues the logic of the interpretative process: it does not involve an unjustified reversal of causal relations, but rather a legitimate attempt at an inference to the best explanation. The judgement of whether or not a particular interpretation is the best explanation depends on the details of the case in question. I outline the kinds of probabilities involved in making the judgement. My account also helps to cash out the metaphors of the jigsaw and crossword puzzles that Freudians have used in response to the 'many interpretations' objection. However, in defending Freud's method of dream interpretation, I do not thereby defend his theory of dreams, which cannot be justified by his interpretations alone.

  5. A method of online quantitative interpretation of diffuse reflection profiles of biological tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.

    2013-02-01

    We have developed a method of combined interpretation of spectral and spatial characteristics of diffuse reflection of biological tissues, which makes it possible to determine biophysical parameters of the tissue with high accuracy in real time under conditions of their general variability. Using the Monte Carlo method, we have modeled a statistical ensemble of profiles of diffuse reflection coefficients of skin, corresponding to a wide variation of its biophysical parameters. On this basis, we have estimated the retrieval accuracy of biophysical parameters using the developed method and investigated the stability of the method against errors of optical measurements. We have shown that it is possible to determine online the concentrations of melanin, hemoglobin, bilirubin, oxygen saturation of blood, and structural parameters of skin from measurements of its diffuse reflection in the spectral range 450-800 nm at three distances between the radiation source and detector.

  6. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension

    PubMed Central

    Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.

    2009-01-01

    Objective: The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods: Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results: Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Conclusion: Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary. PMID:19433980
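
    A minimal sketch of the kinds of quantities being standardized, using invented readings and invented reference values rather than either version of the German Working Group's tables:

```python
import numpy as np

readings = np.array([118, 124, 131, 127, 120, 135, 129])  # mmHg, hypothetical
ref_p95, ref_mean, ref_sd = 130.0, 115.0, 9.0             # hypothetical norms

bp_index = readings.mean() / ref_p95          # >1 suggests hypertension
bp_load = (readings > ref_p95).mean() * 100   # % of readings above the 95th percentile
sds = (readings.mean() - ref_mean) / ref_sd   # standard deviation score (z-score)
print(f"index {bp_index:.2f}, load {bp_load:.0f}%, SDS {sds:+.2f}")
```

    Because all three quantities are computed against reference tables, swapping one version of the tables for another can move a borderline child across the hypertension threshold, which is the roughly 5% reclassification the study reports.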

  7. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU
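
    Cross-method validity here reduces to a rank correlation between paired scores from two methods. A sketch with invented data; the negative rho mirrors the study's pairing of a lower-is-better metric (e.g. task time) with the higher-is-better GEARS score:

```python
from scipy.stats import spearmanr

inanimate_time = [95, 120, 88, 150, 110, 70, 130]  # seconds, lower = better (invented)
gears_score = [22, 18, 24, 14, 19, 27, 16]         # GEARS, higher = better (invented)

rho, p = spearmanr(inanimate_time, gears_score)
print(f"rho = {rho:.2f}, p = {p:.4f}")             # perfectly inverse here: rho = -1
```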

  8. Simulations for designing and interpreting intervention trials in infectious diseases.

    PubMed

    Halloran, M Elizabeth; Auranen, Kari; Baird, Sarah; Basta, Nicole E; Bellan, Steven E; Brookmeyer, Ron; Cooper, Ben S; DeGruttola, Victor; Hughes, James P; Lessler, Justin; Lofgren, Eric T; Longini, Ira M; Onnela, Jukka-Pekka; Özler, Berk; Seage, George R; Smith, Thomas A; Vespignani, Alessandro; Vynnycky, Emilia; Lipsitch, Marc

    2017-12-29

    Interventions in infectious diseases can have direct effects on individuals who receive the intervention as well as indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to assess adequately with standard study designs and analytical methods. Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.

  9. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension.

    PubMed

    Jones, Deborah P; Richey, Phyllis A; Alpert, Bruce S

    2009-06-01

    The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary.

  10. Estimating and Interpreting Latent Variable Interactions: A Tutorial for Applying the Latent Moderated Structural Equations Method

    ERIC Educational Resources Information Center

    Maslowsky, Julie; Jager, Justin; Hemken, Douglas

    2015-01-01

    Latent variables are common in psychological research. Research questions involving the interaction of two variables are likewise quite common. Methods for estimating and interpreting interactions between latent variables within a structural equation modeling framework have recently become available. The latent moderated structural equations (LMS)…

  11. Standards for reporting fish toxicity tests

    USGS Publications Warehouse

    Cope, O.B.

    1961-01-01

    The growing impetus of studies on fish and pesticides focuses attention on the need for standardized reporting procedures. Good methods have been developed for laboratory and field procedures in testing programs and in statistical features of assay experiments; and improvements are being made in methods of collecting and preserving fish, invertebrates, and other materials exposed to economic poisons. On the other hand, the reporting of toxicity data in a complete manner has lagged behind, and today's literature is little improved over yesterday's with regard to completeness and susceptibility to interpretation.

  12. A Comparative Study of Two Azimuth-Based Non-Standard Location Methods

    DTIC Science & Technology

    2017-03-23

    Jih, Rongsong; U.S. Department of State / Arms Control, Verification, and Compliance Bureau, 2201 C Street, NW, Washington, DC. The so-called "Yin Zhong Xian" ("引中线" in Chinese) algorithm, hereafter the YZX method, is an Oriental version of an IPB-based procedure. It

  13. Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.

    ERIC Educational Resources Information Center

    Raymond, Margaret; And Others

    1983-01-01

    Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry, modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
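
    A noise-free sketch of the matrix algebra behind the generalized standard addition method for a two-analyte, two-wavelength case (all sensitivities and concentrations invented; the published program additionally handles noise and interference diagnostics):

```python
import numpy as np

K_true = np.array([[0.80, 0.15],   # sensitivities: rows = analytes (e.g. Cr, Mg),
                   [0.10, 0.60]])  # columns = wavelengths (invented values)
c0 = np.array([0.30, 0.50])        # "unknown" initial concentrations

additions = np.array([[0.2, 0.0],  # successive known standard additions (dC)
                      [0.0, 0.2],
                      [0.1, 0.1]])
r0 = c0 @ K_true                   # initial response vector
dR = additions @ K_true            # measured response changes (noise-free here)

K_est, *_ = np.linalg.lstsq(additions, dR, rcond=None)  # solve dR = dC @ K
c0_est = r0 @ np.linalg.inv(K_est)                      # back out initial concentrations
print(c0_est)                      # ≈ [0.30, 0.50]
```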

  14. New methods of MR image intensity standardization via generalized scale

    NASA Astrophysics Data System (ADS)

    Madabhushi, Anant; Udupa, Jayaram K.

    2005-04-01

    Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
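
    The histogram-landmark idea that these g-scale methods refine can be sketched in a few lines: percentile landmarks of each image are mapped piecewise-linearly onto a standard scale. The landmark percentiles and standard scale below are invented, and real implementations (including the g-scale variants) derive landmarks from tissue regions rather than raw percentiles:

```python
import numpy as np

def standardize(img, pcs=(1, 25, 50, 75, 99), standard=(0, 400, 800, 1200, 1600)):
    """Piecewise-linear remap of an image's percentile landmarks to a standard scale."""
    landmarks = np.percentile(img, pcs)
    return np.interp(img, landmarks, standard)

rng = np.random.default_rng(1)
scan_a = rng.gamma(2.0, 150.0, size=10_000)   # two acquisitions of the "same"
scan_b = rng.gamma(2.0, 240.0, size=10_000)   # tissue distribution, different gains
a_std, b_std = standardize(scan_a), standardize(scan_b)
print(np.median(a_std), np.median(b_std))     # medians now land on the same scale
```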

  15. 16 CFR 1201.40 - Interpretation concerning bathtub and shower doors and enclosures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Interpretation concerning bathtub and shower... Policy and Interpretation § 1201.40 Interpretation concerning bathtub and shower doors and enclosures. (a... and enclosures” and “shower door and enclosure” as they are used in the Standard in subpart A. The...

  16. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criteria function and Lagrange multipliers in the profit maximization problem is well known. On a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for the harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.

  17. The emergent Copenhagen interpretation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hollowood, Timothy J.

    2014-05-01

    We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. This interpretation describes a world in which definite measurement results are obtained with probabilities that reproduce the Born rule. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent bookkeeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations, like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally, we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.

  18. Extracting land use information from the earth resources technology satellite data by conventional interpretation methods

    NASA Technical Reports Server (NTRS)

    Vegas, P. L.

    1974-01-01

    A procedure for obtaining land use data from satellite imagery by the use of conventional interpretation methods is presented. The satellite is described briefly, and the advantages of various scales and multispectral scanner bands are discussed. Methods for obtaining satellite imagery and the sources of this imagery are given. Equipment used in the study is described, and samples of land use maps derived from satellite imagery are included together with the land use classification system used. Accuracy percentages are cited and are compared to those of a previous experiment using small scale aerial photography.

  19. Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.

    PubMed

    Roewer, Lutz; Geppert, Maria

    2012-01-01

    Y-STR analysis is an invaluable tool for examining evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background is still the main field of application of forensic Y-STR haplotyping. In recent years, powerful technologies, including a 17-locus multiplex PCR assay, have been introduced into forensic laboratories. At the same time, statistical methods have been developed and adapted for the interpretation of a nonrecombining, linearly inherited marker such as the Y chromosome, which shows a strongly clustered geographical distribution owing to its linear inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
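
    A sketch of the counting estimate mentioned above, together with the upper confidence bound commonly used when a haplotype is absent from the reference database (database counts invented):

```python
N = 23_000   # hypothetical reference database size
x = 12       # times the questioned haplotype was observed (hypothetical)

freq = x / N                                  # counting estimate of haplotype frequency
upper95_if_unobserved = 1 - 0.05 ** (1 / N)   # 95% upper bound used when x == 0
print(f"frequency ≈ {freq:.2e}; if never observed, f ≤ {upper95_if_unobserved:.2e}")
```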

  20. An ecological method to understand agricultural standardization in peach orchard ecosystems

    PubMed Central

    Wan, Nian-Feng; Zhang, Ming-Yi; Jiang, Jie-Xian; Ji, Xiang-Yun; Hao-Zhang

    2016-01-01

    While the worldwide standardization of agricultural production has been advocated and recommended, relatively little research has focused on the ecological significance of such a shift. The ecological concerns stemming from the standardization of agricultural production may require new methodology. In this study, we concentrated on how ecological two-sidedness and ecological processes affect the standardization of agricultural production, which was divided into three phases (pre-, mid- and post-production), considering both the positive and negative effects of agricultural processes. We constructed evaluation indicator systems for the pre-, mid- and post-production phases, and here we present a Standardization of Green Production Index (SGPI) based on the Full Permutation Polygon Synthetic Indicator (FPPSI) method, which we used to assess the superiority of three methods of standardized production for peaches. The values of SGPI for pre-, mid- and post-production were 0.121 (Level IV, “Excellent” standard), 0.379 (Level III, “Good” standard), and 0.769 × 10⁻² (Level IV, “Excellent” standard), respectively. Here we aimed to explore the integrated application of ecological two-sidedness and ecological process in agricultural production. Our results are of use to decision-makers and ecologists focusing on eco-agriculture and those farmers who hope to implement standardized agricultural production practices. PMID:26899360

  1. An ecological method to understand agricultural standardization in peach orchard ecosystems.

    PubMed

    Wan, Nian-Feng; Zhang, Ming-Yi; Jiang, Jie-Xian; Ji, Xiang-Yun; Hao-Zhang

    2016-02-22

    While the worldwide standardization of agricultural production has been advocated and recommended, relatively little research has focused on the ecological significance of such a shift. The ecological concerns stemming from the standardization of agricultural production may require new methodology. In this study, we concentrated on how ecological two-sidedness and ecological processes affect the standardization of agricultural production, which was divided into three phases (pre-, mid- and post-production), considering both the positive and negative effects of agricultural processes. We constructed evaluation indicator systems for the pre-, mid- and post-production phases, and here we present a Standardization of Green Production Index (SGPI) based on the Full Permutation Polygon Synthetic Indicator (FPPSI) method, which we used to assess the superiority of three methods of standardized production for peaches. The values of SGPI for pre-, mid- and post-production were 0.121 (Level IV, "Excellent" standard), 0.379 (Level III, "Good" standard), and 0.769 × 10(-2) (Level IV, "Excellent" standard), respectively. Here we aimed to explore the integrated application of ecological two-sidedness and ecological process in agricultural production. Our results are of use to decision-makers and ecologists focusing on eco-agriculture and those farmers who hope to implement standardized agricultural production practices.

  2. Data Analysis and Statistical Methods for the Assessment and Interpretation of Geochronologic Data

    NASA Astrophysics Data System (ADS)

    Reno, B. L.; Brown, M.; Piccoli, P. M.

    2007-12-01

    Ages are traditionally reported as a weighted mean with an uncertainty based on least squares analysis of analytical error on individual dates. This method does not take into account geological uncertainties, and cannot accommodate asymmetries in the data. In most instances, this method will understate the uncertainty on a given age, which may lead to over-interpretation of age data. Geologic uncertainty is difficult to quantify, but is typically greater than analytical uncertainty. These factors make traditional statistical approaches inadequate to fully evaluate geochronologic data. We propose a protocol to assess populations within multi-event datasets and to calculate age and uncertainty from each population of dates interpreted to represent a single geologic event using robust and resistant statistical methods. To assess whether populations thought to represent different events are statistically separate, exploratory data analysis is undertaken using a box plot, where the range of the data is represented by a 'box' of length given by the interquartile range, divided at the median of the data, with 'whiskers' that extend to the furthest datapoint that lies within 1.5 times the interquartile range beyond the box. If the boxes representing the populations do not overlap, they are interpreted to represent statistically different sets of dates. Ages are calculated from statistically distinct populations using a robust tool such as the tanh method of Kelsey et al. (2003, CMP, 146, 326-340), which is insensitive to any assumptions about the underlying probability distribution from which the data are drawn. Therefore, this method takes into account the full range of data, and is not drastically affected by outliers. The interquartile range of each population of dates gives a first pass at expressing uncertainty, which accommodates asymmetry in the dataset; outliers have a minor effect on the uncertainty. To better quantify the uncertainty, a
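
    A sketch of the proposed exploratory step on two invented populations of dates: Tukey box summaries (median, interquartile box, 1.5 x IQR whiskers) and a simple check of whether the interquartile boxes overlap:

```python
import numpy as np

def box_summary(dates):
    """Quartiles plus whisker ends (furthest points inside the 1.5*IQR fences)."""
    q1, med, q3 = np.percentile(dates, [25, 50, 75])
    iqr = q3 - q1
    lo = dates[dates >= q1 - 1.5 * iqr].min()
    hi = dates[dates <= q3 + 1.5 * iqr].max()
    return q1, med, q3, lo, hi

pop1 = np.array([512, 515, 516, 518, 519, 521, 524.0])   # Ma, invented
pop2 = np.array([530, 533, 534, 536, 538, 541.0])        # Ma, invented

a, b = box_summary(pop1), box_summary(pop2)
separate = a[2] < b[0] or b[2] < a[0]   # do the interquartile boxes overlap?
print("statistically separate events" if separate else "overlapping populations")
```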

  3. Standard test method for grindability of coal by the Hardgrove-machine method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-01-01

    This method is used to determine the relative grindability or ease of pulverization of coals in comparison with coals chosen as standards. A prepared sample receives a definite amount of grinding energy in a miniature pulverizer, and the change in size consist is determined by sieving.

  4. An accurate estimation method of kinematic viscosity for standard viscosity liquids

    NASA Astrophysics Data System (ADS)

    Kurano, Y.; Kobayashi, H.; Yoshida, K.; Imai, H.

    1992-07-01

    Deming's method of least squares is introduced to make an accurate kinematic viscosity estimation for a series of 13 standard-viscosity liquids at any desired temperature. The empirical ASTM kinematic viscosity-temperature equation is represented in the form loglog(v + c) = a - b log T, where v (in mm²·s⁻¹) is the kinematic viscosity at temperature T (in K), a and b are the constants for a given liquid, and c has a variable value. In the present application, however, c is assumed to have a constant value for each standard-viscosity liquid, as do a and b in the ASTM equation. This assumption has since been verified experimentally for all standard-viscosity liquids. The kinematic viscosities for the 13 standard-viscosity liquids have been measured with high accuracy in the temperature range of 20-40°C using a series of the NRLM capillary master viscometers with an automatic flow time detection system. The deviations between measured and estimated kinematic viscosities were less than ±0.04% for the 10 standard-viscosity liquids JS2.5 to JS2000 and ±0.11% for the 3 standard-viscosity liquids JS15H to JS200H, respectively. From the above investigation, it was revealed that the uncertainty in the present estimation method is less than one-third of that in the usual ASTM method.
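
    A sketch of fitting the modified equation with c treated as a constant for the liquid, as the abstract describes. The data points and starting values below are invented, not NRLM measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def visc(T, a, b, c):
    """Kinematic viscosity from loglog(v + c) = a - b*log10(T)."""
    return 10 ** (10 ** (a - b * np.log10(T))) - c

T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])   # K
v = visc(T, 9.43, 3.70, 0.70)                            # synthetic "measurements"

(a, b, c), _ = curve_fit(visc, T, v, p0=(9.0, 3.5, 0.5))
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")          # recovers ~9.43, 3.70, 0.70
```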

  5. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    PubMed

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help understand the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. The presence of male technologists appeared as a negative factor. Factors concerned with explanation occupied the upper nodes. Particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
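
    The core ISM computation is small enough to sketch: a binary direct-influence matrix (invented here for four anonymous factors, not the study's fourteen) is closed transitively into a reachability matrix, and hierarchy levels are peeled off wherever a factor's reachability set is contained in its antecedent set.

```python
import numpy as np

A = np.array([[1, 1, 0, 0],   # A[i, j] = 1 if factor i influences factor j
              [0, 1, 1, 0],   # (diagonal set to 1 by convention)
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()                  # Warshall-style Boolean transitive closure
for k in range(len(R)):
    R |= np.outer(R[:, k], R[k, :])

levels, remaining = [], set(range(len(R)))
while remaining:              # peel off one hierarchy level per pass
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]}
                <= {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print(levels)                 # here: [[3], [2], [1], [0]]
```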

  6. Hawaiian temples and their orientations: issues of method and interpretation

    NASA Astrophysics Data System (ADS)

    Ruggles, Clive L. N.

    2015-08-01

    In 2002 I began a collaboration with Pat Kirch (Berkeley) to survey the temple sites (heiau) in the Kahikinui and Kaupo districts of southern Maui, and study their orientations and potential astronomical significance. Our investigations of over 70 temples in the area were completed in 2011 and are due for publication in 2016. Pat Kirch will present some of our main conclusions in his keynote talk within FM2. In this paper I propose to concentrate on issues of field methodology and procedure that have wider implications for developments in method and practice within archaeoastronomy. Methodologically, temple sites in the Hawaiian Islands constitute a "halfway house" between prehistoric monuments in Europe, where the only evidence is archaeological and studies of orientations tend to follow formal, "data-driven" or statistical, approaches, and Mesoamerica, where the existence of pre-conquest written records and inscriptions and post-conquest ethnohistory relegate "alignment studies" to a secondary role. In Hawai‘i, cultural data, including oral histories recorded after conquest, provide a finer balance between historical accounts and the physical evidence. Selection issues at the Maui temple sites include distinguishing marginal temple sites from house sites and identifying the intended direction of orientation at complex structures. Initial analyses of the principal orientations identified clusterings in orientation which were interpreted as relating to different gods, in particular the war-god Ku and the god of dryland agriculture, Lono. Later, more comprehensive surveys revealed evidence of observing platforms and foresights at some of the Lono temples, suggesting that systematic observations were made of the Pleiades, known from the ethnohistory to be of particular calendrical significance. This type of alignment evidence is too subjective to be sustained on the basis of a formal analysis alone but, given the historical context, provides a more robust cultural

  7. Interpreting psychoanalytic interpretation: a fourfold perspective.

    PubMed

    Schermer, Victor L

    2011-12-01

    Following an overview of psychoanalytic interpretation in theory, practice, and historical context, as well as the question of whether interpretations have scientific validity, the author holds that hermeneutics, the philosophical and psychological study of interpretation, provides a rich understanding of recent developments in self psychology, inter-subjective and relational perspectives, attachment theory, and psycho-spiritual views on psychoanalytic process. He then offers four distinct hermeneutical vantage points regarding interpretation in the psychoanalytic context, including (1) Freud's adaptation of the Aristotelian view of interpretation as the uncovering of a set of predetermined meanings and structures; (2) the phenomenological view of interpretation as the laying bare of "the things themselves," that is, removing the coverings of objectification and concretization imposed by social norms and the conscious ego; (3) the dialogical existential view of interpretation as an ongoing relational process; and (4) the transformational understanding in which interpretation evokes a "presence" that transforms both patient and analyst. He concludes by contending that these perspectives are not mutually exclusive ways of conducting an analysis, but rather that all occur within the analyst's suspended attention, the caregiving and holding essential to good therapeutic outcomes, and the mutuality of the psychoanalytic dialogue.

  8. An Improved Method for Interpretation of Concentration-Discharge Relationships in Riverine Water-Quality Data

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Harman, C. J.; Ball, W. P.

    2016-12-01

    Riverine concentration-discharge (C-Q) relationships are powerful indicators that can provide important clues toward understanding nutrient and sediment export dynamics from river systems, and the analysis of such relations has been a long-standing topic of importance in the hydrologic literature. Proper interpretation of such relationships can be complex, however, if the ln(C)-ln(Q) relationship is nonlinear or if the relationship changes over time, season, or discharge. Methods of addressing these issues by "binning" data or smoothing trends can introduce artifacts and ambiguities that obscure underlying interactions among time, discharge, and season. Here we illustrate these issues with examples and propose an alternative method that uses the regression coefficients of the recently developed WRTDS ("Weighted Regressions on Time, Discharge, and Season") model for examining riverine C-Q relationships, including their uncertainty. The method is applied to sediment concentration data from the Susquehanna River at Conowingo Dam (Maryland, USA) to illustrate how the WRTDS coefficients can be accessed and presented in ways that provide additional insights toward the interpretation of river water-quality data. For this case, the results clearly reveal that sediment concentration in the reservoir effluent has become more sensitive to discharge at moderate and high flows (but not very low flows) as it approaches sediment storage capacity, reaffirming the recently documented decadal-scale decline in reservoir trapping performance. The study also highlights an additional benefit of the method, which is the ability to perform uncertainty analyses. The proposed approach can be implemented by running additional R code within the WRTDS software - such code is made available to users through a DOI-referenced archive site (http://dx.doi.org/10.7281/T18G8HM0) that will be maintained for at least five years after publication.
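
    For orientation, the regression surface at the heart of WRTDS has the form ln(C) = b0 + b1*t + b2*ln(Q) + b3*sin(2*pi*t) + b4*cos(2*pi*t). The sketch below fits that form once, unweighted, to invented data; actual WRTDS re-estimates it with observation weights tailored to every (time, discharge, season) point, which is where the coefficient surfaces discussed above come from. The b2 coefficient is the discharge sensitivity of concentration.

```python
import numpy as np

rng = np.random.default_rng(7)
t = rng.uniform(2000, 2015, 500)     # decimal years
lnQ = rng.normal(5.0, 1.0, 500)      # log discharge
lnC = (0.3 + 0.01 * (t - 2000) + 0.8 * lnQ
       + 0.2 * np.sin(2 * np.pi * t) - 0.1 * np.cos(2 * np.pi * t)
       + rng.normal(0, 0.2, 500))    # synthetic log concentration

X = np.column_stack([np.ones_like(t), t - 2000, lnQ,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, lnC, rcond=None)
print(np.round(beta, 3))             # ≈ [0.3, 0.01, 0.8, 0.2, -0.1]
```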

  9. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  10. Intrapartum fetal heart rate monitoring: evaluation of a standardized system of interpretation for prediction of metabolic acidosis at delivery and neonatal neurological morbidity.

    PubMed

    Soncini, Emanuele; Paganelli, Simone; Vezzani, Cristina; Gargano, Giancarlo; La Sala, Giovanni Battista

    2014-09-01

    To assess the ability of the intrapartum fetal heart rate interpretation system developed in 2008 by the National Institute of Child Health and Human Development (NICHD) to predict fetal metabolic acidosis at delivery and neonatal neurological morbidity. We analyzed the intrapartum fetal heart rate tracings of 314 singleton fetuses at ≥ 37 weeks using the NICHD three-tier system of interpretation: Category I (normal), Category II (indeterminate) and Category III (abnormal). Category II was further divided into Category IIA, with moderate fetal heart rate variability or accelerations, and Category IIB, with minimal/absent fetal heart rate variability and no accelerations. The presence and duration of the different patterns were compared with several clinical neonatal outcomes and with umbilical artery acid-base balance at birth. The mean values of pH and base excess decreased proportionally as tracings worsened (p < 0.001). The duration of at least 30 min for Category III tracings was highly predictive of a pH <7.00 and a base excess ≤-12 mmol/L. The same was true for the duration of Category IIB tracings that lasted for at least 50 min. Our study demonstrates that the interpretation of fetal heart rate tracings based on a strictly standardized system is closely associated with umbilical artery acid-base status at delivery.
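
    A highly simplified sketch of the category assignment described above, reducing the NICHD criteria to a few boolean flags; real categorization weighs baseline, variability, accelerations and deceleration patterns in much more detail.

```python
def categorize(normal_baseline, moderate_variability, accelerations,
               recurrent_decels, absent_variability_with_decels):
    """Toy three-tier assignment with Category II split into IIA/IIB."""
    if absent_variability_with_decels:
        return "III"      # abnormal
    if normal_baseline and moderate_variability and not recurrent_decels:
        return "I"        # normal
    if moderate_variability or accelerations:
        return "IIA"      # indeterminate, with reassuring features
    return "IIB"          # minimal/absent variability, no accelerations

print(categorize(True, False, False, True, False))   # -> "IIB"
```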

  11. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
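
    Two of the calibration-model acceptance metrics named above can be sketched from a standard-curve dilution series (Cq values invented): the slope of Cq versus log10 target quantity gives the amplification efficiency, E = 10^(-1/slope) - 1, and the correlation coefficient summarizes linearity.

```python
import numpy as np

log10_copies = np.array([5, 4, 3, 2, 1], dtype=float)   # serial dilution
cq = np.array([18.1, 21.5, 24.9, 28.3, 31.8])           # hypothetical Cq values

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1                     # ~96% for slope ≈ -3.42
r = np.corrcoef(log10_copies, cq)[0, 1]
print(f"slope {slope:.2f}, E = {efficiency:.1%}, r = {r:.4f}")
```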

  12. 42 CFR 440.260 - Methods and standards to assure quality of services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods and standards to assure quality of services. 440.260 Section 440.260 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH... and Limits Applicable to All Services § 440.260 Methods and standards to assure quality of services...

  13. 40 CFR 98.7 - What standardized methods are incorporated by reference into this part?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....astm.org. (1) ASTM C25-06 Standard Test Method for Chemical Analysis of Limestone, Quicklime, and....194(c), and § 98.334(b). (2) ASTM C114-09 Standard Test Methods for Chemical Analysis of Hydraulic... approved for § 98.6. (4) ASTM D240-02 (Reapproved 2007) Standard Test Method for Heat of Combustion of...

  14. 40 CFR 98.7 - What standardized methods are incorporated by reference into this part?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....astm.org. (1) ASTM C25-06 Standard Test Method for Chemical Analysis of Limestone, Quicklime, and....194(c), and § 98.334(b). (2) ASTM C114-09 Standard Test Methods for Chemical Analysis of Hydraulic... approved for § 98.6. (4) ASTM D240-02 (Reapproved 2007) Standard Test Method for Heat of Combustion of...

  15. The optimized log interpretation method and sweet-spot prediction of gas-bearing shale reservoirs

    NASA Astrophysics Data System (ADS)

    Tan, Maojin; Bai, Ze; Xu, Jingjing

    2017-04-01

    Shale gas is one of the most important unconventional oil and gas resources, and its lithology and reservoir type both differ from those of conventional reservoirs [1,2]. "Where are the shale reservoirs?", "How can the hydrocarbon potential be determined?" and "How should reservoir quality be evaluated?" are key problems facing geophysicists; together they amount to sweet-spot prediction and quantitative evaluation. As is known, sweet spots of organic shale include geological sweet spots and engineering sweet spots. Geophysical well logging can provide a wealth of in-situ formation information along the borehole, and all parameters describing the sweet spots of organic shale can be obtained by geophysical log interpretation [2]. Based on the geological and petrophysical characteristics of gas shale, the log response characteristics of gas shales are summarized. The geological sweet spot includes hydrocarbon potential, porosity, fracture, water saturation and total gas content, which can be calculated using wireline logs [3]. Firstly, log-based hydrocarbon potential evaluation is carried out, and an RBF neural network method is developed to estimate the total organic carbon content (TOC), which proved more effective and suitable than the empirical-formula and ΔlogR methods [4]. Next, optimized log interpretation is achieved by model-searching, and the mineral concentrations of kerogen, clay, feldspar and pyrite, as well as porosity, are calculated. On the other hand, the engineering sweet spot of shale refers to rock physical properties and rock mechanics parameters. Elastic properties including the bulk modulus, shear modulus and Poisson's ratio are correspondingly determined from log interpretation, and the brittleness index (BI), effective stress and pore pressure are also estimated. BI is one of the most important engineering sweet spot parameters. A large number of instances show that the summarized log responses can accurately identify gas-bearing shale, and the proposed RBF
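
    In the spirit of the RBF approach mentioned for TOC estimation (the paper's network configuration is not reproduced; the logs, centers, kernel width and TOC values below are all invented), a Gaussian-basis regression can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 3))                 # 3 standardized log curves, 80 depths
toc = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.1, 80)  # synthetic TOC

centers, sigma = X[::8], 1.5                 # every 8th sample as an RBF center

def phi(X, centers, sigma):
    """Gaussian radial-basis activations of each sample at each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

G = phi(X, centers, sigma)
w, *_ = np.linalg.lstsq(G, toc, rcond=None)  # linear output weights

pred = phi(X[:5], centers, sigma) @ w        # predictions at the first 5 depths
print(np.round(pred, 2), np.round(toc[:5], 2))
```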

  16. Limited English proficient Hmong- and Spanish-speaking patients’ perceptions of the quality of interpreter services

    PubMed Central

    Lor, Maichou; Xiong, Phia; Schweia, Rebecca J.; Bowers, Barbara; Jacobs, Elizabeth A.

    2015-01-01

    Background: Language barriers are a large and growing problem for patients in the U.S. and around the world. Interpreter services are a standard solution for addressing language barriers and most research has focused on utilization of interpreter services and their effect on health outcomes for patients who do not speak the same language as their healthcare providers, including nurses. However, there is limited research on patients' perceptions of these interpreter services. Objective: To examine Hmong- and Spanish-speaking patients' perceptions of interpreter service quality in the context of receiving cancer preventive services. Methods: Twenty limited English proficient participants, Hmong-speaking (n=10) and Spanish-speaking (n=10), ranging in age from 33 to 75 years, were interviewed by two bilingual researchers in a Midwestern state. Interviews were audio-taped, transcribed verbatim, and translated into English. Analysis was done using conventional content analysis. Results: The two groups shared perceptions about the quality of interpreter services as variable along three dimensions. Specifically, both groups evaluated the quality of interpreters based on the interpreters' ability to provide: (a) literal interpretation, (b) cultural interpretation, and (c) emotional interpretation during the health care encounter. The groups differed, however, on how they described the consequences of poor interpretation quality. Hmong participants described how poor-quality interpretation could lead to: (a) poor interpersonal relationships among patients, providers, and interpreters, (b) inability of patients to follow through with treatment plans, and (c) emotional distress for patients. Conclusions: Our study highlights the fact that patients are discerning consumers of interpreter services and could be effective partners in efforts to reform and enhance interpreter services. PMID:25865517

  17. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    PubMed

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. Material & Methods: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that the methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider the methodical approach to protein quantification in techniques that report quantitative differences.

  18. Differences in lupus anticoagulant final conclusion through clotting time or Rosner index for mixing test interpretation.

    PubMed

    Depreter, Barbara; Devreese, Katrien M J

    2016-09-01

    Lupus anticoagulant (LAC) testing includes a screening, mixing and confirmation step. Although recently published guidelines on LAC testing are a useful step towards standardization, a lack of consensus remains on whether to express mixing tests as clotting time (CT) or as the index of circulating anticoagulant (ICA). The influence of anticoagulant therapy, e.g. vitamin K antagonists (VKA) or direct oral anticoagulants (DOAC), on both methods of interpretation remains to be investigated. The objective of this study was to contribute to a simplification and standardization of the three-step LAC interpretation at the level of the mixing test. Samples from 148 consecutive patients with a LAC request and a prolonged screening step, and 77 samples from patients non-suspicious for LAC treated with VKA (n=37) or DOAC (n=30), were retrospectively evaluated. An activated partial thromboplastin time (aPTT) and dilute Russell's viper venom time (dRVVT) were used for routine LAC testing. The supplemental anticoagulant samples were tested with dRVVT only. We focused on the interpretation differences for mixing tests expressed as CT or ICA and compared the final LAC conclusion within each distinct group of concordant and discordant mixing test results. Mixing test interpretation by CT resulted in 10 (dRVVT) and 16 (aPTT) more LAC-positive patients compared with interpretation by ICA. Isolated prolonged dRVVT screen mix ICA results were exclusively observed in samples from VKA-treated patients without suspicion for LAC. We recommend using CT with respect to the 99th percentile cut-off for interpretation of mixing steps in order to reach the highest sensitivity and specificity in LAC detection.
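
    The two readouts under comparison are easy to state side by side. A sketch with invented clotting times and illustrative cut-offs; each laboratory must derive its own 99th-percentile limits:

```python
ct_patient = 52.0     # s, patient plasma screening test (invented)
ct_mix = 44.0         # s, 1:1 patient/normal plasma mix (invented)
ct_normal = 33.0      # s, normal pooled plasma (invented)
cutoff_ct_99 = 38.0   # s, illustrative 99th-percentile cut-off for the mixing CT
cutoff_ica = 12.0     # illustrative decision threshold for the index

ica = 100 * (ct_mix - ct_normal) / ct_patient   # index of circulating anticoagulant
print(f"mix CT {ct_mix} s -> {'positive' if ct_mix > cutoff_ct_99 else 'negative'}")
print(f"ICA {ica:.1f} -> {'positive' if ica > cutoff_ica else 'negative'}")
```

    Because the two readouts normalize differently, borderline samples, especially from anticoagulated patients, can fall on opposite sides of their respective cut-offs, which is the discordance the study quantifies.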

  19. Codetype-based interpretation of the MMPI-2 in an outpatient psychotherapy sample.

    PubMed

    Koffmann, Andrew

    2015-01-01

    In an evaluation of the codetype-based interpretation of the MMPI-2, 48 doctoral student psychotherapists rated their clients' (N = 120) standardized interpretations as more accurate when based on the profile's codetype, in comparison with ratings for interpretations based on alternate codetypes. Effect sizes ranged from nonsignificant to large, depending on the degree of proximity between the profile's codetype and the alternate codetype. There was weak evidence to suggest that well-defined profiles yielded more accurate interpretations than undefined profiles. It appears that codetype-based interpretation of the MMPI-2 is generally valid, but there might be little difference in the accuracy of interpretations based on nearby codetypes.

  20. A new framework for the documentation and interpretation of oral food challenges in population-based and clinical research.

    PubMed

    Grabenhenrich, L B; Reich, A; Bellach, J; Trendelenburg, V; Sprikkelman, A B; Roberts, G; Grimshaw, K E C; Sigurdardottir, S; Kowalski, M L; Papadopoulos, N G; Quirce, S; Dubakiene, R; Niggemann, B; Fernández-Rivas, M; Ballmer-Weber, B; van Ree, R; Schnadt, S; Mills, E N C; Keil, T; Beyer, K

    2017-03-01

    The conduct of oral food challenges as the preferred diagnostic standard for food allergy (FA) has been harmonized in recent years. However, documentation and interpretation of challenge results, particularly in research settings, are not sufficiently standardized to allow valid comparisons between studies. Our aim was to develop a diagnostic toolbox to capture and report clinical observations in double-blind placebo-controlled food challenges (DBPCFC). A group of experienced allergists, paediatricians, dieticians, epidemiologists and data managers developed generic case report forms and standard operating procedures for DBPCFCs and piloted them in three clinical centres. The follow-up of the EuroPrevall/iFAAM birth cohort and other iFAAM work packages applied these methods. A set of newly developed questionnaire and interview items captures the history of FA. Together with sensitization status, this forms the basis for the decision to perform a DBPCFC, following a standardized decision algorithm. A generic form including details about severity and timing captures signs and symptoms observed during or after the procedures. In contrast to the commonly used dichotomous outcome FA vs no FA, the allergy status is interpreted in multiple categories to reflect the complexity of clinical decision-making. The proposed toolbox sets a standard for improved documentation and harmonized interpretation of DBPCFCs. Through detailed documentation and a common terminology for communicating outcomes, these tools aim to reduce the influence of the supervising physicians' subjective judgment. All forms are publicly available for further evolution and free use in clinical and research settings. © 2016 The Authors. Allergy Published by John Wiley & Sons Ltd.

  1. Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis

    PubMed Central

    Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.

    2011-01-01

    Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods: A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results: Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93 for the analysis of a certified reference material using the standardized methodologies. Conclusion: The results of this study demonstrate that the development and use of standardized protocols for F analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184

  2. Interpretable functional principal component analysis.

    PubMed

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
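
    To make the penalty idea concrete, here is a toy stand-in for penalty-based interpretable FPCA: soft-thresholded power iteration on curves discretized to a common grid, with projection deflation between components. This is not the authors' estimator; it only illustrates how a sparsity penalty yields components that are exactly zero outside the intervals where variation is concentrated.

        import numpy as np

        def soft_threshold(v, lam):
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def sparse_fpcs(X, n_components=2, lam=0.05, n_iter=200, seed=0):
            """X: (n_curves, n_gridpoints) matrix of curves on a shared grid."""
            C = np.cov(X, rowvar=False)              # sample covariance on the grid
            rng = np.random.default_rng(seed)
            comps = []
            for _ in range(n_components):
                v = rng.standard_normal(C.shape[0])
                v /= np.linalg.norm(v)
                for _ in range(n_iter):
                    v = soft_threshold(C @ v, lam)   # power step plus sparsity penalty
                    norm = np.linalg.norm(v)
                    if norm == 0:                    # penalty too strong; stop early
                        break
                    v /= norm
                comps.append(v)
                P = np.eye(len(v)) - np.outer(v, v)  # projection deflation
                C = P @ C @ P
            return np.array(comps)

        # Toy curves that vary mainly on the first half of the domain.
        grid = np.linspace(0, 1, 100)
        rng = np.random.default_rng(1)
        X = rng.standard_normal((50, 1)) * np.where(grid < 0.5, np.sin(2 * np.pi * grid), 0.0)
        X = X + 0.05 * rng.standard_normal((50, 100))
        fpc1 = sparse_fpcs(X)[0]
        print("grid points where FPC1 is nonzero:", int((np.abs(fpc1) > 1e-8).sum()))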

  3. Method of Fabricating NASA-Standard Macro-Fiber Composite Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    High, James W.; Wilkie, W. Keats

    2003-01-01

    The NASA Macro-Fiber Composite actuator is a flexible piezoelectric composite device designed for controlling vibrations and shape deformations in high performance aerospace structures. A complete method for fabricating the standard NASA Macro-Fiber Composite actuator is presented in this document. When followed precisely, these procedures will yield devices with electromechanical properties identical to the standard actuator manufactured by NASA Langley Research Center.

  4. Methods, applications, interpretations and challenges of interrupted time series (ITS) data: protocol for a scoping review.

    PubMed

    Ewusie, Joycelyne E; Blondal, Erik; Soobiah, Charlene; Beyene, Joseph; Thabane, Lehana; Straus, Sharon E; Hamid, Jemila S

    2017-07-02

    Interrupted time series (ITS) design involves collecting data across multiple time points before and after the implementation of an intervention to assess the effect of the intervention on an outcome. ITS designs have become increasingly common and are frequently used to assess the impact of evidence implementation interventions. Several statistical methods are currently available for analysing data from ITS designs; however, there is a lack of guidance on which methods are optimal for different data types and on their implications for interpreting results. Our objective is to conduct a scoping review of existing methods for analysing ITS data, to summarise their characteristics and properties, and to examine how the results are reported. We also aim to identify gaps and methodological deficiencies. We will search electronic databases from inception until August 2016 (eg, MEDLINE and JSTOR). Two reviewers will independently screen titles, abstracts and full-text articles and complete the data abstraction. The anticipated outcome will be a summarised description of all the methods that have been used in analysing ITS data in health research, how those methods were applied, their strengths and limitations, and the transparency of interpretation/reporting of the results. We will provide summary tables of the characteristics of the included studies. We will also describe the similarities and differences of the various methods. Ethical approval is not required for this study since we are only considering the methods used in the analysis and there will be no identifiable patient data. Results will be disseminated through open access peer-reviewed publications. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
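
    Among the methods such a review would catalogue, the most common is segmented regression: a level and slope before the intervention plus a level change and slope change after it. The sketch below uses simulated monthly data for illustration; real analyses must also address autocorrelation, here handled crudely with HAC (Newey-West) standard errors.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n, t0 = 48, 24                          # 48 monthly points, intervention at month 24
        t = np.arange(n)
        post = (t >= t0).astype(float)          # post-intervention indicator
        t_after = np.where(t >= t0, t - t0, 0)  # time elapsed since the intervention

        # Simulated outcome: rising trend, then a drop in level and a flattened slope.
        y = 10 + 0.20 * t - 3.0 * post - 0.10 * t_after + rng.normal(0, 1, n)

        X = sm.add_constant(np.column_stack([t, post, t_after]))
        fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
        print(fit.params)  # [baseline level, pre-slope, level change, slope change]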

  5. Diagnostic electrocardiography in epidemiological studies of Chagas' disease: multicenter evaluation of a standardized method.

    PubMed

    Lázzari, J O; Pereira, M; Antunes, C M; Guimarães, A; Moncayo, A; Chávez Domínguez, R; Hernández Pieretti, O; Macedo, V; Rassi, A; Maguire, J; Romero, A

    1998-11-01

    An electrocardiographic recording method with an associated reading guide, designed for epidemiological studies on Chagas' disease, was tested to assess its diagnostic reproducibility. Six cardiologists from five countries each read 100 electrocardiographic (ECG) tracings, including 30 from chronic chagasic patients, then reread them after an interval of 6 months. The readings were blind, with the tracings numbered randomly for the first reading and renumbered randomly for the second reading. The physicians, all experienced in interpreting ECGs from chagasic patients, followed printed instructions for reading the tracings. Reproducibility of the readings was evaluated using the kappa (κ) index for concordance. The results showed a high degree of interobserver concordance with respect to the diagnosis of normal vs. abnormal tracings (κ = 0.66; SE 0.02). While the interpretations of some categories of ECG abnormalities were highly reproducible, others, especially those having a low prevalence, showed lower levels of concordance. Intraobserver concordance was uniformly higher than interobserver concordance. The findings of this study justify the use by specialists of the recording and reading method proposed for epidemiological studies on Chagas' disease, but warrant caution in the interpretation of some categories of electrocardiographic alterations.
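
    For readers unfamiliar with the concordance index used here, the sketch below computes Cohen's kappa for two readers' dichotomous calls (1 = abnormal, 0 = normal); the ratings are invented for illustration.

        import numpy as np

        def cohens_kappa(r1, r2):
            r1, r2 = np.asarray(r1), np.asarray(r2)
            po = np.mean(r1 == r2)                        # observed agreement
            pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected by chance
                     for c in np.union1d(r1, r2))
            return (po - pe) / (1.0 - pe)

        reader_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        reader_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
        print(f"kappa = {cohens_kappa(reader_a, reader_b):.2f}")   # 0.58 here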

  6. 21 CFR 130.3 - Definitions and interpretations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Definitions and interpretations. 130.3 Section 130.3 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION FOOD STANDARDS: GENERAL General Provisions § 130.3 Definitions and...

  7. 21 CFR 130.3 - Definitions and interpretations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Definitions and interpretations. 130.3 Section 130.3 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION FOOD STANDARDS: GENERAL General Provisions § 130.3 Definitions and...

  8. Vascular disease, ESRD, and death: interpreting competing risk analyses.

    PubMed

    Grams, Morgan E; Coresh, Josef; Segev, Dorry L; Kucirka, Lauren M; Tighiouart, Hocine; Sarnak, Mark J

    2012-10-01

    Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989-1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20-2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15-2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors.
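
    The discrepancy the authors report can be reproduced in a few lines: censoring at the competing event (the complement of Kaplan-Meier) overstates absolute risk relative to the cumulative incidence function. The sketch below implements both estimators in plain numpy on simulated data; the event coding (0 = censored, 1 = ESRD, 2 = pre-ESRD death), rates and 15-year horizon are invented for illustration, and the paper's regression models are not shown.

        import numpy as np

        def naive_one_minus_km(times, events, cause, horizon):
            """Complement of Kaplan-Meier, censoring at the competing event."""
            order = np.argsort(times)
            times, events = times[order], events[order]
            at_risk, surv = len(times), 1.0
            for t, e in zip(times, events):
                if t > horizon:
                    break
                if e == cause:
                    surv *= 1.0 - 1.0 / at_risk
                at_risk -= 1
            return 1.0 - surv

        def cumulative_incidence(times, events, cause, horizon):
            """Aalen-Johansen-style cumulative incidence function, no covariates."""
            order = np.argsort(times)
            times, events = times[order], events[order]
            at_risk, surv, cif = len(times), 1.0, 0.0
            for t, e in zip(times, events):
                if t > horizon:
                    break
                if e == cause:
                    cif += surv / at_risk    # event-free survival just before t
                if e != 0:
                    surv *= 1.0 - 1.0 / at_risk
                at_risk -= 1
            return cif

        rng = np.random.default_rng(0)
        t_esrd, t_death = rng.exponential(12, 500), rng.exponential(20, 500)
        times = np.minimum(np.minimum(t_esrd, t_death), 15.0)   # administrative censoring
        events = np.where(times == 15.0, 0, np.where(t_esrd < t_death, 1, 2))
        print("naive 1 - KM :", round(naive_one_minus_km(times, events, 1, 15.0), 2))
        print("CIF          :", round(cumulative_incidence(times, events, 1, 15.0), 2))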

  9. How people interpret healthy eating: contributions of qualitative research.

    PubMed

    Bisogni, Carole A; Jastran, Margaret; Seligson, Marc; Thompson, Alyssa

    2012-01-01

    To identify how qualitative research has contributed to understanding the ways people in developed countries interpret healthy eating. Bibliographic database searches identified reports of qualitative, empirical studies published in English, peer-reviewed journals since 1995. Authors coded, discussed, recoded, and analyzed papers reporting qualitative research studies related to participants' interpretations of healthy eating. Studies emphasized a social constructionist approach, and most used focus groups and/or individual, in-depth interviews to collect data. Study participants explained healthy eating in terms of food, food components, food production methods, physical outcomes, psychosocial outcomes, standards, personal goals, and as requiring restriction. Researchers described meanings as specific to life stages and different life experiences, such as parenting and disease onset. Identity (self-concept), social settings, resources, food availability, and conflicting considerations were themes in participants' explanations for not eating according to their ideals for healthy eating. People interpret healthy eating in complex and diverse ways that reflect their personal, social, and cultural experiences, as well as their environments. Their meanings include but are broader than the food composition and health outcomes considered by scientists. The rich descriptions and concepts generated by qualitative research can help practitioners and researchers think beyond their own experiences and be open to audience members' perspectives as they seek to promote healthy ways of eating. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  10. Standardized methods for photography in procedural dermatology using simple equipment.

    PubMed

    Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina

    2017-04-01

    Photography is an important tool in dermatology. Reproducing the settings of before photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips for obtaining photographs, both in clinical practice and in procedural dermatology research, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.

  11. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  12. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  13. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  14. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  15. 28 CFR 904.2 - Interpretation of the criminal history record screening requirement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Interpretation of the criminal history... PRIVACY COMPACT COUNCIL STATE CRIMINAL HISTORY RECORD SCREENING STANDARDS § 904.2 Interpretation of the criminal history record screening requirement. Compact Article IV(c) provides that “Any record obtained...

  16. Method and platform standardization in MRM-based quantitative plasma proteomics.

    PubMed

    Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H

    2013-12-16

    There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. This

  17. Standardization of 237Np by the CIEMAT/NIST LSC tracer method

    PubMed

    Gunther

    2000-03-01

    The standardization of 237Np presents some difficulties: several groups of alpha, beta and gamma radiation, chemical problems with the daughter nuclide 233Pa, an incomplete radioactive equilibrium after sample preparation, high conversion of some gamma transitions. To solve the chemical problems, a sample composition involving the Ultima Gold AB scintillator and a high concentration of HCl is used. Standardization by the CIEMAT/NIST method and by pulse shape discrimination is described. The results agree within 0.1% with those obtained by two other methods.

  18. 75 FR 14386 - Interpretation of Transmission Planning Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... created electronically using word processing software should be filed in native applications or print-to.... FERC, 564 F.3d 1342 (DC Cir. 2009). \\6\\ Mandatory Reliability Standards for the Bulk-Power System... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...

  19. From SOPs to Reports to Evaluations: Learning and Memory as a Case Study of how Missing Data and Methods Impact Interpretation

    EPA Science Inventory

    In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for lea...

  20. Validating Automated Essay Scoring: A (Modest) Refinement of the "Gold Standard"

    ERIC Educational Resources Information Center

    Powers, Donald E.; Escoffery, David S.; Duchnowski, Matthew P.

    2015-01-01

    By far, the most frequently used method of validating (the interpretation and use of) automated essay scores has been to compare them with scores awarded by human raters. Although this practice is questionable, human-machine agreement is still often regarded as the "gold standard." Our objective was to refine this model and apply it to…

  1. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    To help side-looking airborne radar (SLAR) imagery become a more widely used data source in geoscience and agriculture, interpretation keys are proposed as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associative dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.
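
    A dichotomous key is essentially a binary decision tree of yes/no image questions. The toy sketch below encodes one such key; the criteria and land-cover classes are invented and far simpler than an operational SLAR key.

        # Nested (question, yes-branch, no-branch) tuples; leaves are class labels.
        KEY = ("bright (high backscatter) return?",
               ("regular field pattern?", "row crop", "forest / rough surface"),
               ("linear feature?", "road or canal", "smooth water / bare soil"))

        def classify(answers):
            """answers: dict mapping each question to True/False."""
            question, yes_branch, no_branch = KEY
            branch = yes_branch if answers[question] else no_branch
            question2, leaf_yes, leaf_no = branch
            return leaf_yes if answers[question2] else leaf_no

        print(classify({"bright (high backscatter) return?": True,
                        "regular field pattern?": True}))   # -> row crop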

  2. Standardization of Spore Inactivation Method for PMA-PhyloChip Analysis

    NASA Technical Reports Server (NTRS)

    Schrader, Michael

    2011-01-01

    In compliance with the Committee on Space Research (COSPAR) planetary protection policy, the National Aeronautics and Space Administration (NASA) monitors the total microbial burden of spacecraft as a means of minimizing the inadvertent transfer of viable contaminant microorganisms to extraterrestrial environments (forward contamination). NASA standard assay-based counts are used as a proxy for relative surface cleanliness, to estimate overall microbial burden, and to assess whether forward planetary protection risk criteria are met for a given mission; these criteria vary by the planetary body to be explored and by whether life detection experiments are present. Despite efforts to reduce the presence of microorganisms on spacecraft prior to launch, microbes have been isolated from spacecraft and associated surfaces within the extreme conditions of clean room facilities using state-of-the-art molecular technologies. Development of a more sensitive method that will better enumerate all viable microorganisms from spacecraft and associated surfaces could support future life detection missions. Current culture-based (NASA standard spore assay) and nucleic-acid-based polymerase chain reaction (PCR) methods have significant shortcomings in this type of analysis. The overall goal of this project is to evaluate and validate a new molecular method based on the use of the deoxyribonucleic acid (DNA) intercalating agent propidium monoazide (PMA). This is used in combination with a DNA microarray (PhyloChip), which has been shown to identify very low levels of organisms on spacecraft-associated surfaces. PMA can only penetrate the membrane of dead cells. Once inside, it intercalates into the DNA and, upon photolysis with visible light, produces stable DNA monoadducts, rendering the DNA unavailable for further PCR analysis. The specific aim of this study is to standardize the spore inactivation method for PMA-PhyloChip analysis. We have used the bacterial spores Bacillus

  3. Helping Standards Make the Grade.

    ERIC Educational Resources Information Center

    Guskey, Thomas R.

    2001-01-01

    Educators can develop fair and accurate standards-based grading/reporting by switching to criterion-referenced grading practices; using differentiated criteria (denoting product, process, and progress); clarifying the purpose of each reporting tool; and developing a reporting form that identifies standards, facilitates interpretation, and…

  4. Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.

    PubMed

    Bowman, Richard G; Caraway, David; Bentley, Ishmael

    2013-01-01

    Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, by manually tying general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from the fascia of a caprine spine; 2) axial pull from the fascia of a caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest that the novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices, and may in fact provide a more secure fixation than standard suturing. © 2012 International Neuromodulation Society.
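
    Given the reported means, standard deviations and six samples per configuration, the comparison can be reconstructed with a two-sample t-test from summary statistics. The paper does not state which test was used, so the Welch t-test below is an assumption for illustration.

        from scipy.stats import ttest_ind_from_stats

        # (suture mean, suture SD), (fiXate mean, fiXate SD) in lbs, n = 6 each.
        scenarios = {
            "perpendicular, caprine": ((8.95, 1.39), (15.93, 2.09)),
            "axial, caprine":         ((6.79, 1.55), (12.31, 4.26)),
            "axial, Mylar":           ((10.87, 1.56), (19.54, 2.24)),
        }
        for name, ((m1, s1), (m2, s2)) in scenarios.items():
            t, p = ttest_ind_from_stats(m1, s1, 6, m2, s2, 6, equal_var=False)
            print(f"{name}: t = {t:.2f}, p = {p:.4f}")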

  5. A roadmap for interpreting the literature on vision and driving.

    PubMed

    Owsley, Cynthia; Wood, Joanne M; McGwin, Gerald

    2015-01-01

    Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified attention to this topic has most likely been stimulated by the lack of an evidence basis for determining vision standards for driving licensure and a poor understanding about how vision impairment impacts driver safety and performance. Clinicians depend on the literature on vision and driving to advise visually impaired patients appropriately about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and are thus fair to persons who are visually impaired. Thus it is important for clinicians and policy makers alike to understand how various study designs and measurement methods should be interpreted so that the conclusions and recommendations they make are not overly broad, too narrowly constrained, or even misguided. We offer a methodological framework to guide interpretations of studies on vision and driving that can also serve as a heuristic for researchers in the area. Here, we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. a Kml-Based Approach for Distributed Collaborative Interpretation of Remote Sensing Images in the Geo-Browser

    NASA Astrophysics Data System (ADS)

    Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.

    2012-07-01

    Existing implementations of collaborative image interpretation have many limitations for very large satellite images, such as inefficient browsing and slow transmission. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, geometry, etc.) in the geo-browser. Existing KML elements can be used to describe simple interpretation results indicated by vector symbols. To enlarge its application, this article extends KML elements to describe some complex image processing operations, including band combination, grey transformation, geometric correction, etc. The improved KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops several collaboration-related services, namely a collaboration launch service, a perceiving service and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perceiving service supports interpreters in sharing collaboration awareness. The communication service provides interpreters with written-text communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed at LIESMARS) was selected to perform experiments on collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, can provide distributed users with quick browsing and transmission. Meanwhile, in the geo-browser, GIS data (for example DEM, DTM, thematic maps, etc.) can be integrated to assist in improving the accuracy of interpretation. Results show that the proposed method is able to support distributed collaborative interpretation of remote sensing images.
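
    As a concrete illustration of the kind of KML payload such a geo-browser could exchange, the snippet below writes a single interpreted feature as a standard KML 2.2 Placemark from Python. The coordinates, feature class and file name are invented, and the authors' extended elements for processing operations (band combination, grey transformation, etc.) are not shown.

        kml = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Document>
            <Placemark>
              <name>Interpreted feature: paddy field</name>
              <description>Annotated during a collaborative session</description>
              <Polygon><outerBoundaryIs><LinearRing><coordinates>
                114.35,30.52,0 114.36,30.52,0 114.36,30.53,0 114.35,30.52,0
              </coordinates></LinearRing></outerBoundaryIs></Polygon>
            </Placemark>
          </Document>
        </kml>"""

        with open("interpretation_result.kml", "w", encoding="utf-8") as f:
            f.write(kml)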

  7. Evaluating the Rank-Ordering Method for Standard Maintaining

    ERIC Educational Resources Information Center

    Bramley, Tom; Gill, Tim

    2010-01-01

    The rank-ordering method for standard maintaining was designed for the purpose of mapping a known cut-score (e.g. a grade boundary mark) on one test to an equivalent point on the test score scale of another test, using holistic expert judgements about the quality of exemplars of examinees' work (scripts). It is a novel application of an old…

  8. Computing tools for implementing standards for single-case designs.

    PubMed

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted the analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.
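
    As one concrete example of an analysis such tools implement for single-case data, the sketch below computes nonoverlap of all pairs (NAP), a widely used SCD effect size. It is offered purely as an illustration (not as the WWC standards themselves), and the phase data are hypothetical.

        import numpy as np

        def nap(baseline, treatment):
            b = np.asarray(baseline, dtype=float)
            t = np.asarray(treatment, dtype=float)
            diff = t[None, :] - b[:, None]          # every (baseline, treatment) pair
            return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

        phase_a = [3, 4, 2, 5, 4]                   # baseline observations
        phase_b = [6, 7, 5, 8, 7, 9]                # treatment observations
        print(f"NAP = {nap(phase_a, phase_b):.2f}") # 1.00 would mean complete nonoverlap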

  9. The use of lower resolution viewing devices for mammographic interpretation: implications for education and training.

    PubMed

    Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G

    2015-10-01

    To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.

  10. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
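
    A minimal worked example of the method as stated above, assuming a hypothetical partial maximum temperature series: the extreme estimate is simply the series mean plus KE standard deviations.

        import numpy as np

        # Hypothetical partial maximum series of annual peak stream temperatures (deg C).
        annual_max_temps = np.array([27.1, 27.8, 26.9, 27.5, 27.3, 27.9, 26.8, 27.6])

        mean = annual_max_temps.mean()
        sd = annual_max_temps.std(ddof=1)
        for k_e in (7, 8):  # the range the study concludes is physically reasonable
            print(f"K_E = {k_e}: estimated extreme = {mean + k_e * sd:.1f} deg C")

        # Consistent with the error analysis above: a one-unit error in K_E shifts
        # the estimate by exactly one standard deviation of the series (~0.4 deg C here).
        print(f"one-unit K_E error -> {sd:.2f} deg C change in the estimate")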

  11. Influence relevance voting: an accurate and interpretable virtual high throughput screening method.

    PubMed

    Swamidass, S Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Shiou-Chuan; Baldi, Pierre

    2009-04-01

    Given activity training data from high-throughput screening (HTS) experiments, virtual high-throughput screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by nonlinearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also nonlinearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open, competitions, and its performance compared to the performance of other participating methods, as well as of an in-house support vector machine (SVM) method. On these benchmark data sets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have a probabilistic semantic; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS.
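
    The following is a much-simplified sketch of the influence decomposition described above: each of the k most similar training compounds contributes an influence equal to a relevance nonlinearity applied to its similarity, multiplied by a class-dependent vote, and the summed influences are squashed into a probability. The cosine similarity, fixed parameter values and toy data are all illustrative; the published IRV learns its parameters and typically uses fingerprint similarities such as Tanimoto.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def irv_predict(x, X_train, y_train, k=5,
                        w_rel=(4.0, -1.0), votes=(1.0, -1.0), bias=0.0):
            sims = X_train @ x / (np.linalg.norm(X_train, axis=1)
                                  * np.linalg.norm(x) + 1e-12)
            nn = np.argsort(-sims)[:k]                             # k nearest neighbors
            relevance = sigmoid(w_rel[0] * sims[nn] + w_rel[1])    # relevance component
            vote = np.where(y_train[nn] == 1, votes[0], votes[1])  # vote component
            return sigmoid(bias + np.sum(relevance * vote))        # probabilistic output

        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((100, 16))     # toy descriptor vectors
        y_train = (X_train[:, 0] > 0).astype(int)    # toy activity labels
        print(f"P(active) = {irv_predict(X_train[0], X_train, y_train):.2f}")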

  12. Interpreters, Interpreting, and the Study of Bilingualism.

    ERIC Educational Resources Information Center

    Valdes, Guadalupe; Angelelli, Claudia

    2003-01-01

    Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests research carried out on interpreting--while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting--provides valuable insights about complex aspects of…

  13. Standardizing terms, definitions and concepts for describing and interpreting unwanted immunogenicity of biopharmaceuticals: recommendations of the Innovative Medicines Initiative ABIRISK consortium.

    PubMed

    Rup, B; Pallardy, M; Sikkema, D; Albert, T; Allez, M; Broet, P; Carini, C; Creeke, P; Davidson, J; De Vries, N; Finco, D; Fogdell-Hahn, A; Havrdova, E; Hincelin-Mery, A; C Holland, M; H Jensen, P E; Jury, E C; Kirby, H; Kramer, D; Lacroix-Desmazes, S; Legrand, J; Maggi, E; Maillère, B; Mariette, X; Mauri, C; Mikol, V; Mulleman, D; Oldenburg, J; Paintaud, G; R Pedersen, C; Ruperto, N; Seitz, R; Spindeldreher, S; Deisenhammer, F

    2015-09-01

    Biopharmaceuticals (BPs) represent a rapidly growing class of approved and investigational drug therapies that is contributing significantly to advancing treatment in multiple disease areas, including inflammatory and autoimmune diseases, genetic deficiencies and cancer. Unfortunately, unwanted immunogenic responses to BPs, in particular those affecting clinical safety or efficacy, remain among the most common negative effects associated with this important class of drugs. To manage and reduce risk of unwanted immunogenicity, diverse communities of clinicians, pharmaceutical industry and academic scientists are involved in: interpretation and management of clinical and biological outcomes of BP immunogenicity, improvement of methods for describing, predicting and mitigating immunogenicity risk and elucidation of underlying causes. Collaboration and alignment of efforts across these communities is made difficult due to lack of agreement on concepts, practices and standardized terms and definitions related to immunogenicity. The Innovative Medicines Initiative (IMI; www.imi-europe.org), ABIRISK consortium [Anti-Biopharmaceutical (BP) Immunization Prediction and Clinical Relevance to Reduce the Risk; www.abirisk.eu] was formed by leading clinicians, academic scientists and EFPIA (European Federation of Pharmaceutical Industries and Associations) members to elucidate underlying causes, improve methods for immunogenicity prediction and mitigation and establish common definitions around terms and concepts related to immunogenicity. These efforts are expected to facilitate broader collaborations and lead to new guidelines for managing immunogenicity. To support alignment, an overview of concepts behind the set of key terms and definitions adopted to date by ABIRISK is provided herein along with a link to access and download the ABIRISK terms and definitions and provide comments (http://www.abirisk.eu/index_t_and_d.asp). © 2015 British Society for Immunology.

  14. Standardizing terms, definitions and concepts for describing and interpreting unwanted immunogenicity of biopharmaceuticals: recommendations of the Innovative Medicines Initiative ABIRISK consortium

    PubMed Central

    Rup, B; Pallardy, M; Sikkema, D; Albert, T; Allez, M; Broet, P; Carini, C; Creeke, P; Davidson, J; De Vries, N; Finco, D; Fogdell-Hahn, A; Havrdova, E; Hincelin-Mery, A; C Holland, M; H Jensen, P E; Jury, E C; Kirby, H; Kramer, D; Lacroix-Desmazes, S; Legrand, J; Maggi, E; Maillère, B; Mariette, X; Mauri, C; Mikol, V; Mulleman, D; Oldenburg, J; Paintaud, G; R Pedersen, C; Ruperto, N; Seitz, R; Spindeldreher, S; Deisenhammer, F

    2015-01-01

    Biopharmaceuticals (BPs) represent a rapidly growing class of approved and investigational drug therapies that is contributing significantly to advancing treatment in multiple disease areas, including inflammatory and autoimmune diseases, genetic deficiencies and cancer. Unfortunately, unwanted immunogenic responses to BPs, in particular those affecting clinical safety or efficacy, remain among the most common negative effects associated with this important class of drugs. To manage and reduce risk of unwanted immunogenicity, diverse communities of clinicians, pharmaceutical industry and academic scientists are involved in: interpretation and management of clinical and biological outcomes of BP immunogenicity, improvement of methods for describing, predicting and mitigating immunogenicity risk and elucidation of underlying causes. Collaboration and alignment of efforts across these communities is made difficult due to lack of agreement on concepts, practices and standardized terms and definitions related to immunogenicity. The Innovative Medicines Initiative (IMI; http://www.imi-europe.org), ABIRISK consortium [Anti-Biopharmaceutical (BP) Immunization Prediction and Clinical Relevance to Reduce the Risk; http://www.abirisk.eu] was formed by leading clinicians, academic scientists and EFPIA (European Federation of Pharmaceutical Industries and Associations) members to elucidate underlying causes, improve methods for immunogenicity prediction and mitigation and establish common definitions around terms and concepts related to immunogenicity. These efforts are expected to facilitate broader collaborations and lead to new guidelines for managing immunogenicity. To support alignment, an overview of concepts behind the set of key terms and definitions adopted to date by ABIRISK is provided herein along with a link to access and download the ABIRISK terms and definitions and provide comments (http://www.abirisk.eu/index_t_and_d.asp). PMID:25959571

  15. The Development of Quality Measures for the Performance and Interpretation of Esophageal Manometry

    PubMed Central

    Yadlapati, Rena; Gawron, Andrew J.; Keswani, Rajesh N.; Bilimoria, Karl; Castell, Donald O.; Dunbar, Kerry B.; Gyawali, Chandra P.; Jobe, Blair A.; Katz, Philip O.; Katzka, David A.; Lacy, Brian E.; Massey, Benson T.; Richter, Joel E.; Schnoll-Sussman, Felice; Spechler, Stuart J.; Tatum, Roger; Vela, Marcelo F.; Pandolfino, John E.

    2016-01-01

    Background and Aims Esophageal manometry (EM) is the gold standard for the diagnosis of esophageal motility disorders. Variations in the performance and interpretation of EM result in discrepant diagnoses and unnecessary repeated procedures, and may negatively impact patient outcomes. A method to benchmark the procedural quality of EM is needed. The primary aim of this study was to develop quality measures for performing and interpreting EM. Methods The RAND/University of California, Los Angeles Appropriateness Methodology (RAM) was utilized. Fifteen experts in esophageal manometry were invited to be a part of the panel. Potential quality measures were identified through a literature search and interviews with experts. The expert panel ranked the proposed quality measures for appropriateness via a two-round process on the basis of RAM. Results Fourteen experts participated in all processes. A total of 29 measures were considered; 17 of these measures were ranked as appropriate and related to competency (2), pre-procedure (2), procedure (3) and interpretation (10). The latter 10 were integrated into a single composite measure. Thus, 8 final measures were determined to be appropriate quality measures for EM. Five strong recommendations were also endorsed by the experts, however they were not ranked as appropriate quality measures. Conclusions Eight formally validated quality measures for the performance and interpretation of EM were developed on the basis of RAM. These measures represent key aspects of a high-quality EM study and should be uniformly adopted. Evaluation of these measures in clinical practice is needed to assess their impact on outcomes. PMID:26499925

  16. Central Core Laboratory versus Site Interpretation of Coronary CT Angiography: Agreement and Association with Cardiovascular Events in the PROMISE Trial.

    PubMed

    Lu, Michael T; Meyersohn, Nandini M; Mayrhofer, Thomas; Bittner, Daniel O; Emami, Hamed; Puchner, Stefan B; Foldyna, Borek; Mueller, Martin E; Hearne, Steven; Yang, Clifford; Achenbach, Stephan; Truong, Quynh A; Ghoshhajra, Brian B; Patel, Manesh R; Ferencik, Maros; Douglas, Pamela S; Hoffmann, Udo

    2018-04-01

    Purpose To assess concordance and relative prognostic utility between central core laboratory and local site interpretation for significant coronary artery disease (CAD) and cardiovascular events. Materials and Methods In the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) trial, readers at 193 North American sites interpreted coronary computed tomographic (CT) angiography as part of the clinical evaluation of stable chest pain. Readers at a central core laboratory also interpreted CT angiography blinded to clinical data, site interpretation, and outcomes. Significant CAD was defined as stenosis greater than or equal to 50%; cardiovascular events were defined as a composite of cardiovascular death or myocardial infarction. Results In 4347 patients (51.8% women; mean age ± standard deviation, 60.4 years ± 8.2), core laboratory and site interpretations were discordant in 16% (683 of 4347), most commonly because of a finding of significant CAD by site but not by core laboratory interpretation (80%, 544 of 683). Overall, core laboratory interpretation resulted in 41% fewer patients being reported as having significant CAD (14%, 595 of 4347 vs 23%, 1000 of 4347; P < .001). Over a median follow-up period of 25 months, 1.3% (57 of 4347) sustained myocardial infarction or cardiovascular death. The C statistic for future myocardial infarction or cardiovascular death was 0.61 (95% confidence interval [CI]: 0.54, 0.68) for the core laboratory and 0.63 (95% CI: 0.56, 0.70) for the sites. Conclusion Compared with interpretation by readers at 193 North American sites, standardized core laboratory interpretation classified 41% fewer patients as having significant CAD. © RSNA, 2017 Online supplemental material is available for this article. Clinical trial registration no. NCT01174550.

  17. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for the bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study (comparing removal of viruses and bacterial indicators in MBR and conventional plants), it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small

  18. Measuring and monitoring biological diversity: Standard methods for mammals

    USGS Publications Warehouse

    Wilson, Don E.; Cole, F. Russell; Nichols, James D.; Rudran, Rasanayagam; Foster, Mercedes S.

    1996-01-01

    Measuring and Monitoring Biological Diversity: Standard Methods for Mammals provides a comprehensive manual for designing and implementing inventories of mammalian biodiversity anywhere in the world and for any group, from rodents to open-country grazers. The book emphasizes formal estimation approaches, which supply data that can be compared across habitats and over time. Beginning with brief natural histories of the twenty-six orders of living mammals, the book details the field techniques—observation, capture, and sign interpretation—appropriate to different species. The contributors provide guidelines for study design, discuss survey planning, describe statistical techniques, and outline methods of translating field data into electronic formats. Extensive appendixes address such issues as the ethical treatment of animals in research, human health concerns, preserving voucher specimens, and assessing age, sex, and reproductive condition in mammals.Useful in both developed and developing countries, this volume and the Biological Diversity Handbook Series as a whole establish essential standards for a key aspect of conservation biology and resource management.

  19. 40 CFR 98.7 - What standardized methods are incorporated by reference into this part?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-B2959, (800) 262-1373, http://www.astm.org. (1) ASTM C25-06 Standard Test Method for Chemical Analysis...), § 98.174(b), § 98.184(b), § 98.194(c), and § 98.334(b). (2) ASTM C114-09 Standard Test Methods for... Dry Cleaning Solvent), IBR approved for § 98.6. (4) ASTM D240-02 (Reapproved 2007) Standard Test...

  20. 76 FR 51993 - Draft Guidance for Industry on Standards for Clinical Trial Imaging Endpoints; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... assist the office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic... considerations for standardization of image acquisition, image interpretation methods, and other procedures to help ensure imaging data quality. The draft guidance describes two categories of image acquisition and...

  1. Applied photo interpretation for airbrush cartography

    NASA Technical Reports Server (NTRS)

    Inge, J. L.; Bridges, P. M.

    1976-01-01

    New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.

  2. Leaping from Discrete to Continuous Independent Variables: Sixth Graders' Science Line Graph Interpretations

    ERIC Educational Resources Information Center

    Boote, Stacy K.; Boote, David N.

    2017-01-01

    Students often struggle to interpret graphs correctly, despite emphasis on graphic literacy in U.S. education standards documents. The purpose of this study was to describe challenges sixth graders with varying levels of science and mathematics achievement encounter when transitioning from interpreting graphs having discrete independent variables…

  3. Vascular Disease, ESRD, and Death: Interpreting Competing Risk Analyses

    PubMed Central

    Coresh, Josef; Segev, Dorry L.; Kucirka, Lauren M.; Tighiouart, Hocine; Sarnak, Mark J.

    2012-01-01

    Summary Background and objectives Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. Design, setting, participants, & measurements This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989–1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. Results The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20–2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15–2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. Conclusions When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors. PMID:22859747
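
    The gap reported above (66% vs 54% 15-year ESRD incidence, for example) is a generic property of the two estimators rather than of this cohort. A minimal numpy sketch, with hypothetical event codes, contrasts the naive Kaplan-Meier complement, which censors the competing event, with the cumulative incidence function (Aalen-Johansen form):

```python
import numpy as np

def naive_km_vs_cif(times, events):
    """events: 0 = censored, 1 = event of interest (e.g., ESRD),
    2 = competing event (e.g., pre-ESRD death).  Returns sorted event
    times, the naive 1 - KM estimate (competing event treated as
    censoring), and the cumulative incidence function (CIF); ties are
    ignored for simplicity."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    e = np.asarray(events)[order]
    at_risk = len(t) - np.arange(len(t))             # risk-set size at each time
    km_naive = 1 - np.cumprod(1 - (e == 1) / at_risk)
    surv_all = np.cumprod(1 - (e > 0) / at_risk)     # event-free (any cause)
    surv_lag = np.concatenate(([1.0], surv_all[:-1]))
    cif = np.cumsum(surv_lag * (e == 1) / at_risk)   # bounded by all-cause risk
    return t, km_naive, cif
```

    Because the CIF weights each hazard increment by the probability of still being event-free from any cause, it always sits at or below the naive estimate, which is why competing risk regression yields the lower, more accurate absolute risks quoted above.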

  4. Diagnostic Profiles: A Standard Setting Method for Use with a Cognitive Diagnostic Model

    ERIC Educational Resources Information Center

    Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M.

    2016-01-01

    This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…

  5. Temporal similarity perfusion mapping: A standardized and model-free method for detecting perfusion deficits in stroke

    PubMed Central

    Song, Sunbin; Luby, Marie; Edwardson, Matthew A.; Brown, Tyler; Shah, Shreyansh; Cox, Robert W.; Saad, Ziad S.; Reynolds, Richard C.; Glen, Daniel R.; Cohen, Leonardo G.; Latour, Lawrence L.

    2017-01-01

    Introduction Interpretation of the extent of perfusion deficits in stroke MRI is highly dependent on the method used for analyzing the perfusion-weighted signal intensity time-series after gadolinium injection. In this study, we introduce a new model-free standardized method of temporal similarity perfusion (TSP) mapping for perfusion deficit detection and test its ability and reliability in acute ischemia. Materials and methods Forty patients with an ischemic stroke or transient ischemic attack were included. Two blinded readers compared real-time generated interactive maps and automatically generated TSP maps to traditional TTP/MTT maps for presence of perfusion deficits. Lesion volumes were compared for volumetric inter-rater reliability, spatial concordance between perfusion deficits and healthy tissue, and contrast-to-noise ratio (CNR). Results Perfusion deficits were correctly detected in all patients with acute ischemia. Inter-rater reliability was higher for TSP than for TTP/MTT maps, and the Pearson's correlation between lesion volumes calculated on TSP and traditional maps was high (r(18) = 0.73, p<0.0003); however, the effective CNR was greater for TSP compared to TTP (352.3 vs 283.5, t(19) = 2.6, p<0.03) and MTT (228.3, t(19) = 2.8, p<0.03). Discussion TSP maps provide a reliable and robust model-free method for accurate perfusion deficit detection and improve lesion delineation compared to traditional methods. This simple method is also computationally faster and more easily automated than model-based methods. This method can potentially improve the speed and accuracy in perfusion deficit detection for acute stroke treatment and clinical trial inclusion decision-making. PMID:28973000
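
    The abstract does not spell out how the similarity score is computed; a plausible minimal sketch (our illustration, not the authors' published algorithm) scores each voxel's gadolinium signal-time curve by its Pearson correlation with a reference curve, so voxels whose bolus passage deviates from the reference stand out as candidate deficits:

```python
import numpy as np

def similarity_map(ts, ref):
    """ts: (n_voxels, n_timepoints) signal-time curves; ref: (n_timepoints,)
    reference curve (e.g., mean curve of presumed-healthy tissue).
    Returns each voxel's Pearson correlation with the reference; low
    values flag dissimilar, possibly hypoperfused voxels."""
    ts_c = ts - ts.mean(axis=1, keepdims=True)
    ref_c = ref - ref.mean()
    denom = np.linalg.norm(ts_c, axis=1) * np.linalg.norm(ref_c)
    denom = np.where(denom == 0, np.inf, denom)   # guard flat curves
    return (ts_c @ ref_c) / denom
```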

  6. Optimizing the interpretation of CT for appendicitis: modeling health utilities for clinical practice.

    PubMed

    Blackmore, C Craig; Terasawa, Teruhiko

    2006-02-01

    Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to weigh the trade-off between false-positive and false-negative test results and so determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis for the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
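
    The operating points quoted above translate directly into post-test probabilities. A short sketch of the Bayes's theorem step, using the sensitivity and specificity reported for the equivocal scenario:

```python
def posttest_prob(pretest, sens, spec, positive=True):
    """Post-test probability of disease after a binary test result."""
    if positive:
        return pretest * sens / (pretest * sens + (1 - pretest) * (1 - spec))
    return pretest * (1 - sens) / (pretest * (1 - sens) + (1 - pretest) * spec)

# Equivocal clinical picture (pre-test probability ~0.5) read at the
# balanced operating point (sensitivity 92.3%, specificity 91.5%):
print(posttest_prob(0.5, 0.923, 0.915))         # ~0.92 after a positive CT
print(posttest_prob(0.5, 0.923, 0.915, False))  # ~0.08 after a negative CT
```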

  7. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.

  8. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.
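
    As a rough illustration of the train-on-model, apply-to-measurement idea in these two records, the sketch below pairs a hypothetical forward model (a stand-in for the patent's mathematical model) with a small scikit-learn network that learns to map spectral features back to model input parameters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate_features(params):
    """Hypothetical forward model: condition parameters -> measurable
    spectral features (stand-ins for resonance peak height/position)."""
    return np.column_stack([2.0 * params[:, 0] + params[:, 1],
                            np.sin(params[:, 0]) + 0.1 * params[:, 1]])

params = rng.uniform(0, 1, size=(500, 2))      # model input parameters
features = simulate_features(params)           # training set from the model

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(features, params)                      # learn the inverse mapping

measured = simulate_features(np.array([[0.3, 0.7]]))  # "actual" spectrum
print(net.predict(measured))                   # inferred physical condition
```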

  9. Does periodic lung screening of films meet standards?

    PubMed

    Binay, Songul; Arbak, Peri; Safak, Alp Alper; Balbay, Ege Gulec; Bilgin, Cahit; Karatas, Naciye

    2016-01-01

    To determine whether workers' periodic chest x-ray screenings are performed in accordance with quality standards, a responsibility of physicians; to evaluate differences in interpretation among physicians at different levels of training; and to assess the importance of standardized interpretation. Chest radiographs previously taken of 400 workers at a factory producing glass run channels were evaluated against technical and quality standards by three observers (a pulmonologist, a radiologist, and a pulmonologist assistant). Concordance was perfect between the radiologist and the pulmonologist for underpenetrated films, and between the pulmonologist and the pulmonologist assistant for overpenetrated films. The pulmonologist rated film exposure as adequate more often (52%) than the other observers (radiologist, 44.3%; pulmonologist assistant, 30.4%), judged films to have been taken in the inspiratory phase less often (81.7%) than the other observers (radiologist, 92.1%; pulmonologist assistant, 92.6%), and assessed patient positioning as symmetrical more often (53.5%) than the other observers (radiologist, 44.6%; pulmonologist assistant, 41.8%). The pulmonologist assistant reported parenchymal findings most often (15.3%; radiologist, 2.2%; pulmonologist, 12.9%). Technical standards and exposure procedures need to be reorganized to improve the quality of chest radiographs, and reappraisal of all interpreters and continuous training of technicians are required.

  10. Training Translators and Conference Interpreters. Language in Education: Theory and Practice, No. 58.

    ERIC Educational Resources Information Center

    Weber, Wilhelm K.

    An examination of translation and conference interpretation as well-established academic professions focuses on how they should be taught in order to maintain the integrity of the two professions and the highest standards in their exercise. An introductory section answers the question, "Can translation and interpretation be taught?,"…

  11. Comparing Standard Deviation Effects across Contexts

    ERIC Educational Resources Information Center

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
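
    The caution generalizes: an identical raw gain becomes a different "standard deviation unit" effect wherever score dispersion differs. A tiny illustration with hypothetical numbers:

```python
# The same 5-point raw gain, expressed in standard deviation units,
# shrinks as the score distribution widens.
gain = 5.0
for context, sd in [("low-variance context", 10.0),
                    ("high-variance context", 25.0)]:
    print(context, round(gain / sd, 2))   # 0.5 SD vs 0.2 SD, same raw gain
```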

  12. Quantum mechanics without the projection postulate and its realistic interpretation

    NASA Astrophysics Data System (ADS)

    Dieks, D.

    1989-11-01

    It is widely held that quantum mechanics is the first scientific theory to present scientifically internal, fundamental difficulties for a realistic interpretation (in the philosophical sense). The standard (Copenhagen) interpretation of the quantum theory is often described as the inevitable instrumentalistic response. It is the purpose of the present article to argue that quantum theory does not present fundamental new problems to a realistic interpretation. The formalism of quantum theory has the same status—it will be argued—as the formalisms of older physical theories and is capable of the same kinds of philosophical interpretation. This result is reached via an analysis of what it means to give a realistic interpretation to a theory. The main point of difference between quantum mechanics and other theories—as far as the possibilities of interpretation are concerned—is the special treatment given to measurement by the “projection postulate.” But it is possible to do without this postulate. Moreover, rejection of the projection postulate does not, in spite of what is often maintained in the literature, automatically lead to the many-worlds interpretation of quantum mechanics. A realistic interpretation is possible in which only the reality of one (our) world is recognized. It is argued that the Copenhagen interpretation as expounded by Bohr is not in conflict with the here proposed realistic interpretation of quantum theory.

  13. Portero versus portador: Spanish interpretation of genomic terminology during whole exome sequencing results disclosure.

    PubMed

    Gutierrez, Amanda M; Robinson, Jill O; Statham, Emily E; Scollon, Sarah; Bergstrom, Katie L; Slashinski, Melody J; Parsons, Donald W; Plon, Sharon E; McGuire, Amy L; Street, Richard L

    2017-11-01

    Describe modifications to technical genomic terminology made by interpreters during disclosure of whole exome sequencing (WES) results. Using discourse analysis, we identified and categorized interpretations of genomic terminology in 42 disclosure sessions where Spanish-speaking parents received their child's WES results either from a clinician using a medical interpreter, or directly from a bilingual physician. Overall, 76% of genomic terms were interpreted accordantly, 11% were misinterpreted and 13% were omitted. Misinterpretations made by interpreters and bilingual physicians included using literal and nonmedical terminology to interpret genomic concepts. Modifications to genomic terminology made during interpretation highlight the need to standardize bilingual genomic lexicons. We recommend Spanish terms that can be used to refer to genomic concepts.

  14. A Mapmark method of standard setting as implemented for the National Assessment Governing Board.

    PubMed

    Schulz, E Matthew; Mitzel, Howard C

    2011-01-01

    This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.

  15. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

    IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives performs uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
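
    For orientation, the Standard 1366 approach evaluated here is commonly described as the "2.5 beta" method: daily SAIDI values are log-transformed, and any day whose SAIDI exceeds a threshold set 2.5 standard deviations above the mean of the logs is classed as a major event day. A minimal sketch under that description (parameter names are ours):

```python
import numpy as np

def major_event_days(daily_saidi, k=2.5):
    """Flag major event days: log-transform non-zero daily SAIDI and
    mark days exceeding T_MED = exp(mean + k * std) of the logs."""
    daily_saidi = np.asarray(daily_saidi, float)
    logs = np.log(daily_saidi[daily_saidi > 0])
    t_med = np.exp(logs.mean() + k * logs.std())
    return daily_saidi > t_med, t_med
```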

  16. Radiologist Uncertainty and the Interpretation of Screening

    PubMed Central

    Carney, Patricia A.; Elmore, Joann G.; Abraham, Linn A.; Gerrity, Martha S.; Hendrick, R. Edward; Taplin, Stephen H.; Barlow, William E.; Cutter, Gary R.; Poplack, Steven P.; D’Orsi, Carl J.

    2011-01-01

    Objective To determine radiologists’ reactions to uncertainty when interpreting mammography and the extent to which radiologist uncertainty explains variability in interpretive performance. Methods The authors used a mailed survey to assess demographic and clinical characteristics of radiologists and reactions to uncertainty associated with practice. Responses were linked to radiologists’ actual interpretive performance data obtained from 3 regionally located mammography registries. Results More than 180 radiologists were eligible to participate, and 139 consented, for a response rate of 76.8%. Radiologist gender, more years interpreting, and higher volume were associated with lower uncertainty scores. Positive predictive value, recall rates, and specificity were more affected by reactions to uncertainty than sensitivity or negative predictive value; however, none of these relationships was statistically significant. Conclusion Certain practice factors, such as gender and years of interpretive experience, affect uncertainty scores. Radiologists’ reactions to uncertainty do not appear to affect interpretive performance. PMID:15155014

  17. Translators and Interpreters Certification in Australia, Canada, the USA and Ukraine: Comparative Analysis

    ERIC Educational Resources Information Center

    Skyba, Kateryna

    2014-01-01

    The article presents an overview of the certification process by which potential translators and interpreters demonstrate minimum standards of performance to warrant official or professional recognition of their ability to translate or interpret and to practice professionally in Australia, Canada, the USA and Ukraine. The aim of the study is to…

  18. Interpreter services in emergency medicine.

    PubMed

    Chan, Yu-Feng; Alagappan, Kumar; Rella, Joseph; Bentley, Suzanne; Soto-Greene, Marie; Martin, Marcus

    2010-02-01

    Emergency physicians are routinely confronted with problems associated with language barriers. It is important for emergency health care providers and the health system to strive for cultural competency when communicating with members of an increasingly diverse society. Possible solutions that can be implemented include appropriate staffing, use of new technology, and efforts to develop new kinds of ties to the community served. Linguistically specific solutions include professional interpretation, telephone interpretation, the use of multilingual staff members, the use of ad hoc interpreters, and, more recently, the use of mobile computer technology at the bedside. Each of these methods carries a specific set of advantages and disadvantages. Although professionally trained medical interpreters offer improved communication, improved patient satisfaction, and overall cost savings, they are often underutilized due to their perceived inefficiency and the inconclusive results of their effect on patient care outcomes. Ultimately, the best solution for each emergency department will vary depending on the population served and available resources. Access to the multiple interpretation options outlined above and solid support and commitment from hospital institutions are necessary to provide proper and culturally competent care for patients. Appropriate communications inclusive of interpreter services are essential for culturally and linguistically competent provider/health systems and overall improved patient care and satisfaction. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  19. Comments on the New International Criteria for Electrocardiographic Interpretation in Athletes.

    PubMed

    Serratosa-Fernández, Luis; Pascual-Figal, Domingo; Masiá-Mondéjar, María Dolores; Sanz-de la Garza, María; Madaria-Marijuan, Zigor; Gimeno-Blanes, Juan Ramón; Adamuz, Carmen

    2017-11-01

    Sudden cardiac death is the most common medical cause of death during the practice of sports. Several structural and electrical cardiac conditions are associated with sudden cardiac death in athletes, most of them showing abnormal findings on resting electrocardiogram (ECG). However, because of the similarity between some ECG findings associated with physiological adaptations to exercise training and those of certain cardiac conditions, ECG interpretation in athletes is often challenging. Other factors related to ECG findings are race, age, sex, sports discipline, training intensity, and athletic background. Specific training and experience in ECG interpretation in athletes are therefore necessary. Since 2005, when the first recommendations of the European Society of Cardiology were published, growing scientific evidence has increased the specificity of ECG standards, thus lowering the false-positive rate while maintaining sensitivity. New international consensus guidelines have recently been published on ECG interpretation in athletes, which are the result of consensus among a group of experts in cardiology and sports medicine who gathered for the first time in February 2015 in Seattle, in the United States. The document is an important milestone because, in addition to updating the standards for ECG interpretation, it includes recommendations on appropriate assessment of athletes with abnormal ECG findings. The present article reports and discusses the most novel and relevant aspects of the new standards. Nevertheless, a complete reading of the original consensus document is highly recommended. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  20. A collaborative comparison of objective structured clinical examination (OSCE) standard setting methods at Australian medical schools.

    PubMed

    Malau-Aduli, Bunmi Sherifat; Teague, Peta-Ann; D'Souza, Karen; Heal, Clare; Turner, Richard; Garne, David L; van der Vleuten, Cees

    2017-12-01

    A key issue underpinning the usefulness of the OSCE assessment to medical education is standard setting, but the majority of standard-setting methods remain challenging for performance assessment because they produce varying passing marks. Several studies have compared standard-setting methods; however, most of these studies are limited by their experimental scope, or use data on examinee performance at a single OSCE station or from a single medical school. This collaborative study between 10 Australian medical schools investigated the effect of standard-setting methods on OSCE cut scores and failure rates. This research used 5256 examinee scores from seven shared OSCE stations to calculate cut scores and failure rates using two different compromise standard-setting methods, namely the Borderline Regression and Cohen's methods. The results of this study indicate that Cohen's method yields similar outcomes to the Borderline Regression method, particularly for large examinee cohort sizes. However, with lower examinee numbers on a station, the Borderline Regression method resulted in higher cut scores and larger difference margins in the failure rates. Because Cohen's method yields outcomes similar to those of the Borderline Regression method, its application for benchmarking purposes and in resource-limited settings is justifiable, particularly with large examinee numbers.
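
    For readers unfamiliar with the two compromise methods, hedged one-line sketches follow (typical formulations; the borderline grade, the 60% fraction, and the 95th percentile are common choices that vary by implementation):

```python
import numpy as np

def borderline_regression_cut(checklist, global_rating, borderline_grade=2):
    """Regress station checklist scores on examiners' global ratings and
    read off the predicted checklist score at the borderline grade."""
    slope, intercept = np.polyfit(global_rating, checklist, 1)
    return intercept + slope * borderline_grade

def cohen_cut(checklist, fraction=0.60, percentile=95):
    """Cohen's method: a fixed fraction of the 95th-percentile score."""
    return fraction * np.percentile(checklist, percentile)
```

    With large cohorts both the regression line and the percentile estimate are stable, which is consistent with the two methods converging in this study.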

  1. Reliable structural interpretation of small-angle scattering data from bio-molecules in solution--the importance of quality control and a standard reporting framework.

    PubMed

    Jacques, David A; Guss, Jules Mitchell; Trewhella, Jill

    2012-05-17

    Small-angle scattering is becoming an increasingly popular tool for the study of bio-molecular structures in solution. The large number of publications with 3D-structural models generated from small-angle solution scattering data has led to a growing consensus for the need to establish a standard reporting framework for their publication. The International Union of Crystallography recently established a set of guidelines for the necessary information required for the publication of such structural models. Here we describe the rationale for these guidelines and the importance of standardising the way in which small-angle scattering data from bio-molecules and associated structural interpretations are reported.

  2. 21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...

  3. 21 CFR 868.1900 - Diagnostic pulmonary-function interpretation calculator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Diagnostic pulmonary-function interpretation calculator. 868.1900 Section 868.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... pulmonary-function values. (b) Classification. Class II (performance standards). ...

  4. A Roadmap for Interpreting the Literature on Vision and Driving

    PubMed Central

    Owsley, Cynthia; Wood, Joanne M.; McGwin, Gerald

    2015-01-01

    Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified scientific attention to this topic has most likely been stimulated by the lack of an evidence base for determining vision standards for driving licensure and a poor understanding of how vision impairment impacts driver safety and performance. Clinicians depend on the scientific literature on vision and driving as a resource to appropriately advise visually impaired patients about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and are thus fair to persons who are visually impaired. Thus it is important for clinicians and policy makers alike to understand how various study designs and measurement methods should be appropriately interpreted so that the conclusions and recommendations they make based on this literature are not overly broad, too narrowly constrained, or even misguided. In this overview, based on our 25 years of experience in this field, we offer a methodological framework to guide interpretations of studies on vision and driving, which can also serve as a heuristic for researchers in the area. Here we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes. PMID:25753389

  5. Emerging interpretations of quantum mechanics and recent progress in quantum measurement

    NASA Astrophysics Data System (ADS)

    Clarke, M. L.

    2014-01-01

    The focus of this paper is to provide a brief discussion on the quantum measurement process, by reviewing select examples highlighting recent progress towards its understanding. The areas explored include an outline of the measurement problem, the standard interpretation of quantum mechanics, quantum to classical transition, types of measurement (including weak and projective measurements) and newly emerging interpretations of quantum mechanics (decoherence theory, objective reality, quantum Darwinism and quantum Bayesianism).

  6. Revisiting Interpretation of Canonical Correlation Analysis: A Tutorial and Demonstration of Canonical Commonality Analysis

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.; Gates, Michael S.

    2010-01-01

    In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…

  7. Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.

    PubMed

    Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena

    2017-10-17

    Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and

  8. Standardized Method for High-throughput Sterilization of Arabidopsis Seeds

    PubMed Central

    Calhoun, Chistopher S.; Grotewold, Erich; Brkljacic, Jelena

    2017-01-01

    Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and

  9. Interpreting Abstract Interpretations in Membership Equational Logic

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Rosu, Grigore

    2001-01-01

    We present a logical framework in which abstract interpretations can be naturally specified and then verified. Our approach is based on membership equational logic which extends equational logics by membership axioms, asserting that a term has a certain sort. We represent an abstract interpretation as a membership equational logic specification, usually as an overloaded order-sorted signature with membership axioms. It turns out that, for any term, its least sort over this specification corresponds to its most concrete abstract value. Maude implements membership equational logic and provides mechanisms to calculate the least sort of a term efficiently. We first show how Maude can be used to get prototyping of abstract interpretations "for free." Building on the meta-logic facilities of Maude, we further develop a tool that automatically checks an abstract interpretation against a set of user-defined properties. This can be used to select an appropriate abstract interpretation, to characterize the specified loss of information during abstraction, and to compare different abstractions with each other.

  10. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  11. Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty

    NASA Astrophysics Data System (ADS)

    Brumble, K. C.

    2012-12-01

    What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid

  12. ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya; Rosenberg, Michael I.; Eley, Charles

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.

  13. Emerging Trends in the Volume and Format of Outside Examinations Submitted for Secondary Interpretation

    PubMed Central

    Hunt, Christopher H.; Wood, Christopher P.; Diehn, Felix E.; Eckel, Laurence J.; Schwartz, Kara M.; Erickson, Bradley J.

    2014-01-01

    OBJECTIVE The purpose of this article is to describe the trends of secondary interpretations, including the total volume and format of cases. MATERIALS AND METHODS This retrospective study involved all outside neuroradiology examinations submitted for secondary interpretation from November 2006 through December 2010. This practice utilizes consistent criteria and includes all images that cover the brain, neck, and spine. For each month, the total number of outside examinations and their format (i.e., hard-copy film, DICOM CD-ROM, or non-DICOM CD-ROM) were recorded. RESULTS There was no significant change in the volume of cases (1043 ± 131 cases/month; p = 0.46, two-sided Student t test). There was a significant decrease in the volume of hard-copy films submitted, with the mean number of examinations submitted per month on hard-copy film declining from 297 in 2007 to 57 in 2010 (p < 0.0001, Student t test). This decrease was mirrored by an increase in the mean number of cases submitted on CD-ROM (753 cases/month in 2007 and 1036 cases/month in 2010; p < 0.0001). Although most were submitted in DICOM format, there was almost a doubling of the volume of cases submitted on non-DICOM CD-ROM (mean number of non-DICOM CD-ROMs, nine cases/month in 2007 and 17 cases/month in 2010; p < 0.001). CONCLUSION There has been a significant decrease in the number of hard-copy films submitted for secondary interpretation. There has been almost a doubling of the volume of cases submitted in non-DICOM formats, which is unfortunate, given the many advantages of the internationally derived DICOM standard, including ease of archiving, standardized display, efficient review, improved interpretation, and quality of patient care. PMID:22451538

  14. A Preliminary Investigation of the Direct Standard Setting Method.

    ERIC Educational Resources Information Center

    Jones, J. Patrick; And Others

    Three studies assessed the psychometric characteristics of the Direct Standard Setting Method (DSSM). The Angoff technique was also used in each study. The DSSM requires judges to consider an examination 10 items at a time and determine the minimum items in that set a candidate should answer correctly to receive the credential. Nine judges set a…

  15. Philosophical perspectives on quantum chaos: Models and interpretations

    NASA Astrophysics Data System (ADS)

    Bokulich, Alisa Nicole

    2001-09-01

    The problem of quantum chaos is a special case of the larger problem of understanding how the classical world emerges from quantum mechanics. While we have learned that chaos is pervasive in classical systems, it appears to be almost entirely absent in quantum systems. The aim of this dissertation is to determine what implications the interpretation of quantum mechanics has for attempts to explain the emergence of classical chaos. There are three interpretations of quantum mechanics that have set out programs for solving the problem of quantum chaos: the standard interpretation, the statistical interpretation, and the deBroglie-Bohm causal interpretation. One of the main conclusions of this dissertation is that an interpretation alone is insufficient for solving the problem of quantum chaos and that the phenomenon of decoherence must be taken into account. Although a completely satisfactory solution of the problem of quantum chaos is still outstanding, I argue that the deBroglie-Bohm interpretation with the help of decoherence outlines the most promising research program to pursue. In addition to making a contribution to the debate in the philosophy of physics concerning the interpretation of quantum mechanics, this dissertation reveals two important methodological lessons for the philosophy of science. First, issues of reductionism and intertheoretic relations cannot be divorced from questions concerning the interpretation of the theories involved. Not only is the exploration of intertheoretic relations a central part of the articulation and interpretation of an individual theory, but the very terms used to discuss intertheoretic relations, such as `state' and `classical limit', are themselves defined by particular interpretations of the theory. The second lesson that emerges is that, when it comes to characterizing the relationship between classical chaos and quantum mechanics, the traditional approaches to intertheoretic relations, namely reductionism and

  16. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models by lower order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights of the main process dynamics.
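
    The interpretability argument is easy to see with scikit-learn on illustrative random data: dense PCA loads every variable on every basis function, while SparsePCA zeros most coefficients, so each basis can be read as a short list of named inputs and states.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # snapshots x (input + state) variables

dense = PCA(n_components=3).fit(X)
sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

print(np.count_nonzero(dense.components_))    # 36 of 36 loadings non-zero
print(np.count_nonzero(sparse.components_))   # far fewer non-zero loadings
```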

  17. Data Mining of Chemogenomics Data Using Bi-Modal PLS Methods and Chemical Interpretation for Molecular Design.

    PubMed

    Hasegawa, Kiyoshi; Funatsu, Kimito

    2014-12-01

    Chemogenomics is a new strategy in drug discovery for interrogating all molecules capable of interacting with all biological targets. Because of the almost infinite number of drug-like organic molecules, bench-based experimental chemogenomics methods are not generally feasible. Several in silico chemogenomics models have therefore been developed for high-throughput screening of large numbers of drug candidate compounds and target proteins. In previous studies, we described two novel bi-modal PLS approaches. These methods provide a significant advantage in that they enable direct connections to be made between biological activities and ligand and protein descriptors. In this special issue, we review these two PLS-based approaches using two different chemogenomics datasets for illustration. We then compare the predictive and interpretive performance of the two methods using the same congeneric data set. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
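
    The record does not give the bi-modal formulation itself; as a rough single-block stand-in, a PLS regression on concatenated ligand and protein descriptor blocks illustrates the kind of direct activity-to-descriptor connection the abstract refers to (all data synthetic):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
L = rng.normal(size=(100, 8))        # ligand descriptors
P = rng.normal(size=(100, 5))        # protein (target) descriptors
y = L[:, 0] - P[:, 2] + 0.1 * rng.normal(size=100)   # synthetic activity

pls = PLSRegression(n_components=2).fit(np.hstack([L, P]), y)
# Rows of x_weights_ align with [ligand | protein] descriptors, so the
# fitted weights tie predicted activity back to each block.
print(pls.x_weights_.shape)          # (13, 2)
```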

  18. Uncertainty in structural interpretation: Lessons to be learnt

    NASA Astrophysics Data System (ADS)

    Bond, Clare E.

    2015-05-01

    Uncertainty in the interpretation of geological data is an inherent element of geology. Datasets from different sources (remotely sensed seismic imagery, field data, and borehole data) are often combined and interpreted to create a geological model of the sub-surface. The data have limited resolution and spatial distribution, resulting in uncertainty in the interpretation of the data and in the subsequent geological model(s) created. Methods to determine the extent of interpretational uncertainty in a dataset, to capture and express that uncertainty, and to consider uncertainties in terms of risk have been investigated. Here I review the work that has taken place and discuss best practice in accounting for uncertainties in structural interpretation workflows. Barriers to best practice, including the use of software packages for interpretation, are considered. Experimental evidence suggests that minimising interpretation error through the use of geological reasoning and rules can help decrease interpretation uncertainty by identifying inadmissible interpretations and highlighting areas of uncertainty. Understanding expert thought processes and reasoning, including the use of visuospatial skills, during interpretation may aid in the identification of uncertainties and in the education of new geoscientists.

  19. Interpretation of Blood Microbiology Results - Function of the Clinical Microbiologist.

    PubMed

    Kristóf, Katalin; Pongrácz, Júlia

    2016-04-01

    The proper use and interpretation of blood microbiology results may be one of the most challenging and most important functions of clinical microbiology laboratories. Effective implementation of this function requires careful consideration of specimen collection and processing, pathogen detection techniques, and prompt and precise reporting of identification and susceptibility results. The responsibility of the treating physician is to formulate the analytical request properly and to provide the laboratory with complete and precise patient information, which are essential prerequisites for proper testing and interpretation. The clinical microbiologist can offer advice concerning the differential diagnosis, sampling techniques and detection methods to facilitate diagnosis. Rapid detection methods are essential, since the sooner a pathogen is detected, the better the patient's chance of cure. Besides the gold-standard blood culture technique, microbiologic methods that decrease the time to a relevant result are increasingly utilized today. Certain pathogens can be identified directly from the blood culture bottle after propagation with serological, automated/semi-automated, or molecular methods, or with MALDI-TOF MS (matrix-assisted laser desorption-ionization time of flight mass spectrometry). Molecular biology methods are also suitable for the rapid detection and identification of pathogens from aseptically collected blood samples. Another important duty of the microbiology laboratory is to notify the treating physician immediately of all relevant information when a positive sample is detected. The clinical microbiologist may provide important guidance regarding the clinical significance of blood isolates, since one-third to one-half of blood culture isolates are contaminants or isolates of unknown clinical significance. To fully exploit the benefits of blood culture and other (non-culture

  20. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students – a pilot study

    PubMed Central

    Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary

    2009-01-01

    Background Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills. But published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of puzzle-based teaching of EKG interpretation skills among medical students. Methods This is a reader-blinded crossover trial. Third year medical students from College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam and two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. Results The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1 compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Conclusion Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxed, and warrant further investigation on a larger scale. PMID:19144134

  1. A data-driven approach for quality assessment of radiologic interpretations.

    PubMed

    Hsu, William; Han, Simon X; Arnold, Corey W; Bui, Alex At; Enzmann, Dieter R

    2016-04-01

    Given the increasing emphasis on delivering high-quality, cost-efficient healthcare, improved methodologies are needed to measure the accuracy and utility of ordered diagnostic examinations in achieving the appropriate diagnosis. Here, we present a data-driven approach for performing automated quality assessment of radiologic interpretations using other clinical information (e.g., pathology) as a reference standard for individual radiologists, subspecialty sections, imaging modalities, and entire departments. Downstream diagnostic conclusions from the electronic medical record are utilized as "truth" to which upstream diagnoses generated by radiology are compared. The described system automatically extracts and compares patient medical data to characterize concordance between clinical sources. Initial results are presented in the context of breast imaging, matching 18,101 radiologic interpretations with 301 pathology diagnoses and achieving a precision and recall of 84% and 92%, respectively. The presented data-driven method highlights the challenges of integrating multiple data sources and the application of information extraction tools to facilitate healthcare quality improvement. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
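
    The two figures quoted are the usual information-retrieval definitions. A one-line sketch, with hypothetical confusion counts chosen only to reproduce the reported values:

```python
def precision_recall(tp, fp, fn):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(tp=254, fp=47, fn=22))   # ~(0.84, 0.92)
```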

  2. Standards for Clinical Grade Genomic Databases.

    PubMed

    Yohe, Sophia L; Carter, Alexis B; Pfeifer, John D; Crawford, James M; Cushman-Vokoun, Allison; Caughron, Samuel; Leonard, Debra G B

    2015-11-01

    Next-generation sequencing performed in a clinical environment must meet clinical standards, which requires reproducibility of all aspects of the testing. Clinical-grade genomic databases (CGGDs) are required to classify a variant and to assist in the professional interpretation of clinical next-generation sequencing. Applying quality laboratory standards to the reference databases used for sequence-variant interpretation presents a new challenge for validation and curation. To define CGGD and the categories of information contained in CGGDs and to frame recommendations for the structure and use of these databases in clinical patient care. Members of the College of American Pathologists Personalized Health Care Committee reviewed the literature and existing state of genomic databases and developed a framework for guiding CGGD development in the future. Clinical-grade genomic databases may provide different types of information. This work group defined 3 layers of information in CGGDs: clinical genomic variant repositories, genomic medical data repositories, and genomic medicine evidence databases. The layers are differentiated by the types of genomic and medical information contained and the utility in assisting with clinical interpretation of genomic variants. Clinical-grade genomic databases must meet specific standards regarding submission, curation, and retrieval of data, as well as the maintenance of privacy and security. These organizing principles for CGGDs should serve as a foundation for future development of specific standards that support the use of such databases for patient care.

  3. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
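
    The equivalence claim is easy to verify for the two-sample t test: given the t statistic and the group sizes, both the p value and Cohen's d follow deterministically, so they repackage the same information in the data.

```python
import numpy as np
from scipy import stats

def equivalent_summaries(t_stat, n1, n2):
    """p value and Cohen's d for a two-sample t test, both computed
    from (t, n1, n2) alone."""
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t_stat), df)
    d = t_stat * np.sqrt(1 / n1 + 1 / n2)   # standard t -> d conversion
    return p, d

print(equivalent_summaries(2.5, 20, 20))    # p ~ 0.017, d ~ 0.79
```

    (A JZS Bayes factor is likewise a function of t and the sample sizes, though it requires a numerical integral, so it is omitted from the sketch.)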

  4. 40 CFR 1043.50 - Approval of methods to meet Tier 1 retrofit NOX standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Approval of methods to meet Tier 1... SUBJECT TO THE MARPOL PROTOCOL § 1043.50 Approval of methods to meet Tier 1 retrofit NOX standards... enable Pre-Tier 1 engines to meet the Tier 1 NOX standard of regulation 13 of Annex VI. Any person may...

  5. Identifying and processing the gap between perceived and actual agreement in breast pathology interpretation.

    PubMed

    Carney, Patricia A; Allison, Kimberly H; Oster, Natalia V; Frederick, Paul D; Morgan, Thomas R; Geller, Berta M; Weaver, Donald L; Elmore, Joann G

    2016-07-01

We examined how pathologists process their perceptions of how their diagnostic interpretations of breast pathology cases agree with a reference standard. To accomplish this, we created an individualized self-directed continuing medical education program that showed pathologists interpreting breast specimens how their interpretations on a test set compared with a reference diagnosis developed by a consensus panel of experienced breast pathologists. After interpreting a test set of 60 cases, 92 participating pathologists were asked to estimate how their interpretations compared with the standard for benign without atypia, atypia, ductal carcinoma in situ and invasive cancer. We then asked pathologists their thoughts about learning about differences between their perceived and actual agreement. Overall, participants tended to overestimate their agreement with the reference standard, with a mean difference of 5.5% (75.9% actual agreement; 81.4% estimated agreement); overestimation was greatest for atypia and least for invasive breast cancer. Non-academic-affiliated pathologists were more likely to closely estimate their performance than academic-affiliated pathologists (77.6 vs 48%; P=0.001), whereas participants affiliated with an academic medical center were more likely to underestimate agreement with their diagnoses (40 vs 6%). Before the continuing medical education program, nearly 55% (54.9%) of participants could not estimate whether they would overinterpret or underinterpret the cases relative to the reference diagnosis. Nearly 80% (79.8%) reported learning new information from this individualized web-based continuing medical education program, and 23.9% of pathologists identified strategies they would use to change their practice. In conclusion, when evaluating breast pathology specimens, pathologists do a good job of estimating their diagnostic agreement with a reference standard.

  6. A comparison of cover pole with standard vegetation monitoring methods

    USDA-ARS?s Scientific Manuscript database

    The ability of resource managers to make informed decisions regarding wildlife habitat could be improved with the use of existing datasets and the use of cost effective, standardized methods to simultaneously quantify vertical and horizontal cover. The objectives of this study were to (1) characteri...

  7. Effective use of interpreters by family nurse practitioner students: is didactic curriculum enough?

    PubMed

    Phillips, Susanne J; Lie, Desiree; Encinas, Jennifer; Ahearn, Carol Sue; Tiso, Susan

    2011-05-01

Nurse practitioners (NPs) care for patients with limited English proficiency (LEP). However, NP education for improving communication in interpreted encounters is not well reported. We report a single-school study using standardized encounters within a clinical practice examination (CPX) to assess the adequacy of the current curriculum. Entering family NP (FNP) students (n=26) participated in a baseline CPX case. They were assessed by standardized patients using the validated Interpreter Impact Rating Scale (IIRS) and Physician-Patient Interaction (PPI) scale, and by interpreters using the Interpreter Scale (IS). The case was re-administered to 31 graduating students following completion of the existing curriculum. The primary outcome was aggregate change in skills comprising global IIRS, PPI and IS scores. Pre- and post-performance data were available for one class of 10 students. The secondary outcome was change in skill scores for this class. Mean aggregate global scores showed no significant improvement between entry and graduation. For the 10 students with pre- and post-performance data, there was no improvement in skill scores on any measure, and skill assessed on one measure worsened. FNP students show no improvement in skills in working with interpreters under the current curriculum. An enhanced curriculum is needed. ©2011 The Author(s) Journal compilation ©2011 American Academy of Nurse Practitioners.

  8. Using Interpretative Phenomenological Analysis in a Mixed Methods Research Design to Explore Music in the Lives of Mature Age Amateur Keyboard Players

    ERIC Educational Resources Information Center

    Taylor, Angela

    2015-01-01

    This article discusses the use of interpretative phenomenological analysis (IPA) in a mixed methods research design with reference to five recent publications about music in the lives of mature age amateur keyboard players. It explores the links between IPA and the data-gathering methods of "Rivers of Musical Experience",…

  9. Automatic wound infection interpretation for postoperative wound image

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Tse; Ho, Te-Wei; Shih, Hsueh-Fu; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

With the growing demand for more efficient wound care after surgery, there is a need to develop a machine-learning-based image analysis approach to reduce the burden on health care professionals. The aim of this study was to propose a novel approach to recognizing wound infection at the postsurgical site. First, we proposed an optimal clustering method based on the unimodal Rosin threshold algorithm to group the feature points extracted from a potential wound area into clusters of regions of interest (ROI). Each ROI was regarded as a suture site of the wound area. Automatic infection interpretation based on a support vector machine is then available to assist physicians in decision-making in clinical practice. Following clinical physicians' judgment criteria and the international guidelines for wound infection interpretation, we defined the infection detector modules as follows: (1) Swelling Detector, (2) Blood Region Detector, (3) Infected Detector, and (4) Tissue Necrosis Detector. To validate the capability of the proposed system, a retrospective study was conducted to verify the classification models, using wound photographs whose diagnoses by surgical physicians served as the gold standard. Through cross-validation of 42 wound images, our classifiers achieved 95.23% accuracy, 93.33% sensitivity, 100% specificity, and 100% positive predictive value. We believe this capability could help medical practitioners in decision-making in clinical practice.
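
    A minimal sketch of the classification stage only, with hypothetical per-ROI features standing in for the four detector modules (the feature values, labels, and model settings below are placeholders, not the authors' pipeline):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Hypothetical features per wound image: swelling, blood region,
        # infection colour, and necrosis scores from the detector modules.
        X = rng.normal(size=(42, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = infected (synthetic labels)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy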

  10. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
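
    The direct measurement the abstract describes can be framed as a least-squares problem: record the instrument's response to a set of known input Stokes vectors and solve for the data reduction matrix. A synthetic numpy sketch (the number of channels, states, and noise level are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        S = rng.uniform(-1, 1, size=(4, 8))       # 8 known input Stokes vectors
        S[0] = 1.0                                # unit-intensity inputs

        W_true = rng.normal(size=(6, 4))          # unknown 6-channel measurement matrix
        P = W_true @ S + 1e-3 * rng.normal(size=(6, 8))   # measured responses + noise

        # Data reduction matrix D maps measurements back to Stokes vectors: S ≈ D P.
        D, *_ = np.linalg.lstsq(P.T, S.T, rcond=None)
        D = D.T
        print(np.abs(D @ P - S).max())            # residual check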

  11. A New Method for Interpreting Nonstationary Running Correlations and Its Application to the ENSO-EAWM Relationship

    NASA Astrophysics Data System (ADS)

    Geng, Xin; Zhang, Wenjun; Jin, Fei-Fei; Stuecker, Malte F.

    2018-01-01

    We here propose a new statistical method to interpret nonstationary running correlations by decomposing them into a stationary part and a first-order Taylor expansion approximation for the nonstationary part. Then, this method is applied to investigate the nonstationary behavior of the El Niño-Southern Oscillation (ENSO)-East Asian winter monsoon (EAWM) relationship, which exhibits prominent multidecadal variations. It is demonstrated that the first-order approximation of the nonstationary part can be expressed to a large extent by the impact of the nonlinear interaction between the Atlantic Multidecadal Oscillation (AMO) and ENSO (AMO*Niño3.4) on the EAWM. Therefore, the nonstationarity in the ENSO-EAWM relationship comes predominantly from the impact of an AMO modulation on the ENSO-EAWM teleconnection via this key nonlinear interaction. This general method can be applied to investigate nonstationary relationships that are often observed between various different climate phenomena.
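
    A toy sketch of the two ingredients, a running correlation and a regression containing the AMO*Niño3.4 interaction, on synthetic series (all data below are random placeholders, not observed indices):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        n = 120                                            # e.g. 120 winters
        nino34 = rng.normal(size=n)
        amo = np.sin(np.linspace(0, 4 * np.pi, n))         # slow multidecadal signal
        eawm = -0.5 * nino34 - 0.4 * amo * nino34 + rng.normal(scale=0.5, size=n)

        s = pd.DataFrame({"nino34": nino34, "eawm": eawm})
        run_corr = s["nino34"].rolling(21, center=True).corr(s["eawm"])
        print(float(run_corr.min()), float(run_corr.max()))  # nonstationary correlation

        # First-order model: eawm ~ nino34 + amo*nino34; the interaction term
        # carries the nonstationary part of the relationship.
        X = np.column_stack([np.ones(n), nino34, amo * nino34])
        beta, *_ = np.linalg.lstsq(X, eawm, rcond=None)
        print(beta)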

  12. Measuring Primary Students' Graph Interpretation Skills Via a Performance Assessment: A case study in instrument development

    NASA Astrophysics Data System (ADS)

    Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth

    2015-11-01

    This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research project. Fifty-five students participated in a performance assessment interview at the beginning and end of a place-based investigation. Two forms of the assessment were created and counterbalanced within class at pre and post. In situ scoring was conducted such that responses were scored as correct versus incorrect during the assessment's administration. Criterion validity analysis demonstrated an age-level progression in student scores. Tests of discriminant validity showed that the instrument detected variability in interpretation skills across each of three graph types (line, bar, dot plot). Convergent validity was established by correlating in situ scores with those from the Graph Interpretation Scoring Rubric. Students' proficiency with interpreting different types of graphs matched expectations based on age and the standards-based progression of graphs across primary school grades. The assessment tasks were also effective at detecting pre-post gains in students' interpretation of line graphs and dot plots after the place-based project. The results of the case study are discussed in relation to the common challenges associated with performance assessment. Implications are presented in relation to the need for authentic and performance-based instructional and assessment tasks to respond to the Common Core State Standards and the Next Generation Science Standards.

  13. Validation of a standardized extraction method for formalin-fixed paraffin-embedded tissue samples.

    PubMed

    Lagheden, Camilla; Eklund, Carina; Kleppe, Sara Nordqvist; Unger, Elizabeth R; Dillner, Joakim; Sundström, Karin

    2016-07-01

Formalin-fixed paraffin-embedded (FFPE) samples can be DNA-extracted and used for human papillomavirus (HPV) genotyping. The xylene-based gold standard for extracting FFPE samples is laborious, suboptimal, and involves health hazards for the personnel involved. Our aim was to compare extraction with the standard xylene method to a xylene-free method used in an HPV LabNet Global Reference Laboratory at the Centers for Disease Control and Prevention (CDC), based on a commercial method with an extra heating step. Fifty FFPE samples were randomly selected from a national audit of all cervical cancer cases diagnosed in Sweden during 10 years. For each case-block, a blank-block was sectioned as a control for contamination. For xylene extraction, the standard WHO Laboratory Manual protocol was used. For the CDC method, the manufacturer's protocol was followed except for an extra heating step, 120°C for 20 min. Samples were extracted and tested in parallel with β-globin real-time PCR, HPV16 real-time PCR, and HPV typing using modified general primers (MGP)-PCR and Luminex assays. For a valid result, the blank-block had to be β-globin-negative in all tests and the case-block positive for β-globin. Overall, detection was improved with the heating method, and the proportion of HPV-positive samples increased from 70% to 86% (p=0.039). For all samples where HPV type concordance could be evaluated, there was 100% type concordance. A xylene-free and robust extraction method for HPV-DNA typing in FFPE material is currently in great demand. Our proposed standardized protocol appears to be generally useful. Copyright © 2016. Published by Elsevier B.V.

  14. Interpreting Quantum Logic as a Pragmatic Structure

    NASA Astrophysics Data System (ADS)

    Garola, Claudio

    2017-12-01

Many scholars maintain that the language of quantum mechanics introduces a quantum notion of truth which is formalized by (standard, sharp) quantum logic and is incompatible with the classical (Tarskian) notion of truth. We show that quantum logic can be identified (up to an equivalence relation) with a fragment of a pragmatic language LGP of assertive formulas, which are justified or unjustified rather than true or false. Quantum logic can then be interpreted as an algebraic structure that formalizes properties of the notion of empirical justification according to quantum mechanics, rather than properties of a quantum notion of truth. This conclusion agrees with a general integrationist perspective that interprets nonstandard logics as theories of metalinguistic notions different from truth, thus avoiding incompatibility with classical notions and preserving the globality of logic.

  15. Interpreting Meta-Analyses of Genome-Wide Association Studies

    PubMed Central

    Han, Buhm; Eskin, Eleazar

    2012-01-01

    Meta-analysis is an increasingly popular tool for combining multiple genome-wide association studies in a single analysis to identify associations with small effect sizes. The effect sizes between studies in a meta-analysis may differ and these differences, or heterogeneity, can be caused by many factors. If heterogeneity is observed in the results of a meta-analysis, interpreting the cause of heterogeneity is important because the correct interpretation can lead to a better understanding of the disease and a more effective design of a replication study. However, interpreting heterogeneous results is difficult. The standard approach of examining the association p-values of the studies does not effectively predict if the effect exists in each study. In this paper, we propose a framework facilitating the interpretation of the results of a meta-analysis. Our framework is based on a new statistic representing the posterior probability that the effect exists in each study, which is estimated utilizing cross-study information. Simulations and application to the real data show that our framework can effectively segregate the studies predicted to have an effect, the studies predicted to not have an effect, and the ambiguous studies that are underpowered. In addition to helping interpretation, the new framework also allows us to develop a new association testing procedure taking into account the existence of effect. PMID:22396665
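
    Interpreting heterogeneity presupposes quantifying it; a minimal sketch of the usual fixed-effect summary with Cochran's Q and I² (effect sizes invented; the paper's posterior-probability statistic is not reproduced here):

        import numpy as np
        from scipy import stats

        y = np.array([0.25, 0.40, 0.05, 0.30, -0.02])   # per-study effect sizes
        v = np.array([0.01, 0.02, 0.015, 0.01, 0.03])   # their variances

        w = 1 / v
        theta = np.sum(w * y) / np.sum(w)               # fixed-effect estimate
        Q = np.sum(w * (y - theta) ** 2)                # Cochran's Q
        k = len(y)
        I2 = max(0.0, (Q - (k - 1)) / Q)                # I² heterogeneity fraction
        print(theta, Q, I2, stats.chi2.sf(Q, k - 1))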

  16. James Madison's "Public" As Interpreter of the Constitution.

    ERIC Educational Resources Information Center

    Dewey, Donald O.

    James Madison's thoughts on various interpretations of the Constitution maintain that public opinion is the ultimate method of legitimizing the document. The Constitution must prevail against mere public opinion, but public opinion may be used to establish the meaning of the Constitution when conflicting interpretations exist. The public good and…

  17. Pressure balance cross-calibration method using a pressure transducer as transfer standard

    PubMed Central

    Olson, D; Driver, R. G.; Yang, Y

    2016-01-01

    Piston gauges or pressure balances are widely used to realize the SI unit of pressure, the pascal, and to calibrate pressure sensing devices. However, their calibration is time consuming and requires a lot of technical expertise. In this paper, we propose an alternate method of performing a piston gauge cross calibration that incorporates a pressure transducer as an immediate in-situ transfer standard. For a sufficiently linear transducer, the requirement to exactly balance the weights on the two pressure gauges under consideration is greatly relaxed. Our results indicate that this method can be employed without a significant increase in measurement uncertainty. Indeed, in the test case explored here, our results agreed with the traditional method within standard uncertainty, which was less than 6 parts per million. PMID:28303167
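
    A sketch of the cross-calibration logic under the sufficiently-linear-transducer assumption: fit the transducer against the reference gauge, then use the fit to assign pressures generated by the gauge under test (all readings invented):

        import numpy as np

        rng = np.random.default_rng(3)
        # Step 1: transducer readings at pressures generated by reference gauge A.
        p_ref = np.array([20e3, 40e3, 60e3, 80e3, 100e3])          # Pa
        u_ref = 1.0002 * p_ref + 15.0 + rng.normal(0, 2, 5)        # transducer output

        slope, offset = np.polyfit(p_ref, u_ref, 1)                # linear model

        # Step 2: the transducer reads pressures generated by gauge B, without
        # exactly balancing the weights used on gauge A.
        u_test = np.array([30.1e3, 70.2e3])
        print((u_test - offset) / slope)                           # transferred pressures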

  18. TOWARDS A STANDARD METHOD FOR THE MEASUREMENT OF ORGANIC CARBON IN SEDIMENTS

    EPA Science Inventory

    The precisions achieved by two different methods for analysis of organic carbon in soils and sediments were determined and compared. The first method is a rapid dichromate oxidation technique (Walkley-Black) that has long been a standard in soil chemistry. The second is an automa...

  19. Statistical methods of fracture characterization using acoustic borehole televiewer log interpretation

    NASA Astrophysics Data System (ADS)

    Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.

    2017-08-01

Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
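
    A sketch of the truncated maximum likelihood fit and AIC ranking, with an exponential spacing model as one of the candidate distributions (the model choice and truncation limits are assumptions):

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(4)
        lo, hi = 0.05, 5.0                        # truncation limits (m), assumed
        data = rng.exponential(0.8, 2000)
        data = data[(data > lo) & (data < hi)]    # only observable spacings survive

        def neg_loglik(scale):
            # Exponential pdf renormalized to the observable window [lo, hi].
            norm = stats.expon.cdf(hi, scale=scale) - stats.expon.cdf(lo, scale=scale)
            return -np.sum(stats.expon.logpdf(data, scale=scale) - np.log(norm))

        res = optimize.minimize_scalar(neg_loglik, bounds=(0.01, 10), method="bounded")
        aic = 2 * 1 + 2 * res.fun                 # one fitted parameter
        print(res.x, aic)                         # compare AIC across candidate models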

  20. Electrocardiography: A Technologist's Guide to Interpretation.

    PubMed

    Tso, Colin; Currie, Geoffrey M; Gilmore, David; Kiat, Hosen

    2015-12-01

    The nuclear medicine technologist works with electrocardiography when performing cardiac stress testing and gated cardiac imaging and when monitoring critical patients. To enhance patient care, basic electrocardiogram interpretation skills and recognition of key arrhythmias are essential for the nuclear medicine technologist. This article provides insight into the anatomy of an electrocardiogram trace, covers basic electrocardiogram interpretation methods, and describes an example case typical in the nuclear medicine environment. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  1. Standard care quality determines treatment outcomes in control groups of HAART-adherence intervention studies: implications for the interpretation and comparison of intervention effects.

    PubMed

    de Bruin, Marijn; Viechtbauer, Wolfgang; Hospers, Harm J; Schaalma, Herman P; Kok, Gerjo

    2009-11-01

Clinical trials of behavioral interventions seek to enhance evidence-based health care. However, if the quality of standard care provided to control conditions varies between studies and affects outcomes, intervention effects cannot be directly interpreted or compared. The objective of the present study was to examine whether standard care quality (SCQ) could be reliably assessed, varies between studies of highly active antiretroviral therapy (HAART) adherence interventions, and is related to the proportion of patients achieving an undetectable viral load ("success rate"). Databases were searched for relevant articles. Authors of selected studies retrospectively completed a checklist with standard care activities, which were coded to compute SCQ scores. The relationship between SCQ and the success rates was examined using meta-regression. Outcome measures were Cronbach's alpha, variability in SCQ, and the relation between SCQ and success rate. Reliability of the SCQ instrument was high (Cronbach's alpha = .91). SCQ scores ranged from 3.7 to 27.8 (total range = 0-30) and were highly predictive of success rate (p = .002). Variation in SCQ provided to control groups may substantially influence effect sizes of behavior change interventions. Future trials should therefore assess and report SCQ, and meta-analyses should control for variability in SCQ, thereby producing more accurate estimates of the effectiveness of behavior change interventions. PsycINFO Database Record (c) 2009 APA, all rights reserved.
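
    The reliability figure reported for the checklist is a Cronbach's alpha; a minimal sketch of that computation from a studies-by-items score matrix (data synthetic):

        import numpy as np

        rng = np.random.default_rng(5)
        latent = rng.normal(size=(30, 1))                        # shared quality signal
        items = (latent + rng.normal(size=(30, 15)) > 0).astype(float)  # 30 studies x 15 items

        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)       # variance of total SCQ scores
        alpha = k / (k - 1) * (1 - item_var / total_var)
        print(alpha)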

  2. Interpretation miniatures

    NASA Astrophysics Data System (ADS)

    Nikolić, Hrvoje

Most physicists do not have patience for reading long and obscure interpretation arguments and disputes. Hence, to attract the attention of a wider physics community, in this paper various old and new aspects of quantum interpretations are explained in a concise and simple (almost trivial) form. About the “Copenhagen” interpretation, we note that there are several different versions of it and explain how to make sense of the “local nonreality” interpretation. About the many-world interpretation (MWI), we explain that it is neither local nor nonlocal, that it cannot explain the Born rule, that it suffers from the preferred basis problem, and that quantum suicide cannot be used to test it. About the Bohmian interpretation, we explain that it is analogous to dark matter, use it to explain that there is no big difference between nonlocal correlation and nonlocal causation, and use some condensed-matter ideas to outline how nonrelativistic Bohmian theory could be a theory of everything. We also explain how different interpretations can be used to demystify the delayed choice experiment, to resolve the problem of time in quantum gravity, and to provide alternatives to quantum nonlocality. Finally, we explain why life is compatible with the second law.

  3. Accurate determination of arsenic in arsenobetaine standard solutions of BCR-626 and NMIJ CRM 7901-a by neutron activation analysis coupled with internal standard method.

    PubMed

    Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki

    2010-09-15

Neutron activation analysis (NAA) coupled with an internal standard method was applied to the determination of As in the certified reference materials (CRMs) of arsenobetaine (AB) standard solutions, to verify their certified values. Gold was used as an internal standard to compensate for differences in neutron exposure within an irradiation capsule and to improve sample-to-sample repeatability. Application of the internal standard method also significantly improved the linearity of the calibration curve up to 1 µg of As. The analytical reliability of the proposed method was evaluated by k₀-standardization NAA. The analytical results for As in the AB standard solutions BCR-626 and NMIJ CRM 7901-a were (499 ± 55) mg kg⁻¹ (k=2) and (10.16 ± 0.15) mg kg⁻¹ (k=2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, whereas that of NMIJ CRM 7901-a was almost negligible. Copyright (c) 2010 Elsevier B.V. All rights reserved.
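
    The internal-standard correction amounts to working with As/Au count ratios rather than raw counts, so that neutron-flux differences inside the capsule cancel. A schematic sketch (all numbers invented):

        import numpy as np

        # Calibration: As standards, each spiked with the same Au internal standard.
        as_mass = np.array([0.1, 0.25, 0.5, 1.0])        # µg As in standards
        ratio_std = np.array([0.42, 1.03, 2.11, 4.18])   # As counts / Au counts

        k, b = np.polyfit(as_mass, ratio_std, 1)         # ratio = k * mass + b

        # Sample: the Au monitor compensates for the flux seen by this position.
        ratio_sample = 2.55
        print((ratio_sample - b) / k)                    # µg As in the sample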

  4. The relationship between symbolic interactionism and interpretive description.

    PubMed

    Oliver, Carolyn

    2012-03-01

    In this article I explore the relationship between symbolic interactionist theory and interpretive description methodology. The two are highly compatible, making symbolic interactionism an excellent theoretical framework for interpretive description studies. The pragmatism underlying interpretive description supports locating the methodology within this cross-disciplinary theory to make it more attractive to nonnursing researchers and expand its potential to address practice problems across the applied disciplines. The theory and method are so compatible that symbolic interactionism appears to be part of interpretive description's epistemological foundations. Interpretive description's theoretical roots have, to date, been identified only very generally in interpretivism and the philosophy of nursing. A more detailed examination of its symbolic interactionist heritage furthers the contextualization or forestructuring of the methodology to meet one of its own requirements for credibility.

  5. [Carl Friedrich von Weizsäcker and the interpretations of quantum theory].

    PubMed

    Stöckler, Manfred

    2014-01-01

What are 'interpretations' of quantum theory? What are the differences between Carl Friedrich von Weizsäcker's approach and contemporary views? The various interpretations of quantum mechanics give diverse answers to questions concerning the relation between the measuring process and standard time development, the embedding of quantum objects in space ('wave-particle dualism'), and the reference of state vectors. Does the wave function describe states in the real world, or does it refer to our knowledge about nature? First, some relevant conceptions in Weizsäcker's book The Structure of Physics (Der Aufbau der Physik, 1985) are introduced. In a second step I point out why his approach is no longer present in contemporary debates. One reason is that Weizsäcker was mainly influenced by classical philosophy (Plato, Aristotle, Kant). He did not esteem the philosophy of science that was developed in the spirit of logical empiricism, and so he lost interest in disputes with Anglo-Saxon philosophy of quantum mechanics. In particular, his interpretation of probability and his analysis of the collapse of the state function as a change in knowledge differ from contemporary standard views. In recent years, however, epistemic interpretations of quantum mechanics have been proposed that share some of Weizsäcker's intuitions.

  6. A Stable Whole Building Performance Method for Standard 90.1-Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Eley, Charles

    2016-06-01

In May of 2013 we introduced a new approach for compliance with Standard 90.1 that was under development based on the Performance Rating Method of Appendix G to Standard 90.1. Since then, the approach has been finalized through Addendum BM to Standard 90.1-2013 and will be published in the 2016 edition of the Standard. In the meantime, ASHRAE has published an advance copy of Appendix G including Addendum BM and several other addenda so that software developers and energy program administrators can get a preview of what is coming in the 2016 edition of the Standard. This article is an update on Addendum BM, summarizes changes made to the original concept as introduced in May of 2013, and provides an approach for developing performance targets for code compliance and beyond-code programs.

  7. Interpreting Medical Information Using Machine Learning and Individual Conditional Expectation.

    PubMed

    Nohara, Yasunobu; Wakata, Yoshifumi; Nakashima, Naoki

    2015-01-01

Recently, machine-learning techniques have spread to many fields. However, machine learning is still not popular in the medical research field, due to the difficulty of interpreting the resulting models. In this paper, we introduce a method of interpreting medical information using a machine-learning technique. The method gives a new explanation of partial dependence plots and individual conditional expectation plots for the medical research field.
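
    A minimal sketch of individual conditional expectation: hold one feature at each grid value, predict per patient, and average the resulting ICE curves to obtain the partial dependence (the model and data are placeholders):

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(6)
        X = rng.normal(size=(200, 5))                    # e.g. 5 clinical variables
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        model = GradientBoostingClassifier().fit(X, y)

        grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
        ice = np.empty((len(X), len(grid)))
        for j, v in enumerate(grid):
            Xv = X.copy()
            Xv[:, 0] = v                                 # force feature 0 to grid value
            ice[:, j] = model.predict_proba(Xv)[:, 1]    # one ICE curve per patient

        pdp = ice.mean(axis=0)                           # PDP = average of ICE curves
        print(pdp.round(2))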

  8. 2D Potential Theory using Complex Algebra: New Perspectives for Interpretation of Marine Magnetic Anomaly

    NASA Astrophysics Data System (ADS)

    Le Maire, P.; Munschy, M.

    2017-12-01

Interpretation of marine magnetic anomalies enables accurate global kinematic models to be constructed. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, based on the fit between magnetic data and forward models. We propose a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called the CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them. In the complex plane, the effect of the apparent inclination is to rotate the curves, whereas in the standard display the shape of the anomaly evolves in a more complicated way. This method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma, and the apparent inclination of the magnetization is computed.
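
    The property being exploited, that in the complex plane a change of apparent inclination simply rotates the anomaly curve, can be illustrated on a toy profile (this is only the rotation effect, not the authors' reformulation of Talwani's equations):

        import numpy as np

        x = np.linspace(-10, 10, 201)
        cma0 = 1.0 / (x + 1j) ** 2          # toy complex anomaly, 2D source at unit depth

        for incl_deg in (0, 30, 60):
            cma = cma0 * np.exp(1j * np.radians(incl_deg))   # inclination = rotation
            # The Argand curve is rigidly rotated; the real-part profile,
            # i.e. the standard anomaly display, changes shape instead.
            print(incl_deg, round(cma.real.max(), 3))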

  9. The urgency for optimization and harmonization of thyroid hormone analyses and their interpretation in developmental and reproductive toxicology studies.

    PubMed

    Beekhuijzen, Manon; Schneider, Steffen; Barraclough, Narinder; Hallmark, Nina; Hoberman, Alan; Lordi, Sheri; Moxon, Mary; Perks, Deborah; Piersma, Aldert H; Makris, Susan L

    2018-05-02

    In recent years several OECD test guidelines have been updated and some will be updated shortly with the requirement to measure thyroid hormone levels in the blood of mammalian laboratory species. There is, however, an imperative need for clarification and guidance regarding the collection, assessment, and interpretation of thyroid hormone data for regulatory toxicology and risk assessment. Clarification and guidance is needed for 1) timing and methods of blood collection, 2) standardization and validation of the analytical methods, 3) triggers for additional measurements, 4) the need for T4 measurements in postnatal day (PND) 4 pups, and 5) the interpretation of changes in thyroid hormone levels regarding adversity. Discussions on these topics have already been initiated, and involve expert scientists from a number of international multisector organizations. This paper provides an overview of existing issues, current activities and recommendations for moving forward. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Proposed Standards for Medical Education Submissions to the Journal of General Internal Medicine

    PubMed Central

    Bowen, Judith L.; Gerrity, Martha S.; Kalet, Adina L.; Kogan, Jennifer R.; Spickard, Anderson; Wayne, Diane B.

    2008-01-01

    To help authors design rigorous studies and prepare clear and informative manuscripts, improve the transparency of editorial decisions, and raise the bar on educational scholarship, the Deputy Editors of the Journal of General Internal Medicine articulate standards for medical education submissions to the Journal. General standards include: (1) quality questions, (2) quality methods to match the questions, (3) insightful interpretation of findings, (4) transparent, unbiased reporting, and (5) attention to human subjects’ protection and ethical research conduct. Additional standards for specific study types are described. We hope these proposed standards will generate discussion that will foster their continued evolution. Electronic supplementary material The online version of this article (doi:10.1007/s11606-008-0676-z) contains supplementary material, which is available to authorized users. PMID:18612716

  11. Narratives in Mind and Media: A Cognitive Semiotic Account of Novices Interpreting Visual Science Media

    NASA Astrophysics Data System (ADS)

    Matuk, Camillia Faye

Visual representations are central to expert scientific thinking. Meanwhile, novices tend toward narrative conceptions of scientific phenomena. Until recently, however, relationships between visual design, narrative thinking, and their impacts on learning science have only been theoretically pursued. This dissertation first synthesizes different disciplinary perspectives, then offers a mixed-methods investigation into interpretations of scientific representations. Finally, it considers design issues associated with narrative and visual imagery, and explores the possibilities of a pedagogical notation to scaffold the understanding of a standard scientific notation. Throughout, I distinguish two categories of visual media by their relation to narrative: Narrative visual media, which convey content via narrative structure, and Conceptual visual media, which convey states of relationships among objects. Given the role of narrative in framing conceptions of scientific phenomena and perceptions of its representations, I suggest that novices are especially prone to construe both kinds of media in narrative terms. To illustrate, I first describe how novices make meaning of the science conveyed in narrative visual media. Vignettes of an undergraduate student's interpretation of a cartoon about natural selection; and of four 13-year-olds' readings of a comic book about human papillomavirus infection, together demonstrate conditions under which designed visual narrative elements facilitate or hinder understanding. I next consider the interpretation of conceptual visual media with an example of an expert notation from evolutionary biology, the cladogram. By combining clinical interview methods with experimental design, I show how undergraduate students' narrative theories of evolution frame perceptions of the diagram (Study 1); I demonstrate the flexibility of symbolic meaning, both with the content assumed (Study 2A), and with alternate manners of presenting the diagram

  12. Trail Orienteering: An Effective Way To Practice Map Interpretation.

    ERIC Educational Resources Information Center

    Horizons, 1999

    1999-01-01

    Discusses a type of orienteering developed in Great Britain to allow people with physical disabilities to compete on equal terms. Sites are viewed from a wheelchair-accessible main route. The main skill is interpreting the maps at each site, not finding the sites. Describes differences from standard orienteering, how sites work, and essential…

  13. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open-vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  14. Three-dimensional interpretation of TEM soundings

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

We describe an approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. Practical examples of the suggested method are presented.

  15. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations.

    PubMed

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael; Elemento, Olivier

    2017-05-01

    This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu ), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB's interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
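
    A sketch of programmatic access through the REST API; the endpoint path and response fields below are hypothetical, since the abstract confirms only that a queryable API exists:

        import requests

        BASE = "https://pmkb.weill.cornell.edu"
        # Hypothetical endpoint and parameters; consult the PMKB API documentation.
        resp = requests.get(f"{BASE}/api/v1/variants", params={"gene": "EGFR"}, timeout=30)
        resp.raise_for_status()
        for variant in resp.json():          # response schema is an assumption here
            print(variant)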

  16. Wake vortex separation standards : analysis methods

    DOT National Transportation Integrated Search

    1997-01-01

    Wake vortex separation standards are used to prevent hazardous wake vortex encounters. A "safe" separation model can be used to assess the safety of proposed changes in the standards. A safe separation model can be derived from an encounter hazard mo...

  17. Conflicting Interpretations of Scientific Pedagogy

    NASA Astrophysics Data System (ADS)

    Galamba, Arthur

    2016-05-01

Not surprisingly, historical studies have suggested that there is a distance between concepts of teaching methods, their interpretations and their actual use in the classroom. This issue, however, is not always pitched at the personal level in historical studies, which may provide an alternative insight into how teachers conceptualise and engage with concepts of teaching methods. This article provides a case study at this level of conceptualisation by telling the story of Rómulo de Carvalho, an educator from mid-twentieth-century Portugal, who for over 40 years engaged with the heuristic and Socratic methods. The overall argument is that concepts of teaching methods are open to different interpretations and are conceptualised within the melting pot of external social pressures and personal teaching preferences. The practice and thoughts of Carvalho about teaching methods are scrutinised to unveil his conflicting stances: Carvalho was a man able to question the tenets of heurism, but who publicly praised the heurism-like "discovery learning" method years later. The first part of the article contextualises the arrival of heurism in Portugal and how Carvalho attacked its philosophical tenets. In the second part, it dwells on his conflicting positions in relation to pupil-centred approaches. The article concludes with an appreciation of the embedded conflicting nature of the appropriation of concepts of teaching methods, and of Carvalho's contribution to the development of the philosophy of practical work in school science.

  18. Understanding Students' Reasoning: Argumentation Schemes as an Interpretation Method in Science Education

    NASA Astrophysics Data System (ADS)

    Konstantinidou, Aikaterini; Macagno, Fabrizio

    2013-05-01

The purpose of this paper is to investigate the argumentative structure of students' arguments, using argumentation schemes as an instrument for reconstructing the missing premises underlying their reasoning. Building on the recent literature in science education, for an explanation to be persuasive and achieve conceptual change it needs to proceed from the interlocutor's background knowledge to the analysis of the unknown or wrongly interpreted phenomena. Argumentation schemes represent the abstract forms of the most common patterns of human reasoning, combining logical principles with semantic concepts. By identifying the argument structure it is possible to retrieve the missing premises and the crucial concepts and definitions on which the conclusion is based. This method of analysis will be shown to provide the teacher with an instrument to improve his or her explanations by taking into consideration the students' intuitions and deep background knowledge on a specific issue. In this fashion the teacher can advance counterarguments or propose new perspectives on the subject matter in order to persuade the students to accept new scientific concepts.

  19. Combining the Best of Two Standard Setting Methods: The Ordered Item Booklet Angoff

    ERIC Educational Resources Information Center

    Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S.

    2014-01-01

    This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…

  20. Teachers' Interpretations of Exit Exam Scores and College Readiness

    ERIC Educational Resources Information Center

    McIntosh, Shelby

    2013-01-01

    This study examined teachers' interpretations of Virginia's high school exit exam policy through the teachers' responses to a survey. The survey was administered to teachers from one school district in Northern Virginia. The teachers selected for the survey taught a subject in which students must pass a Standards of Learning (SOL) test in order to…

  1. Physical interpretation of antigravity

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; James, Albin

    2016-02-01

    Geodesic incompleteness is a problem in both general relativity and string theory. The Weyl-invariant Standard Model coupled to general relativity (SM +GR ), and a similar treatment of string theory, are improved theories that are geodesically complete. A notable prediction of this approach is that there must be antigravity regions of spacetime connected to gravity regions through gravitational singularities such as those that occur in black holes and cosmological bang/crunch. Antigravity regions introduce apparent problems of ghosts that raise several questions of physical interpretation. It was shown that unitarity is not violated, but there may be an instability associated with negative kinetic energies in the antigravity regions. In this paper we show that the apparent problems can be resolved with the interpretation of the theory from the perspective of observers strictly in the gravity region. Such observers cannot experience the negative kinetic energy in antigravity directly, but can only detect in and out signals that interact with the antigravity region. This is no different from a spacetime black box for which the information about its interior is encoded in scattering amplitudes for in/out states at its exterior. Through examples we show that negative kinetic energy in antigravity presents no problems of principles but is an interesting topic for physical investigations of fundamental significance.

  2. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    DTIC Science & Technology

    2016-08-10

USARIEM Technical Report T16-14: Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures. Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine.

  3. Overcoming Language Barriers in Health Care: Costs and Benefits of Interpreter Services

    PubMed Central

    Jacobs, Elizabeth A.; Shepard, Donald S.; Suaya, Jose A.; Stone, Esta-Lee

    2004-01-01

    Objectives. We assessed the impact of interpreter services on the cost and the utilization of health care services among patients with limited English proficiency. Methods. We measured the change in delivery and cost of care provided to patients enrolled in a health maintenance organization before and after interpreter services were implemented. Results. Compared with English-speaking patients, patients who used the interpreter services received significantly more recommended preventive services, made more office visits, and had more prescriptions written and filled. The estimated cost of providing interpreter services was $279 per person per year. Conclusions. Providing interpreter services is a financially viable method for enhancing delivery of health care to patients with limited English proficiency. PMID:15117713

  4. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…
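
    A sketch of the geometry involved: the Hofstee line runs from (minimum cut score, maximum fail rate) to (maximum cut score, minimum fail rate), and the cut score is where it crosses the empirical fail-rate curve; when no crossing occurs within the rating bounds, the cut score is undefined (scores and ratings invented):

        import numpy as np

        rng = np.random.default_rng(7)
        scores = rng.normal(70, 10, 1000)                 # examinee scores

        c_min, c_max = 60.0, 75.0                         # judges' min/max cut scores
        f_min, f_max = 0.05, 0.40                         # judges' min/max fail rates

        cuts = np.linspace(c_min, c_max, 301)
        fail_curve = np.array([(scores < c).mean() for c in cuts])
        line = f_max + (f_min - f_max) * (cuts - c_min) / (c_max - c_min)

        crossing = np.nonzero(np.diff(np.sign(fail_curve - line)))[0]
        if crossing.size:
            print("cut score ~", round(cuts[crossing[0]], 1))
        else:
            print("undefined: line misses the score distribution curve")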

  5. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences with aspect and elevation. The question arose: how capable are such stability interpretations in drawing conclusions? There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences with aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset constitutes an appropriate sample size. (ii) Smaller subsets were created with similar
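
    For approach (i), a sketch of the nominal-scale computation: given a test, significance level, power, and an effect size formed from the dataset's mean difference and standard deviation, solve for the required sample size (numbers invented):

        from statsmodels.stats.power import TTestIndPower

        effect_size = 0.5 / 1.2        # stability difference / pooled SD, assumed
        n_required = TTestIndPower().solve_power(
            effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
        )
        print(round(n_required))       # observations needed per group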

  6. The Use of Spatial Cognition in Graph Interpretation

    DTIC Science & Technology

    2007-08-01

The National Council of Teachers of Mathematics (NCTM) has emphasized the importance of proactively teaching students of all ages to interpret graphs and use them to make inferences (Reston, VA: National Council of Teachers of Mathematics). Spatial working memory plays a role in visual and graph comprehension (Oh, S., & Kim, M., 2004) and in learning science (Schunn et al., in press). Not coincidentally, in developing its recent national standards, the National Council of Teachers of

  7. Determine equilibrium dissociation constant of drug-membrane receptor affinity using the cell membrane chromatography relative standard method.

    PubMed

    Ma, Weina; Yang, Liu; Lv, Yanni; Fu, Jia; Zhang, Yanmin; He, Langchong

    2017-06-23

The equilibrium dissociation constant (KD) of drug-membrane receptor affinity is the basic parameter that reflects the strength of the interaction. The cell membrane chromatography (CMC) method is an effective technique for studying the characteristics of drug-membrane receptor affinity. In this study, a CMC relative standard method for determining KD was established to analyze the relative KD values of drugs binding to membrane receptors (the epidermal growth factor receptor and the angiotensin II receptor). The KD values obtained by the CMC relative standard method correlated strongly with those obtained by the frontal analysis method. Additionally, the KD values obtained by the CMC relative standard method correlated with the pharmacological activity of the drug being evaluated. The CMC relative standard method is a convenient and effective method for evaluating drug-membrane receptor affinity. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. An Automated, High-Throughput Method for Interpreting the Tandem Mass Spectra of Glycosaminoglycans

    NASA Astrophysics Data System (ADS)

    Duan, Jiana; Jonathan Amster, I.

    2018-05-01

The biological interactions between glycosaminoglycans (GAGs) and other biomolecules are heavily influenced by structural features of the glycan. The structure of GAGs can be assigned using tandem mass spectrometry (MS2), but analysis of these data has, to date, required manual interpretation, a slow process that presents a bottleneck to the broader deployment of this approach to solving biologically relevant problems. Automated interpretation remains a challenge, as GAG biosynthesis is not template-driven, and therefore one cannot predict structures from genomic data, as is done with proteins. The lack of a structure database, a consequence of the non-template biosynthesis, requires a de novo approach to interpretation of the mass spectral data. We propose a model for rapid, high-throughput GAG analysis in which candidate structures are scored for the likelihood that they would produce the features observed in the mass spectrum. To make this approach tractable, a genetic algorithm is used to greatly reduce the search space of isomeric structures that are considered. The time required for analysis is significantly reduced compared to an approach in which every possible isomer is considered and scored. The model is coded in a software package using the MATLAB environment. This approach was tested on tandem mass spectrometry data for long-chain, moderately sulfated chondroitin sulfate oligomers derived from the proteoglycan bikunin; the bikunin data had previously been interpreted manually. Our approach examines glycosidic fragments to localize SO3 modifications to specific residues and yields the same structures reported in the literature, only much more quickly.
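
    A toy genetic algorithm over binary sulfation patterns, with fitness standing in for the likelihood score over observed fragments (every constant here is invented, and the real scoring model over glycosidic fragments is far richer):

        import numpy as np

        rng = np.random.default_rng(8)
        L = 12                                  # candidate sulfation sites
        true = rng.integers(0, 2, L)            # hidden "correct" structure

        def fitness(candidate):
            # Stand-in score: fraction of sites whose predicted fragments
            # would match the observed spectrum (here: agreement with `true`).
            return np.mean(candidate == true)

        pop = rng.integers(0, 2, (40, L))
        for gen in range(60):
            scores = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(scores)][-20:]       # keep the fitter half
            cuts = rng.integers(1, L, 20)                 # one-point crossover
            kids = np.array([np.concatenate([parents[i][:c],
                                             parents[(i + 1) % 20][c:]])
                             for i, c in enumerate(cuts)])
            kids[rng.random(kids.shape) < 0.02] ^= 1      # rare point mutations
            pop = np.vstack([parents, kids])

        best = max(pop, key=fitness)
        print(fitness(best))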

  9. Penultimate interpretation.

    PubMed

    Neuman, Yair

    2010-10-01

Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as 'O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one pays for substituting a low-dimensional symbolic representation for the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious. This price is reification: objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation', a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.

  10. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    ERIC Educational Resources Information Center

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  11. Writing standard operating procedures (SOPs) for cryostorage protocols: using shoot meristem cryopreservation as an example.

    PubMed

    Harding, Keith; Benson, Erica E

    2015-01-01

    Standard operating procedures are a systematic way of making sure that biopreservation processes, tasks, protocols, and operations are correctly and consistently performed. They are the basic documents of biorepository quality management systems and are used in quality assurance, control, and improvement. Methodologies for constructing workflows and writing standard operating procedures and work instructions are described using a plant cryopreservation protocol as an example. This chapter is pertinent to other biopreservation sectors because how methods are written, interpreted, and implemented can affect the quality of storage outcomes.

  12. Setting a standard: the limulus amebocyte lysate assay and the assessment of microbial contamination on spacecraft surfaces.

    PubMed

    Morris, Heather C; Monaco, Lisa A; Steele, Andrew; Wainwright, Norm

    2010-10-01

    Historically, colony-forming units as determined by plate cultures have been the standard unit for microbiological analysis of environmental samples, medical diagnostics, and products for human use. However, the time and materials required make plate cultures expensive and potentially hazardous in the closed environments of future NASA missions aboard the International Space Station and missions to other Solar System targets. The Limulus Amebocyte Lysate (LAL) assay is an established method for ensuring the sterility and cleanliness of samples in the meat-packing and pharmaceutical industries. Each of these industries has verified numerical requirements for the correct interpretation of results from this assay. The LAL assay is a rapid, point-of-use, verified assay that has already been approved by NASA Planetary Protection as an alternate, molecular method for the examination of outbound spacecraft. We hypothesize that standards for molecular techniques, similar to those used by the pharmaceutical and meat-packing industries, need to be set by space agencies to ensure accurate data interpretation and subsequent decision making. In support of this idea, we present research that has been conducted to relate the LAL assay to plate cultures, and we recommend values obtained from these investigations that could assist in interpretation and analysis of data obtained from the LAL assay.

  13. 29 CFR 780.606 - Interpretation of term “agriculture.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...

  14. 29 CFR 780.606 - Interpretation of term “agriculture.”

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...

  15. 29 CFR 780.606 - Interpretation of term “agriculture.”

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...

  16. 29 CFR 780.606 - Interpretation of term “agriculture.”

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...

  17. 29 CFR 780.606 - Interpretation of term “agriculture.”

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Employment in Agriculture and Livestock Auction Operations Under the Section 13(b)(13) Exemption Requirements for Exemption § 780.606 Interpretation of term “agriculture.” Section 3(f) of the Act, which defines...

  18. Deaf Students, Teachers, and Interpreters in the Chemistry Lab

    NASA Astrophysics Data System (ADS)

    Seal, Brenda C.; Wynne, Dorothy H.; MacDonald, Gina

    2002-02-01

    This report describes an undergraduate research program at James Madison University that includes deaf and hard-of-hearing students from Gallaudet University, deaf teachers from schools for the Deaf, and both professional interpreters and students engaged in sign language interpreter training. Methods used over a three-year period to maximize participation and expand research opportunities for the students, teachers, and interpreters are shared with the hope that similar projects might be encouraged and replicated in other programs.

  19. Standardized Evaluation for Multi-National Development Programs.

    ERIC Educational Resources Information Center

    Farrell, W. Timothy

    This paper takes the position that standardized evaluation formats and procedures for multi-national development programs are not only desirable but possible in diverse settings. The key is the localization of standard systems, which involves not only the technical manipulation of items and scales, but also the contextual interpretation of…

  20. Quantitative data standardization of X-ray based densitometry methods

    NASA Astrophysics Data System (ADS)

    Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.

    2018-02-01

    In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between the measured bone mineral density (BMD) values and the given phantom values are also presented for different X-ray based densitometry techniques. The linear relationships shown make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use these factors for standardization and comparison of measurements.
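
    A sketch of the kind of linear correction the abstract implies: fit the measured BMD values against the phantom's given values, then invert the fit to correct later measurements. The phantom densities and scanner readings below are invented for illustration:

        import numpy as np

        known = np.array([50.0, 100.0, 150.0, 200.0])      # phantom BMD, mg/cm^3 (toy values)
        measured = np.array([46.0, 94.0, 141.0, 189.0])    # what a scanner reported (toy values)

        slope, intercept = np.polyfit(known, measured, 1)  # measured = slope*known + intercept

        def corrected(bmd_measured):
            # Invert the calibration line to recover the true density.
            return (bmd_measured - intercept) / slope

        print(round(corrected(120.0), 1))                  # corrected BMD estimate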

  1. 7 CFR 611.10 - Standards, guidelines, and plans.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...

  2. 7 CFR 611.10 - Standards, guidelines, and plans.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...

  3. 7 CFR 611.10 - Standards, guidelines, and plans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...

  4. 7 CFR 611.10 - Standards, guidelines, and plans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...

  5. 7 CFR 611.10 - Standards, guidelines, and plans.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SOIL SURVEYS Soil Survey Operations § 611.10 Standards, guidelines, and plans. (a) NRCS conducts soil surveys under national standards and guidelines for naming, classifying, and interpreting soils and for disseminating soil survey information. (b...

  6. A review of contemporary methods for the presentation of scientific uncertainty.

    PubMed

    Makinson, K A; Hamby, D M; Edwards, J A

    2012-12-01

    Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.

  7. Computer decision support as a source of interpretation error: the case of electrocardiograms.

    PubMed

    Tsai, Theodore L; Fridsma, Douglas B; Gatti, Guido

    2003-01-01

    The aim of this study was to determine the effect that the computer interpretation (CI) of electrocardiograms (EKGs) has on the accuracy of resident (noncardiologist) physicians reading EKGs. A randomized, controlled trial was conducted in a laboratory setting from February through June 2001, using a two-period crossover design with matched pairs of subjects randomly assigned to sequencing groups. Subjects' interpretive accuracy on discrete, cardiologist-determined EKG findings was measured as judged by a board-certified internist. Without the CI, subjects interpreted 48.9% (95% confidence interval, 45.0% to 52.8%) of the findings correctly. With the CI, subjects interpreted 55.4% (51.9% to 58.9%) correctly (p < 0.0001). When the CIs that agreed with the gold standard (Correct CIs) were not included, 53.1% (47.7% to 58.5%) of the findings were interpreted correctly. When the correct CI was included, accuracy increased to 68.1% (63.2% to 72.7%; p < 0.0001). When computer advice that did not agree with the gold standard (Incorrect CI) was not provided to the subjects, 56.7% (48.5% to 64.5%) of findings were interpreted correctly. Accuracy dropped to 48.3% (40.4% to 56.4%) when the incorrect computer advice was provided (p = 0.131). Subjects erroneously agreed with the incorrect CI more often when it was presented with the EKG (67.7%; 57.2% to 76.7%) than when it was not (34.6%; 23.8% to 47.3%; p < 0.0001). Computer decision support systems can generally improve the interpretive accuracy of internal medicine residents in reading EKGs. However, subjects were influenced significantly by incorrect advice, which tempers the overall usefulness of computer-generated advice in this and perhaps other areas.

  8. Computer Decision Support as a Source of Interpretation Error: The Case of Electrocardiograms

    PubMed Central

    Tsai, Theodore L.; Fridsma, Douglas B.; Gatti, Guido

    2003-01-01

    Objective: The aim of this study was to determine the effect that the computer interpretation (CI) of electrocardiograms (EKGs) has on the accuracy of resident (noncardiologist) physicians reading EKGs. Design: A randomized, controlled trial was conducted in a laboratory setting from February through June 2001, using a two-period crossover design with matched pairs of subjects randomly assigned to sequencing groups. Measurements: Subjects' interpretive accuracy on discrete, cardiologist-determined EKG findings was measured as judged by a board-certified internist. Results: Without the CI, subjects interpreted 48.9% (95% confidence interval, 45.0% to 52.8%) of the findings correctly. With the CI, subjects interpreted 55.4% (51.9% to 58.9%) correctly (p < 0.0001). When the CIs that agreed with the gold standard (Correct CIs) were not included, 53.1% (47.7% to 58.5%) of the findings were interpreted correctly. When the correct CI was included, accuracy increased to 68.1% (63.2% to 72.7%; p < 0.0001). When computer advice that did not agree with the gold standard (Incorrect CI) was not provided to the subjects, 56.7% (48.5% to 64.5%) of findings were interpreted correctly. Accuracy dropped to 48.3% (40.4% to 56.4%) when the incorrect computer advice was provided (p = 0.131). Subjects erroneously agreed with the incorrect CI more often when it was presented with the EKG (67.7%; 57.2% to 76.7%) than when it was not (34.6%; 23.8% to 47.3%; p < 0.0001). Conclusions: Computer decision support systems can generally improve the interpretive accuracy of internal medicine residents in reading EKGs. However, subjects were influenced significantly by incorrect advice, which tempers the overall usefulness of computer-generated advice in this and perhaps other areas. PMID:12807810
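
    The percentages above are simple proportions of findings interpreted correctly, each with a binomial confidence interval. As a minimal sketch, one standard way to compute such an interval is the Wilson score method (the study does not state which method it used, and the counts below are illustrative):

        import math

        def wilson_ci(successes, n, z=1.96):
            """95% Wilson score interval for a binomial proportion."""
            p = successes / n
            denom = 1 + z**2 / n
            center = (p + z**2 / (2 * n)) / denom
            half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
            return center - half, center + half

        # e.g., 489 of 1000 findings interpreted correctly (illustrative counts only)
        print(tuple(round(x, 3) for x in wilson_ci(489, 1000)))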

  9. Standard Setting in a Small Scale OSCE: A Comparison of the Modified Borderline-Group Method and the Borderline Regression Method

    ERIC Educational Resources Information Center

    Wood, Timothy J.; Humphrey-Murto, Susan M.; Norman, Geoffrey R.

    2006-01-01

    When setting standards, administrators of small-scale OSCEs often face several challenges, including a lack of resources, a lack of available expertise in statistics, and difficulty in recruiting judges. The Modified Borderline-Group Method is a standard setting procedure that compensates for these challenges by using physician examiners and is…

  10. Investigating a Judgemental Rank-Ordering Method for Maintaining Standards in UK Examinations

    ERIC Educational Resources Information Center

    Black, Beth; Bramley, Tom

    2008-01-01

    A new judgemental method of equating raw scores on two tests, based on rank-ordering scripts from both tests, has been developed by Bramley. The rank-ordering method has potential application as a judgemental standard-maintaining mechanism, because given a mark on one test (e.g. the A grade boundary mark), the equivalent mark (i.e. at the same…

  11. Fissures in Standards Formulation: The Role of Neoconservative and Neoliberal Discourses in Justifying Standards Development in Wisconsin and Minnesota

    ERIC Educational Resources Information Center

    Caughlan, Samantha; Beach, Richard

    2007-01-01

    An analysis of English/language arts standards development in Wisconsin and Minnesota in the late 1990s and early 2000s shows a process of compromise between neoliberal and neoconservative factions involved in promoting and writing standards, with the voices of educators conspicuously absent. Interpretive and critical discourse analyses of…

  12. CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.

    PubMed

    Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng

    2017-01-01

    Standardized terminology is a prerequisite for data exchange in the analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is therefore necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidence model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different healthcare organizations and evaluates the diversity of the mappings to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping results and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.
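
    A minimal sketch of the aggregation idea, assuming (hypothetically) that each organization submits a proposed ICD-10 code for a local term and that the agreement ratio across submissions serves as the confidence score; the terms, codes, and threshold are invented:

        from collections import Counter

        submissions = {                       # local term -> codes proposed by different sites
            "heart attack": ["I21", "I21", "I21", "I25"],
            "sugar disease": ["E11", "E10", "E11"],
        }

        def aggregate(proposals, min_confidence=0.6):
            counts = Counter(proposals)
            code, votes = counts.most_common(1)[0]
            confidence = votes / len(proposals)   # agreement ratio across sites
            return (code, confidence) if confidence >= min_confidence else (None, confidence)

        for term, proposals in submissions.items():
            print(term, aggregate(proposals))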

  13. Standardized Methods for Enhanced Quality and Comparability of Tuberculous Meningitis Studies.

    PubMed

    Marais, Ben J; Heemskerk, Anna D; Marais, Suzaan S; van Crevel, Reinout; Rohlwink, Ursula; Caws, Maxine; Meintjes, Graeme; Misra, Usha K; Mai, Nguyen T H; Ruslami, Rovina; Seddon, James A; Solomons, Regan; van Toorn, Ronald; Figaji, Anthony; McIlleron, Helen; Aarnoutse, Robert; Schoeman, Johan F; Wilkinson, Robert J; Thwaites, Guy E

    2017-02-15

    Tuberculous meningitis (TBM) remains a major cause of death and disability in tuberculosis-endemic areas, especially in young children and immunocompromised adults. Research aimed at improving outcomes is hampered by poor standardization, which limits study comparison and the generalizability of results. We propose standardized methods for the conduct of TBM clinical research that were drafted at an international tuberculous meningitis research meeting organized by the Oxford University Clinical Research Unit in Vietnam. We propose a core dataset including demographic and clinical information to be collected at study enrollment, important aspects related to patient management and monitoring, and standardized reporting of patient outcomes. The criteria proposed for the conduct of observational and intervention TBM studies should improve the quality of future research outputs, can facilitate multicenter studies and meta-analyses of pooled data, and could provide the foundation for a global TBM data repository.

  14. Near-infrared fluorescence image quality test methods for standardized performance evaluation

    NASA Astrophysics Data System (ADS)

    Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2017-03-01

    Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing the visualization of cancers, perfusion, and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. To address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Toward this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics, including spatial resolution, depth of field, and sensitivity. Fluorescence properties were characterized by generating excitation-emission matrices of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches, as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.

  15. Analysis of biochemical genetic data on Jewish populations: II. Results and interpretations of heterogeneity indices and distance measures with respect to standards.

    PubMed

    Karlin, S; Kenett, R; Bonné-Tamir, B

    1979-05-01

    A nonparametric statistical methodology is used for the analysis of biochemical frequency data observed on a series of nine Jewish and six non-Jewish populations. Two categories of statistics are used: heterogeneity indices and various distance measures with respect to a standard. The latter are more discriminating in exploiting historical, geographical and culturally relevant information. A number of partial orderings and distance relationships among the populations are determined. Our concern in this study is to analyze similarities and differences among the Jewish populations, in terms of the gene frequency distributions for a number of genetic markers. Typical questions discussed are as follows: These Jewish populations differ in certain morphological and anthropometric traits. Are there corresponding differences in biochemical genetic constitution? How can we assess the extent of heterogeneity between and within groupings? Which class of markers (blood typings or protein loci) discriminates better among the separate populations? The results are quite surprising. For example, we found the Ashkenazi, Sephardi and Iraqi Jewish populations to be consistently close in genetic constitution and distant from all the other populations, namely the Yemenite and Cochin Jews, the Arabs, and the non-Jewish German and Russian populations. We found the Polish Jewish community the most heterogeneous among all Jewish populations. The blood loci discriminate better than the protein loci. A number of possible interpretations and hypotheses for these and other results are offered. The method devised for this analysis should prove useful in studying similarities and differences for other groups of populations for which substantial biochemical polymorphic data are available.

  16. Analysis of biochemical genetic data on Jewish populations: II. Results and interpretations of heterogeneity indices and distance measures with respect to standards.

    PubMed Central

    Karlin, S; Kenett, R; Bonné-Tamir, B

    1979-01-01

    A nonparametric statistical methodology is used for the analysis of biochemical frequency data observed on a series of nine Jewish and six non-Jewish populations. Two categories of statistics are used: heterogeneity indices and various distance measures with respect to a standard. The latter are more discriminating in exploiting historical, geographical and culturally relevant information. A number of partial orderings and distance relationships among the populations are determined. Our concern in this study is to analyze similarities and differences among the Jewish populations, in terms of the gene frequency distributions for a number of genetic markers. Typical questions discussed are as follows: These Jewish populations differ in certain morphological and anthropometric traits. Are there corresponding differences in biochemical genetic constitution? How can we assess the extent of heterogeneity between and within groupings? Which class of markers (blood typings or protein loci) discriminates better among the separate populations? The results are quite surprising. For example, we found the Ashkenazi, Sephardi and Iraqi Jewish populations to be consistently close in genetic constitution and distant from all the other populations, namely the Yemenite and Cochin Jews, the Arabs, and the non-Jewish German and Russian populations. We found the Polish Jewish community the most heterogeneous among all Jewish populations. The blood loci discriminate better than the protein loci. A number of possible interpretations and hypotheses for these and other results are offered. The method devised for this analysis should prove useful in studying similarities and differences for other groups of populations for which substantial biochemical polymorphic data are available. PMID:380330
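
    A toy sketch of a distance-to-standard measure of the kind described in these two records, using the total variation distance between allele-frequency vectors; the population labels and frequencies are invented:

        import numpy as np

        standard = np.array([0.40, 0.35, 0.25])          # allele frequencies of the "standard"
        populations = {
            "pop_A": np.array([0.42, 0.33, 0.25]),
            "pop_B": np.array([0.20, 0.50, 0.30]),
        }

        def tv_distance(p, q):
            # Total variation distance between two frequency distributions.
            return 0.5 * np.abs(p - q).sum()

        # Rank populations by closeness to the standard.
        for name, freqs in sorted(populations.items(),
                                  key=lambda kv: tv_distance(kv[1], standard)):
            print(name, round(tv_distance(freqs, standard), 3))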

  17. U-interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arvind; Gostelow, K.P.

    1982-02-01

    The author argues that by giving a unique name to every activity generated during a computation, the u-interpreter can provide greater concurrency in the interpretation of data flow graphs. 19 references.

  18. Evaluation of the methods for enumerating coliform bacteria from water samples using precise reference standards.

    PubMed

    Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M

    2006-04-01

    To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.

  19. Wavelength selection method with standard deviation: application to pulse oximetry.

    PubMed

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that provides low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
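
    A minimal sketch of the selection idea, assuming repeated spectra arranged as a matrix (rows = time samples, columns = wavelengths): compute each wavelength's standard deviation over time and keep the wavelengths where it is smallest. The spectra below are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        wavelengths = np.arange(600, 1000, 10)           # nm (illustrative grid)
        # Synthetic repeated spectra: flat signal + wavelength-dependent temporal noise
        noise_scale = 0.02 + 0.05 * np.abs(np.sin(wavelengths / 60.0))
        spectra = 1.0 + rng.normal(0.0, noise_scale, size=(200, wavelengths.size))

        std_map = spectra.std(axis=0)                    # standard deviation per wavelength
        best = wavelengths[np.argsort(std_map)[:2]]      # two least temporally noisy wavelengths
        print(sorted(best.tolist()))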

  20. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    PubMed

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
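
    A sketch of a concordance-style estimator in the spirit of the AUC analogy above: accuracy is estimated as the probability that the test orders a random pair of patients the same way the continuous gold standard does, with ties counted as one half. This is our illustration of the idea, not necessarily the paper's exact estimator:

        from itertools import combinations

        def concordance(test, gold):
            """P(test and gold standard rank a random patient pair concordantly);
            tied pairs count 1/2. Analogous to the area under an ROC curve."""
            agree, pairs = 0.0, 0
            for i, j in combinations(range(len(test)), 2):
                dt = test[i] - test[j]
                dg = gold[i] - gold[j]
                if dt * dg > 0:
                    agree += 1.0          # concordant pair
                elif dt == 0 or dg == 0:
                    agree += 0.5          # tie on either scale
                pairs += 1
            return agree / pairs

        # Illustrative data: serum iron by a quick test vs. a reference assay
        print(round(concordance([3.1, 2.4, 5.0, 4.2], [60, 45, 110, 90]), 3))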

  1. Standard Methods for Bolt-Bearing Testing of Textile Composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.; Masters, J. E.

    1995-01-01

    The response of three 2-D braided materials to bolt bearing loading was evaluated using data generated by Boeing Defense and Space Group in Philadelphia, PA. Three test methods, stabilized single shear, unstabilized single shear, and double shear, were compared. In general, these textile composites were found to be sensitive to bolt bearing test methods. The stabilized single shear method yielded higher strengths than the unstabilized single shear method in all cases. The double shear test method always produced the highest strengths but these results may be somewhat misleading. It is therefore recommended that standard material comparisons be made using the stabilized single shear test method. The effects of two geometric parameters, W/D and e/D, were also studied. An evaluation of the effect of the specimen width (W) to hole diameter (D) ratio concluded that bolt bearing responses were consistent with open hole tension results. A W/D ratio of 6 or greater should be maintained. The proximity of the hole to the specimen edge significantly affected strength. In all cases, strength was improved by increasing the ratio of the distance from the hole center to the specimen edge (e) to the hole diameter (D) above 2. An e/D ratio of 3 or greater is recommended.

  2. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
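
    A simplified one-dimensional analogue of the convex, non-greedy fitting idea (not the paper's two-dimensional CRISP objective): estimate a piecewise-constant mean by least squares plus a convex total-variation penalty, so block boundaries emerge from a single convex problem rather than greedy splitting. Assumes the cvxpy package; the penalty weight is arbitrary:

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0.0, 10.0, 80))
        y = np.where(x < 5.0, 1.0, 3.0) + rng.normal(0.0, 0.3, 80)   # two-block mean + noise

        theta = cp.Variable(80)            # fitted mean at each (sorted) observation
        lam = 2.0                          # penalty weight (arbitrary for this sketch)
        objective = cp.Minimize(0.5 * cp.sum_squares(y - theta)
                                + lam * cp.norm1(cp.diff(theta)))
        cp.Problem(objective).solve()

        # The fit is near-constant within each block, with a sharp jump near x = 5.
        print(round(float(theta.value[0]), 2), round(float(theta.value[-1]), 2))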

  3. Dental Students' Interpretations of Digital Panoramic Radiographs on Completely Edentate Patients.

    PubMed

    Kratz, Richard J; Nguyen, Caroline T; Walton, Joanne N; MacDonald, David

    2018-03-01

    The ability of dental students to interpret digital panoramic radiographs (PANs) of edentulous patients has not been documented. The aim of this retrospective study was to compare the ability of second-year (D2) dental students with that of third- and fourth-year (D3-D4) dental students to interpret and identify positional errors in digital PANs obtained from patients with complete edentulism. A total of 169 digital PANs from edentulous patients were assessed by D2 (n=84) and D3-D4 (n=85) dental students at one Canadian dental school. The correctness of the students' interpretations was determined by comparison to a gold standard established by assessments of the same PANs by two experts (a graduate student in prosthodontics and an oral and maxillofacial radiologist). Data collected were from September 1, 2006, when digital radiography was implemented at the university, to December 31, 2012. Nearly all (95%) of the PANs were acceptable diagnostically despite a high proportion (92%) of positional errors detected. A total of 301 positional errors were identified in the sample. The D2 students identified significantly more (p=0.002) positional errors than the D3-D4 students. There was no significant difference (p=0.059) in the distribution of radiographic interpretation errors between the two student groups when compared to the gold standard. Overall, the category of extragnathic findings had the highest number of false negatives (43) reported. In this study, dental students interpreted digital PANs of edentulous patients satisfactorily, but they were more adept at identifying radiographic findings compared to positional errors. Students should be reminded to examine the entire radiograph thoroughly to ensure extragnathic findings are not missed and to recognize and report patient positional errors.

  4. A survey of current practices for genomic sequencing test interpretation and reporting processes in US laboratories.

    PubMed

    O'Daniel, Julianne M; McLaughlin, Heather M; Amendola, Laura M; Bale, Sherri J; Berg, Jonathan S; Bick, David; Bowling, Kevin M; Chao, Elizabeth C; Chung, Wendy K; Conlin, Laura K; Cooper, Gregory M; Das, Soma; Deignan, Joshua L; Dorschner, Michael O; Evans, James P; Ghazani, Arezou A; Goddard, Katrina A; Gornick, Michele; Farwell Hagman, Kelly D; Hambuch, Tina; Hegde, Madhuri; Hindorff, Lucia A; Holm, Ingrid A; Jarvik, Gail P; Knight Johnson, Amy; Mighion, Lindsey; Morra, Massimo; Plon, Sharon E; Punj, Sumit; Richards, C Sue; Santani, Avni; Shirts, Brian H; Spinner, Nancy B; Tang, Sha; Weck, Karen E; Wolf, Susan M; Yang, Yaping; Rehm, Heidi L

    2017-05-01

    As the diagnostic success of genomic sequencing expands, the complexity of this testing should not be overlooked. Numerous laboratory processes are required to support the identification, interpretation, and reporting of clinically significant variants. This study aimed to examine the workflow and reporting procedures among US laboratories to highlight shared practices and identify areas in need of standardization. Surveys and follow-up interviews were conducted with laboratories offering exome and/or genome sequencing to support a research program or for routine clinical services. The 73-item survey elicited multiple-choice and free-text responses that were later clarified with phone interviews. Twenty-one laboratories participated. Practices highly concordant across all groups included consent documentation, multiperson case review, and enabling patient opt-out of incidental or secondary findings analysis. Noted divergence included use of phenotypic data to inform case analysis and interpretation, and reporting of case-specific quality metrics and methods. Few laboratory policies detailed procedures for data reanalysis, data sharing, or patient access to data. This study provides an overview of practices and policies of experienced exome and genome sequencing laboratories. The results enable broader consideration of which practices are becoming standard approaches, where divergence remains, and areas of development in best practice guidelines that may be helpful. Genet Med advance online publication 03 November 2016.

  5. Conflicting Interpretations of Scientific Pedagogy

    ERIC Educational Resources Information Center

    Galamba, Arthur

    2016-01-01

    Not surprisingly, historical studies have suggested that there is a distance between concepts of teaching methods, their interpretations, and their actual use in the classroom. This issue, however, is not always pitched to the personal level in historical studies, which may provide an alternative insight into how teachers conceptualise and engage with…

  6. Interpretable Deep Models for ICU Outcome Prediction

    PubMed Central

    Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan

    2016-01-01

    Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care units (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack interpretability, which is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experiment results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches on mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
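
    A compact sketch of the mimic-learning recipe as described: train gradient boosting trees on the soft predictions of a black-box model. For self-containedness the "deep" teacher is stubbed out with a random forest; the data and hyperparameters are illustrative:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

        # Stand-in for the deep model: any black-box classifier with predict_proba.
        teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        soft_labels = teacher.predict_proba(X)[:, 1]       # "soft" targets to distill

        # Mimic model: gradient boosting trees regress on the teacher's soft labels,
        # yielding an interpretable surrogate (shallow trees, feature importances).
        mimic = GradientBoostingRegressor(max_depth=3, random_state=0).fit(X, soft_labels)

        agreement = np.mean((mimic.predict(X) > 0.5) == (soft_labels > 0.5))
        print(f"mimic/teacher agreement: {agreement:.3f}")
        print("top features:", np.argsort(mimic.feature_importances_)[::-1][:5])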

  7. Research on the Construction of Remote Sensing Automatic Interpretation Symbol Big Data

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Liu, R.; Liu, J.; Cheng, T.

    2018-04-01

    Remote sensing automatic interpretation symbols (RSAIS) provide an inexpensive and fast way of supplying precise in-situ information for image interpretation and accuracy assessment. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-based storage method for massive data. Additionally, it introduced offline and online data update modes and a dynamic data evaluation mechanism, with the aim of creating an efficient approach to RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013-2015 as part of the National Geographic Conditions Monitoring Project of China and has been updated annually since 2016. The RSAIS big data has proven to be a good basis for large-scale image interpretation and field validation. Notably, it also has the potential to support automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.

  8. MUSQA: a CS method to build a multi-standard quality management system

    NASA Astrophysics Data System (ADS)

    Cros, Elizabeth; Sneed, Isabelle

    2002-07-01

    CS Communication & Systèmes, through its long experience of quality management, has built and evolved its Quality Management System according to client requirements; norms, standards and models (ISO, DO178, ECSS, CMM, ...); evolving norms (the transition from ISO 9001:1994 to ISO 9001:2000); and the TQM approach currently being deployed. The aim of this paper is to show how, from this enriching and instructive experience, CS has defined and formalised its method: MuSQA (Multi-Standard Quality Approach). This method makes it possible to build a new Quality Management System or to simplify and unify an existing one. MuSQA's objective is to provide any organisation with an open Quality Management System that can evolve easily and proves to be a useful instrument for everyone, operational as well as non-operational staff.

  9. Preparation of pyrolysis reference samples: evaluation of a standard method using a tube furnace.

    PubMed

    Sandercock, P Mark L

    2012-05-01

    A new, simple method for the reproducible creation of pyrolysis products from different materials that may be found at a fire scene is described. A temperature-programmable steady-state tube furnace was used to generate pyrolysis products from different substrates, including softwoods, paper, vinyl sheet flooring, and carpet. The temperature profile of the tube furnace was characterized, and the suitability of the method for reproducibly creating pyrolysates similar to those found in real fire debris was assessed. The use of this method to create proficiency tests that realistically test an examiner's ability to interpret complex gas chromatography-mass spectrometry fire debris data, and to create a library of pyrolysates generated from materials commonly found at a fire scene, is demonstrated. © 2011 American Academy of Forensic Sciences.

  10. Instructional Basics: Oppelt Standard Method of Therapeutic and Recreational Ice Skating.

    ERIC Educational Resources Information Center

    Oppelt, Kurt

    Detailed in the booklet is the standard ice skating method, and considered are the benefits of therapeutic ice skating for the handicapped and aged. Values for the mentally retarded and physically handicapped are seen to include physiological (such as increased flexibility and improved posture), psychological (including satisfaction and enhanced…

  11. On Measuring Quantitative Interpretations of Reasonable Doubt

    ERIC Educational Resources Information Center

    Dhami, Mandeep K.

    2008-01-01

    Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…

  12. The development and standardization of testing methods for genetically modified organisms and their derived products.

    PubMed

    Zhang, Dabing; Guo, Jinchao

    2011-07-01

    As the worldwide commercialization of genetically modified organisms (GMOs) increases and consumers grow concerned about the safety of GMOs, many countries and regions are issuing labeling regulations for GMOs and their products. Analytical methods for GM ingredients in foods and feed, and their standardization, are essential for the implementation of labeling regulations. To date, GMO testing methods are mainly based on the inserted DNA sequences and newly produced proteins in GMOs. This paper presents an overview of GMO testing methods as well as their standardization. © 2011 Institute of Botany, Chinese Academy of Sciences.

  13. Critical appraisal of rigour in interpretive phenomenological nursing research.

    PubMed

    de Witt, Lorna; Ploeg, Jenny

    2006-07-01

    This paper reports a critical review of published nursing research for expressions of rigour in interpretive phenomenology, and a new framework of rigour specific to this methodology is proposed. The rigour of interpretive phenomenology is an important nursing research methods issue that has direct implications for the legitimacy of nursing science. The use of a generic set of qualitative criteria of rigour for interpretive phenomenological studies is problematic because it is philosophically inconsistent with the methodology and creates obstacles to full expression of rigour in such studies. A critical review was conducted of the published theoretical interpretive phenomenological nursing literature from 1994 to 2004 and the expressions of rigour in this literature identified. We used three sources to inform the derivation of a proposed framework of expressions of rigour for interpretive phenomenology: the phenomenological scholar van Manen, the theoretical interpretive phenomenological nursing literature, and Madison's criteria of rigour for hermeneutic phenomenology. The nursing literature reveals a broad range of criteria for judging the rigour of interpretive phenomenological research. The proposed framework for evaluating rigour in this kind of research contains the following five expressions: balanced integration, openness, concreteness, resonance, and actualization. Balanced integration refers to the intertwining of philosophical concepts in the study methods and findings and a balance between the voices of study participants and the philosophical explanation. Openness is related to a systematic, explicit process of accounting for the multiple decisions made throughout the study process. Concreteness relates to usefulness for practice of study findings. Resonance encompasses the experiential or felt effect of reading study findings upon the reader. Finally, actualization refers to the future realization of the resonance of study findings. Adoption of this

  14. Radiographers' performance in chest X-ray interpretation: the Nigerian experience

    PubMed Central

    Egbe, N O; Akpan, B E

    2015-01-01

    Objective: To assess the performance of Nigerian radiographers in interpretation of plain chest radiographs and to assess whether age, years since qualification and sector of practice are associated with performance. Methods: A test set of 50 radiographs containing 23 cases with no pathology (normal) and 27 abnormal cases (cardiopulmonary conditions) independently confirmed by 3 radiologists were presented to 51 radiographers in a random order. Readers independently evaluated radiographs for absence or presence of disease and stated the location, radiographic features and diagnosis. Readers self-reported their age, years since qualification and sector of practice. Receiver operating characteristic was used to assess the performance. Mann–Whitney U test was used to assess whether age, years since qualification and sector of practice were associated with performance. Results: Mean location sensitivity was 88.9% (95% confidence interval (CI), 78.7–98.0%). Mean sensitivity and specificity were 76.9% (95% CI, 65.8–86.4%) and 79.8% (95% CI, 65.8–86.4%), respectively. Age was not associated with performance (p = 0.07). Number of years qualified as radiographer (p = 0.005) and private practice (p = 0.004) were positively associated with performance. Conclusion: Nigerian radiographers can correctly report chest radiographs to a reasonable standard, and performance is associated with number of years since qualification and the sector of practice. Advances in knowledge: There are less than 300 radiologists serving a Nigerian population of about 170 million; therefore, X-ray interpretation by radiographers deserves consideration. Nigerian radiographers have potential to interpret chest X-ray in the clinical setting, and this may significantly improve radiology service delivery in this region. PMID:25966290

  15. A simple web-based tool to compare freshwater fish data collected using AFS standard methods

    USGS Publications Warehouse

    Bonar, Scott A.; Mercado-Silva, Norman; Rahr, Matt; Torrey, Yuta T.; Cate, Averill

    2016-01-01

    The American Fisheries Society (AFS) recently published Standard Methods for Sampling North American Freshwater Fishes. Enlisting the expertise of 284 scientists from 107 organizations throughout Canada, Mexico, and the United States, this text was developed to facilitate comparisons of fish data across regions or time. Here we describe a user-friendly web tool that automates among-sample comparisons in individual fish condition, population length-frequency distributions, and catch per unit effort (CPUE) data collected using AFS standard methods. Currently, the web tool (1) provides instantaneous summaries of almost 4,000 data sets of condition, length frequency, and CPUE of common freshwater fishes collected using standard gears in 43 states and provinces; (2) is easily appended with new standardized field data to update subsequent queries and summaries; (3) compares fish data from a particular water body with continent, ecoregion, and state data summaries; and (4) provides additional information about AFS standard fish sampling including benefits, ongoing validation studies, and opportunities to comment on specific methods. The web tool—programmed in a PHP-based Drupal framework—was supported by several AFS Sections, agencies, and universities and is freely available from the AFS website and fisheriesstandardsampling.org. With widespread use, the online tool could become an important resource for fisheries biologists.

  16. Computer enhancement through interpretive techniques

    NASA Technical Reports Server (NTRS)

    Foster, G.; Spaanenburg, H. A. E.; Stumpf, W. E.

    1972-01-01

    The improvement of digital computer usage through interpretation, rather than compilation, of higher-order languages was investigated by studying the efficiency of coding and execution of programs written in FORTRAN, ALGOL, PL/I and COBOL. FORTRAN was selected as the high-level language for examining compiled programs, and A Programming Language (APL) was chosen as the interpretive language. It is concluded that APL is competitive not because it and the algorithms being executed are well written, but rather because batch processing is less efficient than has been admitted. There is not a broad base of experience founded on trying different implementation strategies targeted at open competition with traditional processing methods.

  17. Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo

    2015-12-01

    The Patlak-plot and conventional methods of determining the brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow (CBF) as determined using BUR-AS, in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis tool for cerebral blood flow of ECD. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization, and the mean CBF was calculated from the mean SPECT count. Reproducibility was evaluated using the coefficient of variation and Bland-Altman plots. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and the smallest error range on the Bland-Altman plot; mean CBF obtained using the BUR-AS method thus had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
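
    The reproducibility metrics named here are straightforward to compute; a minimal sketch with invented paired measurements (the same patients measured by two operators), following the usual Bland-Altman definitions:

        import numpy as np

        op1 = np.array([42.1, 38.5, 45.0, 40.2, 39.8])   # mean CBF by operator 1 (invented)
        op2 = np.array([41.5, 39.0, 44.1, 40.8, 39.2])   # mean CBF by operator 2 (invented)

        diffs = op1 - op2
        bias = diffs.mean()                                # Bland-Altman bias
        loa = (bias - 1.96 * diffs.std(ddof=1),            # 95% limits of agreement
               bias + 1.96 * diffs.std(ddof=1))

        pair_sd = np.std([op1, op2], axis=0, ddof=1)       # per-patient SD across operators
        pair_mean = np.mean([op1, op2], axis=0)
        cv = float(np.mean(pair_sd / pair_mean) * 100)     # mean within-patient CV, %

        print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f}), CV={cv:.1f}%")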

  18. An isotope-dilution standard GC/MS/MS method for steroid hormones in water

    USGS Publications Warehouse

    Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.

    2013-01-01

    An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009–2010 ranged from 84 to 104%, with relative standard deviations of 6–36%. Detection levels estimated using ASTM International's D6091–07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.
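
    A schematic of the isotope-dilution arithmetic, assuming the common single-point form in which the analyte/IDS peak-area ratio, the known spiked IDS mass, and a response factor from a calibration standard give the analyte mass; the function and all numbers are invented for illustration:

        def isotope_dilution_conc(area_analyte, area_ids, mass_ids_ng,
                                  response_factor, sample_volume_l):
            """Concentration (ng/L) by isotope dilution: the analyte is quantified
            against its labeled analogue spiked at a known mass, so losses during
            extraction and derivatization cancel to first order."""
            mass_analyte_ng = (area_analyte / area_ids) * mass_ids_ng / response_factor
            return mass_analyte_ng / sample_volume_l

        # Toy numbers: 1 L sample spiked with 50 ng of a deuterium-labeled standard
        print(isotope_dilution_conc(area_analyte=12000, area_ids=48000,
                                    mass_ids_ng=50.0, response_factor=1.05,
                                    sample_volume_l=1.0))   # ~11.9 ng/L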

  19. Standardization of ¹³¹I: implementation of CIEMAT/NIST method at BARC, India.

    PubMed

    Kulkarni, D B; Anuradha, R; Reddy, P J; Joseph, Leena

    2011-10-01

    The CIEMAT/NIST efficiency tracing method using a ³H standard was implemented at the Radiation Safety Systems Division, Bhabha Atomic Research Centre (BARC), for the standardization of ¹³¹I radioactive solution. Measurements were also carried out using the 4π β-γ coincidence counting system maintained as a primary standard at the laboratory. The implementation of the CIEMAT/NIST method was verified by comparing the activity concentration obtained in the laboratory with the average value of the APMP intercomparison (Yunoki et al., in progress; APMP.RI(II)-K2.I-131). The results obtained by the laboratory are linked to the CIPM Key Comparison Reference Value (KCRV) through the equivalent activity value of the National Metrology Institute of Japan (NMIJ) (Yunoki et al., in progress; APMP.RI(II)-K2.I-131), which was the pilot laboratory for the intercomparison. The procedure employed to standardize ¹³¹I by the CIEMAT/NIST efficiency tracing technique is presented. The activity concentrations obtained were normalized to the activity concentration measured by NMIJ to maintain confidentiality of results until the Draft-A report is accepted by all participants. The normalized activity concentration obtained with the CIEMAT/NIST method was 0.9985 ± 0.0035 kBq/g and with the 4π β-γ coincidence counting method was 0.9909 ± 0.0046 kBq/g, as of 20 March 2009, 0 h UTC. The normalized activity concentration measured by the NMIJ was 1 ± 0.0024 kBq/g, and the normalized average of the activity concentrations of all participating laboratories was 1.004 ± 0.028 kBq/g. The results obtained in the laboratory are comparable with other international standards within the uncertainty limits. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Design and Initial Characterization of the SC-200 Proteomics Standard Mixture

    PubMed Central

    Bauman, Andrew; Higdon, Roger; Rapson, Sean; Loiue, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald

    2011-01-01

    High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn requires complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins is unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels. PMID:21250827

  1. Design and initial characterization of the SC-200 proteomics standard mixture.

    PubMed

    Bauman, Andrew; Higdon, Roger; Rapson, Sean; Loiue, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald; Kolker, Eugene

    2011-01-01

    High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn require complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins are unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels.
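
    The tier structure described above lends itself to a short check (Python). Geometric spacing of the 10 tiers across the 1,000-fold range, with both end tiers included, is our assumption; the abstract does not state the spacing rule.

        n_tiers, span = 10, 1000.0
        step = span ** (1.0 / (n_tiers - 1))         # ~2.15-fold between tiers
        tiers = [step ** k for k in range(n_tiers)]  # relative molar levels
        print([round(t, 2) for t in tiers])          # 1.0 ... 1000.0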

  2. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    PubMed

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  3. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students--a pilot study.

    PubMed

    Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary

    2009-01-13

    Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills, but published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle in imparting EKG interpretation skills to medical students. This was a reader-blinded crossover trial. Third-year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam, and two extra sessions with the teaching puzzle followed by a different exam. Two other groups (n = 6) received identical courses and exams, with the puzzle session first followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1, compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxed, and warrant further investigation on a larger scale.

  4. Potential for unreliable interpretation of EEG recorded with microelectrodes.

    PubMed

    Stacey, William C; Kellis, Spencer; Greger, Bradley; Butson, Christopher R; Patel, Paras R; Assaf, Trevor; Mihaylova, Temenuzhka; Glynn, Simon

    2013-08-01

    Recent studies in epilepsy, cognition, and brain machine interfaces have shown the utility of recording intracranial electroencephalography (iEEG) with greater spatial resolution. Many of these studies utilize microelectrodes connected to specialized amplifiers that are optimized for such recordings. We recently measured the impedances of several commercial microelectrodes and demonstrated that they will distort iEEG signals if connected to clinical EEG amplifiers commonly used in most centers. In this study we demonstrate the clinical implications of this effect and identify some of the potential difficulties in using microelectrodes. Human iEEG data were digitally filtered to simulate the signal recorded by a hybrid grid (two macroelectrodes and eight microelectrodes) connected to a standard EEG amplifier. The filtered iEEG data were read by three trained epileptologists, and high frequency oscillations (HFOs) were detected with a well-known algorithm. The filtering method was verified experimentally by recording an injected EEG signal in a saline bath with the same physical acquisition system used to generate the model. Several electrodes underwent scanning electron microscopy (SEM). Macroelectrode recordings were unaltered compared to the source iEEG signal, but microelectrodes attenuated low frequencies. The attenuated signals were difficult to interpret: all three clinicians changed their clinical scoring of slowing and seizures when presented with the same data recorded on different sized electrodes. The HFO detection algorithm was oversensitive with microelectrodes, classifying many more HFOs than when the same data were recorded with macroelectrodes. In addition, during experimental recordings the microelectrodes produced much greater noise as well as large baseline fluctuations, creating sharply contoured transients, and superimposed "false" HFOs. SEM of these microelectrodes demonstrated marked variability in exposed electrode surface area, lead
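
    A rough illustration (Python/SciPy) of the low-frequency attenuation described above: a first-order high-pass filter stands in for the microelectrode/amplifier voltage divider. The corner frequency and the test signal are invented for illustration; the study derived its filter from measured electrode impedances.

        import numpy as np
        from scipy import signal

        fs = 1000.0                            # Hz, assumed sampling rate
        t = np.arange(0.0, 10.0, 1.0 / fs)
        ieeg = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 80 * t)

        # First-order high-pass with an illustrative 10 Hz corner.
        b, a = signal.butter(1, 10.0, btype="highpass", fs=fs)
        distorted = signal.lfilter(b, a, ieeg)
        # The 2 Hz "slowing" component is attenuated while the 80 Hz
        # (HFO-band) component passes, mimicking the reported distortion.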

  5. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused: causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recently published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
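
    Alpha inflation, one of the four misuses discussed above, is easy to make concrete (Python): with multiple independent tests, the chance of at least one false positive grows rapidly.

        alpha, m = 0.05, 10          # per-test alpha, number of tests
        fwer = 1 - (1 - alpha) ** m  # family-wise error rate
        print(round(fwer, 3))        # 0.401: ~40% chance of a false positive
        print(alpha / m)             # 0.005: Bonferroni-corrected threshold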

  6. Using x-ray mammograms to assist in microwave breast image interpretation.

    PubMed

    Curtis, Charlotte; Frayne, Richard; Fear, Elise

    2012-01-01

    Current clinical breast imaging modalities include ultrasound, magnetic resonance (MR) imaging, and the ubiquitous X-ray mammography. Microwave imaging, which takes advantage of differing electromagnetic properties to obtain image contrast, shows potential as a complementary imaging technique. As an emerging modality, interpretation of 3D microwave images poses a significant challenge. MR images are often used to assist in this task, and X-ray mammograms are readily available. However, X-ray mammograms provide 2D images of a breast under compression, resulting in significant geometric distortion. This paper presents a method to estimate the 3D shape of the breast and locations of regions of interest from standard clinical mammograms. The technique was developed using MR images as the reference 3D shape with the future intention of using microwave images. Twelve breast shapes were estimated and compared to ground truth MR images, resulting in a skin surface estimation accurate to within an average Euclidean distance of 10 mm. The 3D locations of regions of interest were estimated to be within the same clinical area of the breast as corresponding regions seen on MR imaging. These results encourage investigation into the use of mammography as a source of information to assist with microwave image interpretation as well as validation of microwave imaging techniques.

  7. Comparison of EPA Method 1615 RT-qPCR Assays in Standard and Kit Format

    EPA Science Inventory

    EPA Method 1615 contains protocols for measuring enterovirus and norovirus by reverse transcription quantitative polymerase chain reaction. A commercial kit based upon these protocols was designed and compared to the method's standard approach. Reagent grade, secondary effluent, ...

  8. The Deep Double Game: Oral Interpretation To Enhance Reading Comprehension.

    ERIC Educational Resources Information Center

    Athanases, Steven Z.; Barton, Jim

    Supported by a rationale for activities in the language arts/English curriculum, this paper demonstrates methods of introducing students to current approaches to oral interpretation of literature. The paper argues that through planning, rehearsing, and reflecting on oral interpretations of literature, readers become increasingly aware of…

  9. Comparison between the triglycerides standardization of routine methods used in Japan and the chromotropic acid reference measurement procedure used by the CDC Lipid Standardization Programme

    PubMed Central

    Nakamura, Masakazu; Iso, Hiroyasu; Kitamura, Akihiko; Imano, Hironori; Noda, Hiroyuki; Kiyama, Masahiko; Sato, Shinichi; Yamagishi, Kazumasa; Nishimura, Kunihiro; Nakai, Michikazu; Vesper, Hubert W; Teramoto, Tamio; Miyamoto, Yoshihiro

    2017-01-01

    Background The US Centers for Disease Control and Prevention ensured adequate performance of the routine triglycerides methods used in Japan by a chromotropic acid reference measurement procedure used by the Centers for Disease Control and Prevention lipid standardization programme as a reference point. We examined standardized data to clarify the performance of routine triglycerides methods. Methods The two routine triglycerides methods were the fluorometric method of Kessler and Lederer and the enzymatic method. The methods were standardized using 495 Centers for Disease Control and Prevention reference pools with 98 different concentrations ranging between 0.37 and 5.15 mmol/L in 141 survey runs. The triglycerides criteria for laboratories which perform triglycerides analyses are used: accuracy, as bias ≤5% from the Centers for Disease Control and Prevention reference value, and precision, as measured by CV, ≤5%. Results The correlation of the bias of both methods to the Centers for Disease Control and Prevention reference method was: y (%bias) = 0.516 × (Centers for Disease Control and Prevention reference value) − 1.292 (n = 495, R² = 0.018). Triglycerides bias at medical decision points of 1.13, 1.69 and 2.26 mmol/L was −0.71%, −0.42% and −0.13%, respectively. For the combined precision, the equation y (CV) = −0.398 × (triglycerides value) + 1.797 (n = 495, R² = 0.081) was used. Precision was 1.35%, 1.12% and 0.90%, respectively. It was shown that triglycerides measurements at Osaka were stable for 36 years. Conclusions The epidemiologic laboratory in Japan met acceptable accuracy goals for 88.7% of all samples, and met acceptable precision goals for 97.8% of all samples measured through the Centers for Disease Control and Prevention lipid standardization programme and demonstrated stable results for an extended period of time. PMID:26680645
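
    The bias and precision figures quoted at the medical decision points follow directly from the two regression equations in the abstract; a quick check (Python):

        for tg in (1.13, 1.69, 2.26):    # decision points, mmol/L
            bias = 0.516 * tg - 1.292    # % bias vs CDC reference value
            cv = -0.398 * tg + 1.797     # combined precision (CV, %)
            print(f"TG {tg:.2f} mmol/L: bias {bias:+.2f}%, CV {cv:.2f}%")
        # -> -0.71/1.35, -0.42/1.12, -0.13/0.90, matching the reported values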

  10. Standardization of Clinical Assessment and Sample Collection Across All PERCH Study Sites

    PubMed Central

    Prosperi, Christine; Baggett, Henry C.; Brooks, W. Abdullah; Deloria Knoll, Maria; Hammitt, Laura L.; Howie, Stephen R. C.; Kotloff, Karen L.; Levine, Orin S.; Madhi, Shabir A.; Murdoch, David R.; O’Brien, Katherine L.; Thea, Donald M.; Awori, Juliet O.; Bunthi, Charatdao; DeLuca, Andrea N.; Driscoll, Amanda J.; Ebruke, Bernard E.; Goswami, Doli; Hidgon, Melissa M.; Karron, Ruth A.; Kazungu, Sidi; Kourouma, Nana; Mackenzie, Grant; Moore, David P.; Mudau, Azwifari; Mwale, Magdalene; Nahar, Kamrun; Park, Daniel E.; Piralam, Barameht; Seidenberg, Phil; Sylla, Mamadou; Feikin, Daniel R.; Scott, J. Anthony G.; O’Brien, Katherine L.; Levine, Orin S.; Knoll, Maria Deloria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Fancourt, Nicholas; Fu, Wei; Hammitt, Laura L.; Higdon, Melissa M.; Kagucia, E. Wangeci; Karron, Ruth A.; Li, Mengying; Park, Daniel E.; Prosperi, Christine; Wu, Zhenke; Zeger, Scott L.; Watson, Nora L.; Crawley, Jane; Murdoch, David R.; Brooks, W. Abdullah; Endtz, Hubert P.; Zaman, Khalequ; Goswami, Doli; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; Howie, Stephen R. C.; Ebruke, Bernard E.; Antonio, Martin; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M.A.; Mackenzie, Grant; Scott, J. Anthony G.; Awori, Juliet O.; Morpeth, Susan C.; Kamau, Alice; Kazungu, Sidi; Kotloff, Karen L.; Tapia, Milagritos D.; Sow, Samba O.; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Madhi, Shabir A.; Moore, David P.; Adrian, Peter V.; Baillie, Vicky L.; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J.; Baggett, Henry C.; Thamthitiwat, Somsak; Maloney, Susan A.; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Thea, Donald M.; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey

    2017-01-01

    Abstract Background. Variable adherence to standardized case definitions, clinical procedures, specimen collection techniques, and laboratory methods has complicated the interpretation of previous multicenter pneumonia etiology studies. To circumvent these problems, a program of clinical standardization was embedded in the Pneumonia Etiology Research for Child Health (PERCH) study. Methods. Between March 2011 and August 2013, standardized training on the PERCH case definition, clinical procedures, and collection of laboratory specimens was delivered to 331 clinical staff at 9 study sites in 7 countries (The Gambia, Kenya, Mali, South Africa, Zambia, Thailand, and Bangladesh), through 32 on-site courses and a training website. Staff competency was assessed throughout 24 months of enrollment with multiple-choice question (MCQ) examinations, a video quiz, and checklist evaluations of practical skills. Results. MCQ evaluation was confined to 158 clinical staff members who enrolled PERCH cases and controls, with scores obtained for >86% of eligible staff at each time-point. Median scores after baseline training were ≥80%, and improved by 10 percentage points with refresher training, with no significant intersite differences. Percentage agreement with the clinical trainer on the presence or absence of clinical signs on video clips was high (≥89%), with interobserver concordance being substantial to high (AC1 statistic, 0.62–0.82) for 5 of 6 signs assessed. Staff attained median scores of >90% in checklist evaluations of practical skills. Conclusions. Satisfactory clinical standardization was achieved within and across all PERCH sites, providing reassurance that any etiological or clinical differences observed across the study sites are true differences, and not attributable to differences in application of the clinical case definition, interpretation of clinical signs, or in techniques used for clinical measurements or specimen collection. PMID:28575355

  11. A Capabilities Based Critique of Gutmann's Democratic Interpretation of Equal Educational Opportunity

    ERIC Educational Resources Information Center

    DeCesare, Tony

    2016-01-01

    One of Amy Gutmann's important achievements in "Democratic Education" is her development of a "democratic interpretation of equal educational opportunity." This standard of equality demands that "all educable children learn enough to participate effectively in the democratic process." In other words, Gutmann demands…

  12. Revolutionizing volunteer interpreter services: an evaluation of an innovative medical interpreter education program.

    PubMed

    Hasbún Avalos, Oswaldo; Pennington, Kaylin; Osterberg, Lars

    2013-12-01

    In our ever-increasingly multicultural, multilingual society, medical interpreters serve an important role in the provision of care. Though it is known that using untrained interpreters leads to decreased quality of care for limited-English-proficiency patients, community health centers and internal medicine practices continue to rely on untrained interpreters because of a short supply of professionals and a lack of formalized, feasible education programs for volunteers. To develop and formally evaluate a novel medical interpreter education program that encompasses major tenets of interpretation, tailored to the needs of volunteer medical interpreters. One-armed, quasi-experimental retro-pre-post study using survey ratings and feedback correlated with assessment scores to determine educational intervention effects. Thirty-eight students: 24 Spanish, 9 Mandarin, and 5 Vietnamese. The majority had prior interpreting experience but no formal medical interpreter training. Students completed retrospective pre-test and post-test surveys measuring confidence in and perceived knowledge of key skills of interpretation. Primary outcome measures were a 10-point Likert scale for survey questions of knowledge, skills, and confidence, written and oral assessments of interpreter skills, and qualitative evidence of newfound knowledge in written reflections. Analyses showed a statistically significant (P < 0.001) change of about two points in mean self-ratings on knowledge, skills, and confidence, with large effect sizes (d > 0.8). The second half of the program was also quantitatively and qualitatively shown to be a vital learning experience, resulting in 18% more students passing the oral assessments, a 19% increase in mean scores for written assessments, and a newfound understanding of interpreter roles and ways to navigate them. This innovative program was successful in increasing volunteer interpreters' skills and knowledge of interpretation, as well as confidence

  13. Radical behaviorist interpretation: Generating and evaluating an account of consumer behavior.

    PubMed

    Foxall, G R

    1998-01-01

    This article considers an approach to the radical behaviorist interpretation of complex human social behavior. The chosen context is consumer psychology, a field currently dominated by cognitive models of purchase and consumption. The nature of operant interpretation is considered, and several levels of operant analysis of complex economic behavior in affluent marketing-oriented economies are developed. Empirical evidence for the interpretation is considered, and a case is made for the qualified use of the hypothetico-deductive method in the appraisal of operant interpretations of complex behaviors.

  14. An Artistic Approach to Fine Arts Interpretation in Higher Education

    ERIC Educational Resources Information Center

    Selan, Jurij

    2013-01-01

    Art criticism was introduced into art education to help students understand works of art. However, art interpretation methods differ according to the educational goals specified for various types of art students. The fine arts interpretation procedures established in education are usually purely theoretical and exclusively verbal, and are thus…

  15. Speeding up local correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kats, Daniel

    2014-12-28

    We present two techniques that can substantially speed up the local correlation methods. The first allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.

  16. Ecobat: An online resource to facilitate transparent, evidence-based interpretation of bat activity data.

    PubMed

    Lintott, Paul R; Davison, Sophie; van Breda, John; Kubasiewicz, Laura; Dowse, David; Daisley, Jonathan; Haddy, Emily; Mathews, Fiona

    2018-01-01

    Acoustic surveys of bats are one of the techniques most commonly used by ecological practitioners. The results are used in Ecological Impact Assessments to assess the likely impacts of future developments on species that are widely protected in law, and to monitor developments post-construction. However, there is no standardized methodology for analyzing or interpreting these data, which can make the assessment of the ecological value of a site very subjective. Comparisons of sites and projects are therefore difficult for ecologists and decision-makers, for example, when trying to identify the best location for a new road based on relative bat activity levels along alternative routes. Here, we present a new web-based, data-driven tool, Ecobat, which addresses the need for a more robust way of interpreting ecological data. Ecobat offers users an easy, standardized, and objective method for analyzing bat activity data. It allows ecological practitioners to compare bat activity data at regional and national scales and to generate a numerical indicator of the relative importance of a night's worth of bat activity. The tool is free and open-source; because the underlying algorithms are already developed, it could easily be expanded to new geographical regions and species. Data donation is required to ensure the robustness of the analyses; we use a positive feedback mechanism to encourage ecological practitioners to share data by providing in return high-quality, contextualized data analysis and graphical visualizations for direct use in ecological reports.
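
    A minimal sketch (Python) of the kind of relative indicator described above: one night's bat-pass count expressed as a percentile of a reference distribution of comparable nights. This is our simplification for illustration, not Ecobat's actual algorithm.

        import numpy as np

        def activity_percentile(night_count, reference_counts):
            # Percentile rank of one night's passes within a pooled
            # reference distribution (region- and species-matched in Ecobat).
            ref = np.asarray(reference_counts, dtype=float)
            return 100.0 * (ref < night_count).mean()

        print(activity_percentile(42, [3, 8, 15, 20, 41, 55, 90]))  # ~71.4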

  17. Nonsurgical management of hypertrophic scars: evidence-based therapies, standard practices, and emerging methods.

    PubMed

    Atiyeh, Bishara S

    2007-01-01

    Hypertrophic scars, resulting from alterations in the normal processes of cutaneous wound healing, are characterized by proliferation of dermal tissue with excessive deposition of fibroblast-derived extracellular matrix proteins, especially collagen, over long periods, and by persistent inflammation and fibrosis. Hypertrophic scars are among the most common and frustrating problems after injury. As current aesthetic surgical techniques become more standardized and results more predictable, a fine scar may be the demarcating line between acceptable and unacceptable aesthetic results. However, hypertrophic scars remain notoriously difficult to eradicate because of the high recurrence rates and the incidence of side effects associated with available treatment methods. This review explores the various treatment methods for hypertrophic scarring described in the literature including evidence-based therapies, standard practices, and emerging methods, attempting to distinguish those with clearly proven efficiency from anecdotal reports about therapies of doubtful benefits while trying to differentiate between prophylactic measures and actual treatment methods. Unfortunately, the distinction between hypertrophic scar treatments and keloid treatments is not obvious in most reports, making it difficult to assess the efficacy of hypertrophic scar treatment.

  18. Applied photo interpretation for airbrush cartography

    NASA Technical Reports Server (NTRS)

    Inge, J. L.; Bridges, P. M.

    1976-01-01

    Lunar and planetary exploration has required the development of new techniques of cartographic portrayal. Conventional photo-interpretive methods employing size, shape, shadow, tone, pattern, and texture are applied to computer-processed satellite television images. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The portrayal of tonal densities in a relief illustration is performed using a unique airbrush technique derived from hill-shading of contour maps. The control of tone and line quality is essential because the mid-gray to dark tone densities must be finalized prior to the addition of highlights to the drawing. This is done with an electric eraser until the drawing is completed. The drawing density is controlled with a reflectance-reading densitometer to meet certain density guidelines. The versatility of planetary photo-interpretive methods for airbrushed map portrayals is demonstrated by the application of these techniques to the synthesis of nonrelief data.

  19. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. The technique combines the non-standard finite difference (NSFD) method with the Chebyshev collocation method, with the fractional derivatives described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations, which are then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
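
    For the collocation ingredient named above, the standard construction of Chebyshev-Gauss-Lobatto nodes and the differentiation matrix (after Trefethen's cheb.m) is sketched below in Python; the NSFD time discretization and the Caputo-derivative treatment used in the paper are not reproduced here.

        import numpy as np

        def cheb(N):
            # Nodes x and matrix D such that D @ f(x) approximates f'(x).
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            return D - np.diag(D.sum(axis=1)), x

        D, x = cheb(8)
        print(np.allclose(D @ x**2, 2 * x))  # True: exact for low-degree polynomials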

  20. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by PA = C[Ps² + Pi² + Pφ² − 2rPsPi]^(1/2), where PA, Ps, Pi and Pφ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
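
    The formula transcribes directly into code (Python); the values below are illustrative. Note how a positive correlation r between the two track densities reduces the age error, as the text observes:

        import math

        def age_percentage_error(Ps, Pi, Pphi, r, C=1.0):
            # PA = C * [Ps^2 + Pi^2 + Pphi^2 - 2*r*Ps*Pi]^(1/2)
            return C * math.sqrt(Ps**2 + Pi**2 + Pphi**2 - 2.0 * r * Ps * Pi)

        print(age_percentage_error(5.0, 5.0, 2.0, r=0.0))  # ~7.35
        print(age_percentage_error(5.0, 5.0, 2.0, r=0.8))  # ~3.74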

  1. Interpretation of the rainbow color scale for quantitative medical imaging: perceptually linear color calibration (CSDF) versus DICOM GSDF

    NASA Astrophysics Data System (ADS)

    Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom

    2017-03-01

    Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF, and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g. PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences and where improved interpretation accuracy and improved detection of color differences may contribute to a better
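
    The expectation referred to in the first study can be probed with standard tools (Python; scikit-image ships a CIEDE2000 implementation). The sketch measures ΔE00 between consecutive entries of a rainbow-like scale built from an HSV hue sweep in display-referred sRGB; it illustrates the uneven perceptual step sizes at issue, but it is not the calibrated-display computation performed in the paper.

        import colorsys
        import numpy as np
        from skimage.color import rgb2lab, deltaE_ciede2000

        # A blue-to-red hue sweep stands in for the study's rainbow scale.
        rgb = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0)
                        for h in np.linspace(0.7, 0.0, 64)])
        lab = rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)
        dE = deltaE_ciede2000(lab[:-1], lab[1:])
        print(dE.min(), dE.max())  # wide spread: perceptually uneven steps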

  2. Radical behaviorist interpretation: Generating and evaluating an account of consumer behavior

    PubMed Central

    Foxall, Gordon R.

    1998-01-01

    This article considers an approach to the radical behaviorist interpretation of complex human social behavior. The chosen context is consumer psychology, a field currently dominated by cognitive models of purchase and consumption. The nature of operant interpretation is considered, and several levels of operant analysis of complex economic behavior in affluent marketing-oriented economies are developed. Empirical evidence for the interpretation is considered, and a case is made for the qualified use of the hypothetico-deductive method in the appraisal of operant interpretations of complex behaviors. PMID:22478315

  3. Interpretation biases in paranoia.

    PubMed

    Savulich, George; Freeman, Daniel; Shergill, Sukhi; Yiend, Jenny

    2015-01-01

    Information in the environment is frequently ambiguous in meaning. Emotional ambiguity, such as the stare of a stranger, or the scream of a child, encompasses possible good or bad emotional consequences. Those with elevated vulnerability to affective disorders tend to interpret such material more negatively than those without, a phenomenon known as "negative interpretation bias." In this study we examined the relationship between vulnerability to psychosis, measured by trait paranoia, and interpretation bias. One set of material permitted broadly positive/negative (valenced) interpretations, while another allowed more or less paranoid interpretations, allowing us to also investigate the content specificity of interpretation biases associated with paranoia. Regression analyses (n=70) revealed that trait paranoia, trait anxiety, and cognitive inflexibility predicted paranoid interpretation bias, whereas trait anxiety and cognitive inflexibility predicted negative interpretation bias. In a group comparison those with high levels of trait paranoia were negatively biased in their interpretations of ambiguous information relative to those with low trait paranoia, and this effect was most pronounced for material directly related to paranoid concerns. Together these data suggest that a negative interpretation bias occurs in those with elevated vulnerability to paranoia, and that this bias may be strongest for material matching paranoid beliefs. We conclude that content-specific biases may be important in the cause and maintenance of paranoid symptoms. Copyright © 2014. Published by Elsevier Ltd.

  4. Interpreting Mobile and Handheld Air Sensor Readings in Relation to Air Quality Standards and Health Effect Reference Values: Tackling the Challenges.

    PubMed

    Woodall, George M; Hoover, Mark D; Williams, Ronald; Benedict, Kristen; Harper, Martin; Soo, Jhy-Charm; Jarabek, Annie M; Stewart, Michael J; Brown, James S; Hulla, Janis E; Caudill, Motria; Clements, Andrea L; Kaufman, Amanda; Parker, Alison J; Keating, Martha; Balshaw, David; Garrahan, Kevin; Burton, Laureen; Batka, Sheila; Limaye, Vijay S; Hakkinen, Pertti J; Thompson, Bob

    2017-01-01

    The US Environmental Protection Agency (EPA) and other federal agencies face a number of challenges in interpreting and reconciling short-duration (seconds to minutes) readings from mobile and handheld air sensors with the longer-duration averages (hours to days) associated with the National Ambient Air Quality Standards (NAAQS) for the criteria pollutants: particulate matter (PM), ozone, carbon monoxide, lead, nitrogen oxides, and sulfur oxides. Similar issues are equally relevant to the hazardous air pollutants (HAPs), where chemical-specific health effect reference values are the best indicators of exposure limits; values which are often based on a lifetime of continuous exposure. A multi-agency, staff-level Air Sensors Health Group (ASHG) was convened in 2013. ASHG represents a multi-institutional collaboration of Federal agencies devoted to discovery and discussion of sensor technologies, interpretation of sensor data, defining the state of sensor-related science across each institution, and consultation on how sensors might effectively be used to meet a wide range of research and decision support needs. ASHG focuses on several fronts: improving the understanding of what hand-held sensor technologies may be able to deliver; communicating what hand-held sensor readings can provide to a number of audiences; the challenges of how to integrate data generated by multiple entities using new and unproven technologies; and defining best practices in communicating health-related messages to various audiences. This review summarizes the challenges, successes, and promising tools of those initial ASHG efforts and Federal agency progress on crafting similar products for use with other NAAQS pollutants and the HAPs. NOTE: The opinions expressed are those of the authors and do not necessarily represent the opinions of their Federal Agencies or the US Government. Mention of product names does not constitute endorsement.

  5. Interpreting Mobile and Handheld Air Sensor Readings in Relation to Air Quality Standards and Health Effect Reference Values: Tackling the Challenges

    PubMed Central

    Woodall, George M.; Hoover, Mark D.; Williams, Ronald; Benedict, Kristen; Harper, Martin; Soo, Jhy-Charm; Jarabek, Annie M.; Stewart, Michael J.; Brown, James S.; Hulla, Janis E.; Caudill, Motria; Clements, Andrea L.; Kaufman, Amanda; Parker, Alison J.; Keating, Martha; Balshaw, David; Garrahan, Kevin; Burton, Laureen; Batka, Sheila; Limaye, Vijay S.; Hakkinen, Pertti J.; Thompson, Bob

    2017-01-01

    The US Environmental Protection Agency (EPA) and other federal agencies face a number of challenges in interpreting and reconciling short-duration (seconds to minutes) readings from mobile and handheld air sensors with the longer-duration averages (hours to days) associated with the National Ambient Air Quality Standards (NAAQS) for the criteria pollutants: particulate matter (PM), ozone, carbon monoxide, lead, nitrogen oxides, and sulfur oxides. Similar issues are equally relevant to the hazardous air pollutants (HAPs), where chemical-specific health effect reference values are the best indicators of exposure limits; values which are often based on a lifetime of continuous exposure. A multi-agency, staff-level Air Sensors Health Group (ASHG) was convened in 2013. ASHG represents a multi-institutional collaboration of Federal agencies devoted to discovery and discussion of sensor technologies, interpretation of sensor data, defining the state of sensor-related science across each institution, and consultation on how sensors might effectively be used to meet a wide range of research and decision support needs. ASHG focuses on several fronts: improving the understanding of what hand-held sensor technologies may be able to deliver; communicating what hand-held sensor readings can provide to a number of audiences; the challenges of how to integrate data generated by multiple entities using new and unproven technologies; and defining best practices in communicating health-related messages to various audiences. This review summarizes the challenges, successes, and promising tools of those initial ASHG efforts and Federal agency progress on crafting similar products for use with other NAAQS pollutants and the HAPs. NOTE: The opinions expressed are those of the authors and do not necessarily represent the opinions of their Federal Agencies or the US Government. Mention of product names does not constitute endorsement. PMID:29093969
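
    The core mismatch described above, between second-to-minute sensor readings and multi-hour regulatory averaging windows, amounts to an aggregation problem. A small pandas sketch shows the mechanics; the windows and simulated data are illustrative, not the regulatory procedure.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        idx = pd.date_range("2017-01-01", periods=24 * 60, freq="min")
        pm25 = pd.Series(12 + rng.normal(0, 3, len(idx)), index=idx)  # 1-min readings

        hourly = pm25.resample("h").mean()     # hourly means
        daily = pm25.resample("D").mean()      # 24-h mean (PM-style window)
        rolling_8h = hourly.rolling(8).mean()  # 8-h rolling mean (ozone/CO-style)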

  6. The skin prick test – European standards

    PubMed Central

    2013-01-01

    Skin prick testing is an essential test procedure to confirm sensitization in IgE-mediated allergic disease in subjects with rhinoconjunctivitis, asthma, urticaria, anaphylaxis, atopic eczema, and food and drug allergy. This manuscript reviews the available evidence, including Medline and Embase searches, abstracts of international allergy meetings, and position papers from the world allergy literature. The recommended method of prick testing includes the appropriate use of specific allergen extracts, positive and negative controls, and interpretation of the tests after 15–20 minutes of application, with a positive result defined as a wheal ≥3 mm in diameter. A standard prick test panel for Europe for inhalants is proposed and includes hazel (Corylus avellana), alder (Alnus incana), birch (Betula alba), plane (Platanus vulgaris), cypress (Cupressus sempervirens), grass mix (Poa pratensis, Dactilis glomerata, Lolium perenne, Phleum pratense, Festuca pratensis, Helictotrichon pretense), olive (Olea europaea), mugwort (Artemisia vulgaris), ragweed (Ambrosia artemisiifolia), Alternaria alternata (tenuis), Cladosporium herbarum, Aspergillus fumigatus, Parietaria, cat, dog, Dermatophagoides pteronyssinus, Dermatophagoides farinae, and cockroach (Blatella germanica). Standardization of the skin test procedures and standard panels for different geographic locations are encouraged worldwide to permit better comparisons for diagnostic, clinical and research purposes. PMID:23369181

  7. Exploring the Philosophical Underpinnings of Research: Relating Ontology and Epistemology to the Methodology and Methods of the Scientific, Interpretive, and Critical Research Paradigms

    ERIC Educational Resources Information Center

    Scotland, James

    2012-01-01

    This paper explores the philosophical underpinnings of three major educational research paradigms: scientific, interpretive, and critical. The aim was to outline and explore the interrelationships between each paradigm's ontology, epistemology, methodology and methods. This paper reveals and then discusses some of the underlying assumptions of…

  8. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground-truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others to higher specificities, when compared to single-parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
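
    The figures of merit used above are simple functions of the 2x2 classification table; a compact reference implementation (Python, with made-up counts):

        def diagnostic_metrics(tp, fn, tn, fp):
            # Sensitivity, specificity and Youden index (J = sens + spec - 1).
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return sens, spec, sens + spec - 1.0

        # e.g. 46/50 affected joints detected, 45/50 unaffected joints cleared:
        print(diagnostic_metrics(46, 4, 45, 5))  # (0.92, 0.90, 0.82)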

  9. Differentiating between descriptive and interpretive phenomenological research approaches.

    PubMed

    Matua, Gerald Amandu; Van Der Wal, Dirk Mostert

    2015-07-01

    To provide insight into how descriptive and interpretive phenomenological research approaches can guide nurse researchers during the generation and application of knowledge. Phenomenology is a discipline that investigates people's experiences to reveal what lies 'hidden' in them. It has become a major philosophy and research method in the humanities, human sciences and arts. Phenomenology has transitioned from descriptive phenomenology, which emphasises the 'pure' description of people's experiences, to the 'interpretation' of such experiences, as in hermeneutic phenomenology. However, nurse researchers are still challenged by the epistemological and methodological tenets of these two methods. The data came from relevant online databases and research books. A review of selected peer-reviewed research and discussion papers published between January 1990 and December 2013 was conducted using CINAHL, Science Direct, PubMed and Google Scholar databases. In addition, selected textbooks that addressed phenomenology as a philosophy and as a research methodology were used. Evidence from the literature indicates that most studies following the 'descriptive approach' to research are used to illuminate poorly understood aspects of experiences. In contrast, the 'interpretive/hermeneutic approach' is used to examine contextual features of an experience in relation to other influences such as culture, gender, employment or wellbeing of people or groups experiencing the phenomenon. This allows investigators to arrive at a deeper understanding of the experience, so that caregivers can derive requisite knowledge needed to address such clients' needs. Novice nurse researchers should endeavour to understand phenomenology both as a philosophy and research method. This is vitally important because in-depth understanding of phenomenology ensures that the most appropriate method is chosen to implement a study and to generate knowledge for nursing practice. This paper adds to the current

  10. 48 CFR 9905.502-61 - Interpretation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to it as a direct cost any cost, if other costs incurred for the same purpose, in like circumstances... to all work of the educational institution. (d) This interpretation does not preclude the allocation... by the educational institution, however, must be followed consistently and the method used to...

  11. Immunogenicity of biologically-derived therapeutics: assessment and interpretation of nonclinical safety studies.

    PubMed

    Ponce, Rafael; Abad, Leslie; Amaravadi, Lakshmi; Gelzleichter, Thomas; Gore, Elizabeth; Green, James; Gupta, Shalini; Herzyk, Danuta; Hurst, Christopher; Ivens, Inge A; Kawabata, Thomas; Maier, Curtis; Mounho, Barbara; Rup, Bonita; Shankar, Gopi; Smith, Holly; Thomas, Peter; Wierda, Dan

    2009-07-01

    An evaluation of potential antibody formation to biologic therapeutics during the course of nonclinical safety studies and its impact on the toxicity profile is expected under current regulatory guidance and is accepted standard practice. However, approaches for incorporating this information in the interpretation of nonclinical safety studies are not clearly established. Described here are the immunological basis of anti-drug antibody formation to biopharmaceuticals (immunogenicity) in laboratory animals, and approaches for generating and interpreting immunogenicity data from nonclinical safety studies of biotechnology-derived therapeutics to support their progression to clinical evaluation. We submit that immunogenicity testing strategies should be adapted to the specific needs of each therapeutic development program, and data generated from such analyses should be integrated with available clinical and anatomic pathology, pharmacokinetic, and pharmacodynamic data to properly interpret nonclinical studies.

  12. Translating Radiometric Requirements for Satellite Sensors to Match International Standards.

    PubMed

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where the traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product-level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework, leading to uniform interpretation throughout the development and operation of any satellite instrument.

  13. Translating Radiometric Requirements for Satellite Sensors to Match International Standards

    PubMed Central

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where the traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product-level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework, leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
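
    The translation hinges on the propagation-of-uncertainties formula: independent requirement terms, each written as a standard uncertainty, combine in root-sum-square. A minimal sketch (Python; the component values are invented):

        import math

        def combined_standard_uncertainty(components):
            # Root-sum-square of independent 1-sigma terms; correlation
            # (covariance) terms are omitted in this sketch.
            return math.sqrt(sum(u * u for u in components))

        print(combined_standard_uncertainty([0.3, 0.4, 0.12]))  # ~0.514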

  14. ORAL INTERPRETATION.

    ERIC Educational Resources Information Center

    CAMPBELL, PAUL N.

    THE BASIC PREMISE OF THIS BOOK IS THAT LEARNING TO READ ORALLY IS OF FUNDAMENTAL IMPORTANCE TO THOSE WHO WOULD FULLY APPRECIATE OR RESPOND TO LITERATURE. BECAUSE READERS MUST INTERPRET LITERATURE ALWAYS FOR THEMSELVES AND OFTEN FOR AN AUDIENCE, THREE ASPECTS OF ORAL INTERPRETATION ARE EXPLORED--(1) THE CHOICE OF MATERIALS, WHICH REQUIRES AN…

  15. Neutral vs positive oral contrast in diagnosing acute appendicitis with contrast-enhanced CT: sensitivity, specificity, reader confidence and interpretation time

    PubMed Central

    Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F

    2011-01-01

    Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast, including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1 = poor to 4 = excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents in sensitivity or specificity (p > 0.6 and p > 0.08, respectively), in the area under the ROC curve (range 0.95–0.99), or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365

  16. Impact of acquisition and interpretation on total inter-observer variability in echocardiography: results from the quality assurance program of the STAAB cohort study.

    PubMed

    Morbach, Caroline; Gelbrich, Götz; Breunig, Margret; Tiffe, Theresa; Wagner, Martin; Heuschmann, Peter U; Störk, Stefan

    2018-02-14

    Variability related to image acquisition and interpretation is an important issue in echocardiography in clinical trials. Nevertheless, there is no broadly accepted standard method for quality assessment of echocardiography in clinical research reports. We present analyses based on the echocardiography quality-assurance program of the ongoing STAAB cohort study (characteristics and course of heart failure stages A-B and determinants of progression). In 43 healthy individuals (mean age 50 ± 14 years; 18 females), duplicate echocardiography scans were acquired and mutually interpreted by one of three trained sonographers and an EACVI-certified physician, respectively. Acquisition (AcV), interpretation (InV), and inter-observer variability (IOV; i.e., variability between the acquisition-interpretation sequences of two different observers) were determined for selected M-mode, B-mode, and Doppler parameters. We calculated Bland-Altman upper 95% limits of absolute differences, implying that 95% of measurement differences were smaller than or equal to the given value, for AcV, InV, and IOV, respectively: e.g., LV end-diastolic volume (mL): 25.0, 25.0, 27.9; septal e' velocity (cm/s): 3.03, 1.25, 3.58. Further, 90, 85, and 80% upper limits of absolute differences were determined for the respective parameters. Both acquisition and interpretation independently and sizably contributed to IOV. As such, separate assessment of AcV and InV is likely to aid in echocardiography training and quality-assurance. Our results further suggest routinely determining IOV in clinical trials as a comprehensive measure of imaging quality. The derived 95, 90, 85, and 80% upper limits of absolute differences are suggested as reproducibility targets for future studies, thus contributing to the international efforts toward standardization in quality-assurance.
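
    The reported reproducibility figures are upper percentile bounds on absolute differences between duplicate measurements; a sketch of that computation (Python, on simulated duplicate scans):

        import numpy as np

        def upper_limits_abs_diff(a, b, percentiles=(95, 90, 85, 80)):
            # Bounds such that p% of |a - b| fall at or below the value.
            d = np.abs(np.asarray(a) - np.asarray(b))
            return {p: float(np.percentile(d, p)) for p in percentiles}

        rng = np.random.default_rng(1)
        obs1 = rng.normal(120, 15, 43)      # e.g. LVEDV (mL), observer 1
        obs2 = obs1 + rng.normal(0, 8, 43)  # observer 2 re-acquisition
        print(upper_limits_abs_diff(obs1, obs2))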

  17. Suspected pulmonary embolism and lung scan interpretation: Trial of a Bayesian reporting method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, D.M.; Philbrick, J.T.; Schoonover, F.W.

    The objective of this research is to determine whether a Bayesian method of lung scan (LS) reporting could influence the management of patients with suspected pulmonary embolism (PE). The study is performed by the following: (1) a descriptive study of the diagnostic process for suspected PE using the new reporting method; (2) a non-experimental evaluation of the reporting method comparing prospective patients and historical controls; and (3) a survey of physicians' reactions to the reporting innovation. Of 148 consecutive patients enrolled at the time of LS, 129 were completely evaluated; 75 patients scanned the previous year served as controls. The LS results of patients with suspected PE were reported as posttest probabilities of PE calculated from physician-provided pretest probabilities and the likelihood ratios for PE of LS interpretations. Despite the Bayesian intervention, the confirmation or exclusion of PE was often based on inconclusive evidence. PE was considered by the clinician to be ruled out in 98% of patients with posttest probabilities less than 25% and ruled in for 95% of patients with posttest probabilities greater than 75%. Prospective patients and historical controls were similar in terms of tests ordered after the LS (e.g., pulmonary angiography). Patients with intermediate or indeterminate lung scan results had the highest proportion of subsequent testing. Most physicians (80%) found the reporting innovation to be helpful, either because it confirmed clinical judgement (94 cases) or because it led to additional testing (7 cases). Despite the probabilistic guidance provided by the study, the diagnosis of PE was often neither clearly established nor excluded. While physicians appreciated the innovation and were not confused by the terminology, their clinical decision making was not clearly enhanced.
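
    The reporting method rests on the odds form of Bayes' theorem: posttest odds equal pretest odds times the likelihood ratio of the scan reading. A worked sketch (Python; the likelihood ratio is an illustrative value for a high-probability scan, not a number from the study):

        def posttest_probability(pretest_p, likelihood_ratio):
            # Convert probability to odds, apply the LR, convert back.
            pre_odds = pretest_p / (1.0 - pretest_p)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # 30% pretest probability, scan read as high-probability (LR ~ 18):
        print(round(posttest_probability(0.30, 18.0), 2))  # 0.89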

  18. Comparison between the triglycerides standardization of routine methods used in Japan and the chromotropic acid reference measurement procedure used by the CDC Lipid Standardization Programme.

    PubMed

    Nakamura, Masakazu; Iso, Hiroyasu; Kitamura, Akihiko; Imano, Hironori; Noda, Hiroyuki; Kiyama, Masahiko; Sato, Shinichi; Yamagishi, Kazumasa; Nishimura, Kunihiro; Nakai, Michikazu; Vesper, Hubert W; Teramoto, Tamio; Miyamoto, Yoshihiro

    2016-11-01

    Background: The US Centers for Disease Control and Prevention (CDC) lipid standardization programme uses a chromotropic acid reference measurement procedure as the reference point to ensure adequate performance of the routine triglycerides methods used in Japan. We examined standardized data to clarify the performance of these routine triglycerides methods. Methods: The two routine triglycerides methods were the fluorometric method of Kessler and Lederer and the enzymatic method. The methods were standardized using 495 CDC reference pools with 98 different concentrations, ranging between 0.37 and 5.15 mmol/L, in 141 survey runs. The triglycerides criteria for laboratories performing triglycerides analyses were applied: accuracy, as bias ≤ 5% from the CDC reference value, and precision, as measured by CV, ≤ 5%. Results: The bias of both methods relative to the CDC reference method followed y (%bias) = 0.516 × (CDC reference value) − 1.292 (n = 495, R² = 0.018). Triglycerides bias at the medical decision points of 1.13, 1.69 and 2.26 mmol/L was −0.71%, −0.42% and −0.13%, respectively. For the combined precision, the equation y (CV) = −0.398 × (triglycerides value) + 1.797 (n = 495, R² = 0.081) was used; precision was 1.35%, 1.12% and 0.90%, respectively. Triglycerides measurements at Osaka were shown to be stable for 36 years. Conclusions: The epidemiologic laboratory in Japan met acceptable accuracy goals for 88.7% and acceptable precision goals for 97.8% of all samples measured through the CDC lipid standardization programme, and demonstrated stable results over an extended period of time.
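
    The decision-point values reported above follow directly from the two quoted regression fits; a short sketch reproducing them:

```python
def bias_percent(tg):   # %bias vs. the CDC reference, from the reported fit
    return 0.516 * tg - 1.292

def cv_percent(tg):     # combined precision (CV, %), from the reported fit
    return -0.398 * tg + 1.797

for tg in (1.13, 1.69, 2.26):   # medical decision points, mmol/L
    print(f"TG {tg} mmol/L: bias {bias_percent(tg):+.2f}%, CV {cv_percent(tg):.2f}%")
```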

  19. Men's interpretations of graphical information in a videotape decision aid

    PubMed Central

    Pylar, Jan; Wills, Celia E.; Lillie, Janet; Rovner, David R.; Kelly‐Blake, Karen; Holmes‐Rovner, Margaret

    2007-01-01

    Abstract Objective: To examine men's interpretations of graphical information types viewed in a high-quality, previously tested videotape decision aid (DA). Setting, participants, design: A community-dwelling sample of men >50 years of age (N = 188), balanced by education (college/non-college) and race (Black/White), were interviewed immediately after viewing a videotape DA. A descriptive study design was used to examine men's interpretations of a representative sample of the types of graphs shown in the benign prostatic hyperplasia videotape DA. Main variables studied: Men provided their interpretations of graph information presented in three formats that varied in complexity: pictograph, line graph and horizontal bar graph. Audiotape transcripts of men's responses were coded for meaning- and content-related interpretation statements. Results: Men provided both meaning- and content-focused interpretations of the graphs. Accuracy of interpretation was lower than hypothesized on the basis of the literature review (85.4% for pictograph, 65.7% for line graph, 47.8% for horizontal bar graph). Accuracy for pictograph and line graphs was associated with education level (3.94, P = 0.047, and 7.55, P = 0.006, respectively). Accuracy was not associated with men's reported liking of the graphs (2.00, P = 0.441). Conclusion: While men generally liked the DA, accuracy of graph interpretation was associated with format complexity and education level. Graphs are often recommended to improve comprehension of information in DAs. However, additional evaluation is needed in experimental and naturalistic observational settings to develop best-practice standards for data representation. PMID:17524011

  20. Inversion methods for interpretation of asteroid lightcurves

    NASA Technical Reports Server (NTRS)

    Kaasalainen, Mikko; Lamberg, L.; Lumme, K.

    1992-01-01

    We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.

  1. Evaluation of a standard test method for screening fuels in soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, S.S.; Schabron, J.F.

    1996-12-31

    A new screening method for fuel contamination in soils was recently developed as American Society for Testing and Materials (ASTM) Method D-5831-95, Standard Test Method for Screening Fuels in Soils. This method uses low-toxicity chemicals, can be used to screen organic-rich soils, and is fast, easy, and inexpensive to perform. Fuels containing aromatic compounds, such as diesel fuel and gasoline, as well as other aromatic-containing hydrocarbon materials, such as motor oil, crude oil, and coal oil, can be determined. The screening method was evaluated by conducting a collaborative study in which a sand and an organic soil spiked with various concentrations of diesel fuel were tested. Data from the collaborative study were used to determine the reproducibility (between participants) and repeatability (within participants) precision of the method for screening the test materials. The data also provide information on the performance of portable field equipment (patent pending) versus laboratory equipment for performing the screening method, and a comparison of diesel concentration values determined using the screening method versus a laboratory method.

  2. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    NASA Astrophysics Data System (ADS)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  3. Validation of standard method EN ISO 11290-part 2 for the enumeration of Listeria monocytogenes in food.

    PubMed

    Rollier, Patricia; Lombard, Bertrand; Guillier, Laurent; François, Danièle; Romero, Karol; Pierru, Sylvie; Bouhier, Laurence; Gnanou Besse, Nathalie

    2018-05-01

    The reference methods for the detection and enumeration of L. monocytogenes in food (Standards EN ISO 11290-1 and -2) have been validated by inter-laboratory studies in the framework of Mandate M381 from the European Commission to CEN. In this paper, the inter-laboratory studies conducted in 2013 on five matrices (cold-smoked salmon, powdered milk infant food formula, vegetables, environment, and cheese) to validate Standard EN ISO 11290-2 are reported. According to the results obtained, the method of the revised Standard EN ISO 11290-2 can be considered a good method for the enumeration of L. monocytogenes in foods and the food processing environment, in particular for the matrices included in the study. Values of repeatability and reproducibility standard deviations can be considered satisfactory for this type of method with a confirmation stage, since most were below 0.3 log10, including at low levels close to the regulatory limit of 100 CFU/g. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance and choosing better parameters to fit a better model. We propose a novel approach, termed "cross-validation and cross-testing," that improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
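
    For context, a minimal sketch of the standard scheme that the proposed approach improves upon: cross-validation for parameter selection on a training portion, then a single evaluation on a held-out test set (scikit-learn assumed; the cross-validation-and-cross-testing procedure itself re-uses test data in a way not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Carve off a test set, tune hyperparameters by 5-fold cross-validation
# on the remainder, then estimate generalization once on the test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X_tr, y_tr)
print("tuned parameters:", search.best_params_)
print("held-out accuracy:", search.score(X_te, y_te))
```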

  5. Next Generation Science Standards: A National Mixed-Methods Study on Teacher Readiness

    ERIC Educational Resources Information Center

    Haag, Susan; Megowan, Colleen

    2015-01-01

    Next Generation Science Standards (NGSS) science and engineering practices are ways of eliciting the reasoning and applying foundational ideas in science. As research has revealed barriers to states and schools adopting the NGSS, this mixed-methods study attempts to identify characteristics of professional development (PD) that will support NGSS…

  6. Efficacy of very fast simulated annealing global optimization method for interpretation of self-potential anomaly by different forward formulation over 2D inclined sheet type structure

    NASA Astrophysics Data System (ADS)

    Biswas, A.; Sharma, S. P.

    2012-12-01

    Self-potential (SP) anomaly is an important geophysical technique that measures the electrical potential due to natural current sources in the Earth's subsurface. An inclined sheet is a very familiar model associated with mineralization, fault planes, groundwater flow and many other geological features that exhibit self-potential anomalies. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures for various purposes. The mathematical expression for the forward response over a two-dimensional dipping sheet-type structure can be written in three different ways, using five variables in each case, and the complexity of the inversion differs among the three forward approaches. In the present study, interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed, yielding new insight into the uncertainty and equivalence of model parameters. Interpretation of the measured data yields the location of the causative body, depth to the top, extension, dip and quality of the causative body. A comparative evaluation of the three forward approaches is performed to assess the efficacy of each in resolving possible ambiguity. Even though each forward formulation yields the same forward response, optimizing different sets of variables through different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and one of the three is found to be best suited for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the
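
    For illustration, a generic very fast simulated annealing (VFSA) skeleton of the kind used in such inversions; the cooling schedule, move-rule constants, and the toy objective are assumptions, not the authors' exact implementation or SP forward model:

```python
import numpy as np

def vfsa(objective, lo, hi, t0=1.0, decay=0.99, iters=5000, seed=1):
    """Minimize `objective` over box bounds [lo, hi] with a VFSA-style
    temperature-dependent Cauchy-like move and exponential cooling."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = objective(x)
    best, fbest = x.copy(), fx
    for k in range(iters):
        T = t0 * decay**k
        u = rng.uniform(size=x.size)
        step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T)**np.abs(2.0 * u - 1.0) - 1.0)
        cand = np.clip(x + step * (hi - lo), lo, hi)
        fc = objective(cand)
        # Metropolis acceptance of uphill moves, cooled by T
        if fc < fx or rng.uniform() < np.exp(-(fc - fx) / max(T, 1e-300)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# e.g., a 5-parameter toy misfit standing in for the SP forward model:
# best, err = vfsa(lambda p: float(np.sum((p - 0.3)**2)), [0]*5, [1]*5)
```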

  7. Instructional Note: The Interpretive-Paraphrase Workshop

    ERIC Educational Resources Information Center

    Houp, G. Wesley

    2004-01-01

    This article describes the interpretive paraphrase class workshop method, which emphasizes dialogue as a centerpiece of the composing process and provides students with opportunities to re-envision their compositions based on the alternative readings of their peers. A major goal of this writing workshop is to create and sustain student-talk about…

  8. Standards for Cataloging Nonprint Materials. Fourth Edition. An Interpretation and Practical Application.

    ERIC Educational Resources Information Center

    Tillin, Alma M.; Quinly, William J.

    Standards established by the Association for Educational Communications and Technology (AECT) set forth basic cataloging rules that apply to all types of nonprint materials. Included are all elements needed to identify, describe, and retrieve an article. Cataloging rules are applied to 18 specific media formats including audiorecording, films,…

  9. Standardization of shape memory alloy test methods toward certification of aerospace applications

    NASA Astrophysics Data System (ADS)

    Hartl, D. J.; Mabe, J. H.; Benafan, O.; Coda, A.; Conduit, B.; Padan, R.; Van Doren, B.

    2015-08-01

    The response of shape memory alloy (SMA) components employed as actuators has enabled a number of adaptable aero-structural solutions. However, there are currently no industry or government-accepted standardized test methods for SMA materials when used as actuators and their transition to commercialization and production has been hindered. This brief fast track communication introduces to the community a recently initiated collaborative and pre-competitive SMA specification and standardization effort that is expected to deliver the first ever regulatory agency-accepted material specification and test standards for SMA as employed as actuators for commercial and military aviation applications. In the first phase of this effort, described herein, the team is working to review past efforts and deliver a set of agreed-upon properties to be included in future material certification specifications as well as the associated experiments needed to obtain them in a consistent manner. Essential for the success of this project is the participation and input from a number of organizations and individuals, including engineers and designers working in materials and processing development, application design, SMA component fabrication, and testing at the material, component, and system level. Going forward, strong consensus among this diverse body of participants and the SMA research community at large is needed to advance standardization concepts for universal adoption by the greater aerospace community and especially regulatory bodies. It is expected that the development and release of public standards will be done in collaboration with an established standards development organization.

  10. Considerations When Working with Interpreters.

    ERIC Educational Resources Information Center

    Hwa-Froelich, Deborah A.; Westby, Carol E.

    2003-01-01

    This article describes the current training and certification procedures in place for linguistic interpreters, the continuum of interpreter roles, and how interpreters' perspectives may influence the interpretive interaction. The specific skills needed for interpreting in either health care or educational settings are identified. A table compares…

  11. Method for matching customer and manufacturer positions for metal product parameters standardization

    NASA Astrophysics Data System (ADS)

    Polyakova, Marina; Rubin, Gennadij; Danilova, Yulija

    2018-04-01

    Decision making is the main stage in regulating relations between customer and manufacturer when the requirements of standards are drafted. The positions of the negotiating sides must be matched in order to reach consensus. To take into account the differences between customer and manufacturer assessments of the object undergoing standardization, special methods of analysis are needed. It is proposed to establish relationships between product properties and product functions using functional-target analysis, a type of functional analysis whose special feature is the joint consideration of the research object's functions and properties. Using the example of a hexagonal-head screw, it is shown that links can be established between its functions and properties. This approach allows a quantitative assessment of how close the positions of customer and manufacturer are when decisions are made during the establishment of standard norms.

  12. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  13. Interpreting the Right to an Education as a Norm Referenced Adequacy Standard

    ERIC Educational Resources Information Center

    Pijanowski, John

    2016-01-01

    Our current conceptions of educational adequacy emerged out of an era dominated by equity-based school resource litigation. During that time of transitioning between successful litigation strategies, legal opinions provided clues as to how future courts might view a norm-referenced approach to establishing an adequacy standard--an approach that…

  14. A novel knowledge-based system for interpreting complex engineering drawings: theory, representation, and implementation.

    PubMed

    Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie

    2009-08-01

    We present a novel knowledge-based system to automatically convert real-life engineering drawings to content-oriented high-level descriptions. The proposed method divides the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus-Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.

  15. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, accurate type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
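
    A minimal sketch of the first (cluster bootstrap) method, assuming the Python lifelines package as the Cox fitter; the authors' own implementation and the two-step variant are not reproduced here:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col, B=500, seed=0, **fit_kw):
    """Resample whole clusters with replacement, refit Cox's model on
    each replicate, and return the SD of the coefficient estimates."""
    rng = np.random.default_rng(seed)
    ids = df[cluster_col].unique()
    coefs = []
    for _ in range(B):
        drawn = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([df[df[cluster_col] == i] for i in drawn],
                         ignore_index=True)
        fit = CoxPHFitter().fit(boot.drop(columns=cluster_col), **fit_kw)
        coefs.append(fit.params_.values)
    return np.std(coefs, axis=0, ddof=1)

# usage (hypothetical columns):
# se = cluster_bootstrap_se(df, "clinic", duration_col="time", event_col="event")
```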

  16. Interpretive criteria of antimicrobial disk susceptibility tests with flomoxef.

    PubMed

    Grimm, H

    1991-01-01

    320 recently isolated pathogens, 20 strains from each of 16 species, were investigated using Mueller-Hinton agar and both DIN and NCCLS standards. The geometric mean of the agar dilution MICs of flomoxef was 0.44 mg/l for Staphylococcus aureus, 0.05 mg/l (Klebsiella oxytoca) to 12.6 mg/l (Enterobacter spp.) for Enterobacteriaceae, 33.1 mg/l for Acinetobacter anitratus, 64 mg/l for Enterococcus faecalis, and more than 256 mg/l for Pseudomonas aeruginosa. For disk susceptibility testing of flomoxef, a 30 micrograms disk loading and the following interpretation of inhibition zones using the DIN method were recommended: resistant, up to 22 mm (corresponding to MICs of 8 mg/l or more); moderately susceptible, 23 to 29 mm (corresponding to MICs from 1 to 4 mg/l); and susceptible, 30 mm or more (corresponding to MICs of 0.5 mg/l or less). The respective values for the NCCLS method, using the American high MIC breakpoints, are: resistant, up to 14 mm (corresponding to MICs of 32 mg/l or more); moderately susceptible, 15 to 17 mm (corresponding to MICs of 16 mg/l); and susceptible, 18 mm or more (corresponding to MICs of 8 mg/l or less).

  17. Variability of bioaccessibility results using seventeen different methods on a standard reference material, NIST 2710.

    PubMed

    Koch, Iris; Reimer, Kenneth J; Bakker, Martine I; Basta, Nicholas T; Cave, Mark R; Denys, Sébastien; Dodd, Matt; Hale, Beverly A; Irwin, Rob; Lowney, Yvette W; Moore, Margo M; Paquin, Viviane; Rasmussen, Pat E; Repaso-Subang, Theresa; Stephenson, Gladys L; Siciliano, Steven D; Wragg, Joanna; Zagury, Gerald J

    2013-01-01

    Bioaccessibility is a measurement of a substance's solubility in the human gastro-intestinal system, and is often used in the risk assessment of soils. The present study was designed to determine the variability among laboratories using different methods to measure the bioaccessibility of 24 inorganic contaminants in one standardized soil sample, the standard reference material NIST 2710. Fourteen laboratories used a total of 17 bioaccessibility extraction methods. The variability between methods was assessed by calculating the reproducibility relative standard deviations (RSDs), where reproducibility is the sum of within-laboratory and between-laboratory variability. Whereas within-laboratory repeatability was usually better than (<) 15% for most elements, reproducibility RSDs were much higher, indicating more variability, although for many elements they were comparable to typical uncertainties (e.g., 30% in commercial laboratories). For five trace elements of interest, reproducibility RSDs were: arsenic (As), 22-44%; cadmium (Cd), 11-41%; Cu, 15-30%; lead (Pb), 45-83%; and Zn, 18-56%. Only one method variable, pH, was found to correlate significantly with bioaccessibility for aluminum (Al), Cd, copper (Cu), manganese (Mn), Pb and zinc (Zn) but other method variables could not be examined systematically because of the study design. When bioaccessibility results were directly compared with bioavailability results for As (swine and mouse) and Pb (swine), four methods returned results within uncertainty ranges for both elements: two that were defined as simpler (gastric phase only, limited chemicals) and two were more complex (gastric + intestinal phases, with a mixture of chemicals).
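
    Reproducibility here follows the usual decomposition into within- and between-laboratory variance components, s_R^2 = s_r^2 + s_L^2. A simplified sketch (balanced-design approximation; the input arrays are hypothetical):

```python
import numpy as np

def reproducibility_rsd(results_by_lab):
    """Reproducibility RSD (%) from replicate results grouped by lab:
    s_R^2 = s_r^2 (pooled within-lab) + s_L^2 (between-lab, moment estimate)."""
    groups = [np.asarray(g, float) for g in results_by_lab]
    n = np.array([g.size for g in groups])
    means = np.array([g.mean() for g in groups])
    s_r2 = sum((g.size - 1) * g.var(ddof=1) for g in groups) / np.sum(n - 1)
    s_L2 = max(means.var(ddof=1) - s_r2 / n.mean(), 0.0)  # truncate at zero
    grand_mean = np.concatenate(groups).mean()
    return 100.0 * np.sqrt(s_r2 + s_L2) / grand_mean

# e.g., three labs with triplicate Pb bioaccessibility results (%):
print(reproducibility_rsd([[20, 22, 21], [30, 28, 29], [24, 27, 26]]))
```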

  18. Targeting Lexicon in Interpreting.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Shakir, Abdullah

    1994-01-01

    Studies student interpreters in the Master's Translation Program at Yarmouk University in Jordan. Analyzes the difficulties of these students, particularly regarding lexical competence, when interpreting from Arabic to English, emphasizing the need to teach lexicon all through interpreting programs. (HB)

  19. Clinicians' Obligations to Use Qualified Medical Interpreters When Caring for Patients with Limited English Proficiency.

    PubMed

    Basu, Gaurab; Costa, Vonessa Phillips; Jain, Priyank

    2017-03-01

    Access to language services is a required and foundational component of care for patients with limited English proficiency (LEP). National standards for medical interpreting set by the US Department of Health and Human Services and by the National Council on Interpreting in Health Care establish the role of qualified medical interpreters in the provision of care in the United States. In the vignette, the attending physician infringes upon the patient's right to appropriate language services and renders unethical care. Clinicians are obliged to create systems and a culture that ensure quality care for patients with LEP. © 2017 American Medical Association. All Rights Reserved.

  20. Some comments on the substituted judgement standard.

    PubMed

    Egonsson, Dan

    2010-02-01

    On a traditional interpretation of the substituted judgement standard (SJS), a person who makes treatment decisions on behalf of a non-competent patient (e.g. concerning euthanasia) ought to decide as the patient would have decided had she been competent. I propose an alternative interpretation of SJS in which the surrogate is required to infer what the patient actually thought about these end-of-life decisions. In clarifying SJS it is also important to differentiate the patient's consent and preference. If SJS is part of an autonomy ideal of the sort found in Kantian ethics, consent seems more important than preference; from a utilitarian perspective, a preference-based reading of SJS seems natural. I argue that the justification of SJS within a utilitarian framework boils down to the question of whether a non-competent patient can be said to have any surviving preferences. If we give a virtue-ethical justification of SJS, the relative importance of consent and preferences depends on which virtue one stresses: respect or care. I argue that SJS might be an independent normative method for extending the patient's autonomy, from both a Kantian and a virtue-ethical perspective.

  1. Standardized Methods to Generate Mock (Spiked) Clinical Specimens by Spiking Blood or Plasma with Cultured Pathogens

    PubMed Central

    Dong, Ming; Fisher, Carolyn; Añez, Germán; Rios, Maria; Nakhasi, Hira L.; Hobson, J. Peyton; Beanan, Maureen; Hockman, Donna; Grigorenko, Elena; Duncan, Robert

    2016-01-01

    Aims To demonstrate standardized methods for spiking pathogens into human matrices for evaluation and comparison among diagnostic platforms. Methods and Results This study presents detailed methods for spiking bacteria or protozoan parasites into whole blood and virus into plasma. Proper methods must start with a documented, reproducible pathogen source followed by steps that include standardized culture, preparation of cryopreserved aliquots, quantification of the aliquots by molecular methods, production of sufficient numbers of individual specimens and testing of the platform with multiple mock specimens. Results are presented following the described procedures that showed acceptable reproducibility comparing in-house real-time PCR assays to a commercially available multiplex molecular assay. Conclusions A step by step procedure has been described that can be followed by assay developers who are targeting low prevalence pathogens. Significance and Impact of Study The development of diagnostic platforms for detection of low prevalence pathogens such as biothreat or emerging agents is challenged by the lack of clinical specimens for performance evaluation. This deficit can be overcome using mock clinical specimens made by spiking cultured pathogens into human matrices. To facilitate evaluation and comparison among platforms, standardized methods must be followed in the preparation and application of spiked specimens. PMID:26835651

  2. Overcoming language barriers in health care: costs and benefits of interpreter services.

    PubMed

    Jacobs, Elizabeth A; Shepard, Donald S; Suaya, Jose A; Stone, Esta-Lee

    2004-05-01

    We assessed the impact of interpreter services on the cost and the utilization of health care services among patients with limited English proficiency. We measured the change in delivery and cost of care provided to patients enrolled in a health maintenance organization before and after interpreter services were implemented. Compared with English-speaking patients, patients who used the interpreter services received significantly more recommended preventive services, made more office visits, and had more prescriptions written and filled. The estimated cost of providing interpreter services was $279 per person per year. Providing interpreter services is a financially viable method for enhancing delivery of health care to patients with limited English proficiency.

  3. Comparisons of shear-wave slowness in the Santa Clara Valley, California using blind interpretations of data from invasive and noninvasive methods

    USGS Publications Warehouse

    Boore, D.M.; Asten, M.W.

    2008-01-01

    Many groups contributed to a blind interpretation exercise for the determination of shear-wave slowness beneath the Santa Clara Valley. The methods included invasive methods in deep boreholes as well as noninvasive methods using active and passive sources, at six sites within the valley (with most investigations being conducted at a pair of closely spaced sites near the center of the valley). Although significant variability exists between the models, the slownesses from the various methods are similar enough that linear site amplifications estimated in several ways are generally within 20% of one another. The methods were able to derive slownesses that increase systematically with distance from the valley edge, corresponding to a tendency for the sites to be underlain by finer-grained materials away from the valley edge. This variation is in agreement with measurements made in the boreholes at the sites.

  4. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  5. Parallel text rendering by a PostScript interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kritskii, S.P.; Zastavnoi, B.A.

    1994-11-01

    The most radical way to increase the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter when rendering text. The proposed method is based on decomposing the outlines of letters into horizontal strips covering equal areas. The sub-outlines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters so that each part may be filled independently of the others; it uses dedicated estimates to find a correct partition of the corresponding outlines into horizontal strips, and a method for deriving such estimates is presented. Two processing approaches are described. In the first, one processor decomposes the outlines and distributes the strips to the remaining processors, which are responsible for rendering. In the second, the decomposition process is itself distributed among the processors in the network.
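
    The core geometric step is dividing a filled outline into horizontal strips of equal area. An illustrative sketch on polygon outlines (real PostScript glyphs are Bezier paths; the shapely package is assumed for the area computations):

```python
from shapely.geometry import Polygon, box

def equal_area_strips(outline, k):
    """Cut the filled polygon `outline` into k horizontal strips whose
    areas are equal, locating each cut height by bisection."""
    poly = Polygon(outline)
    xmin, ymin, xmax, ymax = poly.bounds
    target = poly.area / k
    strips, lo = [], ymin
    for _ in range(k - 1):
        a, b = lo, ymax
        for _ in range(40):                      # bisection on cut height
            mid = 0.5 * (a + b)
            area = poly.intersection(box(xmin, lo, xmax, mid)).area
            a, b = (mid, b) if area < target else (a, mid)
        strips.append(poly.intersection(box(xmin, lo, xmax, 0.5 * (a + b))))
        lo = 0.5 * (a + b)
    strips.append(poly.intersection(box(xmin, lo, xmax, ymax)))
    return strips

# e.g., a 4 x 6 rectangle into 3 strips of area 8 each:
# [s.area for s in equal_area_strips([(0, 0), (4, 0), (4, 6), (0, 6)], 3)]
```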

  6. Alternative Internal Standard Calibration of an Indirect Enzymatic Analytical Method for 2-MCPD Fatty Acid Esters.

    PubMed

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Fukazawa, Toru; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2017-06-01

    An indirect enzymatic analysis method for the quantification of fatty acid esters of 2-/3-monochloro-1,2-propanediol (2/3-MCPD) and glycidol was developed, using a deuterated internal standard of each free-form component. Because 2-MCPD-d5 is difficult to obtain, a statistical method for calibration and quantification substitutes 3-MCPD-d5, the internal standard used in the calculation of 3-MCPD. Using data from a previous collaborative study, the current method for determining 2-MCPD content with 2-MCPD-d5 was compared to three alternative new methods using 3-MCPD-d5. The regression analysis showed that the alternative methods were unbiased compared to the current method. The relative standard deviation among the testing laboratories (RSD_R) was ≤ 15% and the Horwitz ratio was ≤ 1.0, a satisfactory value.

  7. 'QUANTITATIVE TOXICOLOGIC PATHOLOGY - METHODS AND INTERPRETATION' SESSION AT THE JOINT MEETING OF THE SOCIETY OF TOXICOLOGIC PATHOLOGISTS AND THE INTERNATIONAL FEDERATION OF SOCIETIES OF TOXICOLOGIC PATHOLOGISTS

    EPA Science Inventory

    Report of the 'Quantitative Toxicologic Pathology - Methods and Interpretation' session at the Joint meeting of Society of Toxicologic Pathologists and the International Federation of Societies of Toxicologic Pathologists, Orlando, Florida, USA, June 24-28, 2001. Douglas C. Wolf,...

  8. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    ERIC Educational Resources Information Center

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  9. Quantitative determination and validation of octreotide acetate using 1H-NMR spectroscopy with an internal standard method.

    PubMed

    Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang

    2018-01-01

    Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable-proton peaks (proton exchange), in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including relaxation delay time, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, as provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
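
    Internal-standard qNMR assays rest on the standard relation between integrated peak areas and analyte purity (the general qNMR formula, not a detail specific to this paper):

\[
P_{\mathrm{a}} \;=\; \frac{I_{\mathrm{a}}}{I_{\mathrm{std}}}\cdot\frac{N_{\mathrm{std}}}{N_{\mathrm{a}}}\cdot\frac{M_{\mathrm{a}}}{M_{\mathrm{std}}}\cdot\frac{m_{\mathrm{std}}}{m_{\mathrm{a}}}\cdot P_{\mathrm{std}},
\]

    where I is the integrated peak area, N the number of nuclei contributing to the peak, M the molar mass, m the weighed mass, and P the purity, with subscripts a for the analyte and std for the internal standard.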

  10. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or to measurement at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with that of the current validation methods (methods 1 and 2). The validation accuracy of the three methods was assessed in human participants (N = 120; age 45 ± 15.3 years, mean ± SD). An oscillometric automated monitor, Omron HEM-762, was used as the tested device. Compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting higher reproducibility of validation. The SD1 by method 2 significantly correlated with the participant's BP (P = 0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P = 0.0044) smaller inter-participant SD of device error (SD2), suggesting higher inter-participant consistency of validation. Among the methods for validating the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and inter-participant consistency, can be proposed as the most appropriate.

  11. Application of a spectrum standardization method for carbon analysis in coal using laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou

    2014-01-01

    Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
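
    The compensation step described above can be sketched as a linear correction of the atomic carbon signal with the C2 and CN band intensities. In the sketch below, the band areas are synthetic and the least-squares fitting of the coefficients is an assumption for illustration; the paper's actual standardization model (including the conversion to a standard plasma state) is more involved:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 24                                      # calibration coal samples
c_cert = rng.uniform(55, 85, n)             # certified carbon content, wt%
I_C  = 0.80 * c_cert + rng.normal(0, 2, n)  # atomic C line area (synthetic)
I_C2 = 0.10 * c_cert + rng.normal(0, 1, n)  # C2 Swan band area (synthetic)
I_CN = 0.05 * c_cert + rng.normal(0, 1, n)  # CN violet band area (synthetic)

# Fit I_comp = I_C + a*I_C2 + b*I_CN so the compensated signal tracks
# the certified carbon content; a, b by ordinary least squares.
A = np.column_stack([I_C2, I_CN])
a, b = np.linalg.lstsq(A, c_cert - I_C, rcond=None)[0]
I_comp = I_C + a * I_C2 + b * I_CN
print(f"a={a:.2f}, b={b:.2f}, corr={np.corrcoef(I_comp, c_cert)[0, 1]:.3f}")
```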

  12. Implementing the Next Generation Science Standards: How Instructional Coaches Mediate Standards-Based Educational Reform to Teacher Practice

    NASA Astrophysics Data System (ADS)

    Laxton, Katherine E.

    This dissertation takes a close look at how district-level instructional coaches support teachers in learning to shift their instructional practice in relation to the Next Generation Science Standards. It aims to address how re-structuring professional development into a job-embedded coaching model supports individual teacher learning of new reform-related instructional practice. Implementing the NGSS is a problem of supporting professional learning in a way that will enable educators to make fundamental changes to their teaching practice, yet there are few examples in the literature that explain how coaches interact with teachers to improve teacher learning of reform-related instructional practice, or that specifically address how supporting teachers with extended professional learning opportunities, aligned with high-leverage practices, tools and curriculum, shapes how teachers make sense of new standards-based educational reforms and what manifests in classroom instruction. This dissertation proposes four conceptual categories of sense-making that influence how instructional coaches interpret the nature of reform, their roles in instructional improvement, and how to work with teachers. It is important to understand how coaches interpret reform because their interpretations may have unintended consequences, such as privileging certain views about instruction or establishing priorities for how to work with teachers. We found that re-structuring professional development into a job-embedded coaching model supported teachers in learning new reform-related instructional practice; however, individual teacher interpretations of reform emerged and seemed to be linked to how instructional coaches supported teacher learning.

  13. Interpretation of Radiological Images: Towards a Framework of Knowledge and Skills

    ERIC Educational Resources Information Center

    van der Gijp, A.; van der Schaaf, M. F.; van der Schaaf, I. C.; Huige, J. C. B. M.; Ravesloot, C. J.; van Schaik, J. P. J.; ten Cate, Th. J.

    2014-01-01

    The knowledge and skills that are required for radiological image interpretation are not well documented, even though medical imaging is gaining importance. This study aims to develop a comprehensive framework of knowledge and skills, required for two-dimensional and multiplanar image interpretation in radiology. A mixed-method study approach was…

  14. Mechanistic interpretation of nondestructive pavement testing deflections

    NASA Astrophysics Data System (ADS)

    Hoffman, M. S.; Thompson, M. R.

    1981-06-01

    A method for the back-calculation of material properties in flexible pavements, based on the interpretation of surface deflection measurements, is proposed. ILLI-PAVE, a stress-dependent finite element pavement model, was used to generate data for developing algorithms and nomographs for deflection basin interpretation. Twenty-four different flexible pavement sections throughout the State of Illinois were studied. Deflections were measured and loading-mode effects on pavement response were investigated. The factors controlling the pavement response to different loading modes are identified and explained, and correlations between different devices are developed. The back-calculated parameters derived from the proposed evaluation procedure can be used as inputs for asphalt concrete overlay design.

  15. Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.

    PubMed

    Robinson, P J

    1997-11-01

    The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.

  16. Visualizing the Sample Standard Deviation

    ERIC Educational Resources Information Center

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
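
    The interpretation rests on an algebraic identity relating the sample variance to pairwise differences:

\[
s^{2} \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\bigl(x_{i}-\bar{x}\bigr)^{2} \;=\; 2\cdot\frac{1}{\binom{n}{2}}\sum_{i<j}\Bigl(\frac{x_{i}-x_{j}}{2}\Bigr)^{2},
\]

    so the sample SD is the square root of twice the mean squared pairwise half-deviation, exactly as the abstract states.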

  17. Interpretations of cigarette advertisement warning labels by Philadelphia Puerto Ricans.

    PubMed

    Morris, Nancy; Gilpin, Dawn R; Lenos, Melissa; Hobbs, Renee

    2011-09-01

    This study examined Philadelphia Puerto Ricans' interpretations of the Surgeon General's warnings that appear on cigarette packaging and in advertisements. In-home family focus groups in which participants were asked to comment on magazine cigarette advertisements showed a great variety of interpretations of the legally mandated warning labels. These findings (a) corroborate and add to research in public health and communications regarding the possibility of wide variations in message interpretations and (b) support the call for public health messages to be carefully tested for effectiveness among different social groups. The article's focus on Puerto Ricans addresses the problem of misleading conclusions that can arise from aggregating all Latino subpopulations into one group. The use of a naturalistic setting to examine interpretations of messages about smoking departs from the experimental methods typically used for such research and provides new evidence that even a seemingly straightforward message can be interpreted in multiple ways. Understanding and addressing differences in message interpretation can guide public health campaigns aimed at reducing health disparities. Copyright © Taylor & Francis Group, LLC

  18. The continued value of disk diffusion for assessing antimicrobial susceptibility in clinical laboratories: report from the Clinical and Laboratory Standards Institute Methods Development and Standardization Working Group.

    PubMed

    Humphries, Romney M; Kircher, Susan; Ferrell, Andrea; Krause, Kevin M; Malherbe, Rianna; Hsiung, Andre; Burnham, C A

    2018-05-09

    Expedited pathways to antimicrobial agent approval by the United States Food and Drug Administration (FDA) have led to increased delays between drug approval and the availability of FDA-cleared antimicrobial susceptibility testing (AST) devices. Antimicrobial disks for use with disk diffusion testing are among the first AST devices available to clinical laboratories. However, many laboratories are reluctant to implement a disk diffusion method for a variety of reasons, including dwindling proficiency with this method, interruptions to laboratory workflow, uncertainty surrounding the quality and reliability of a disk diffusion test, and perceived need to report an MIC to clinicians. This mini-review provides a report from the Clinical and Laboratory Standards Institute Working Group on Methods Development and Standardization on the current standards and clinical utility of disk diffusion testing. Copyright © 2018 American Society for Microbiology.

  19. 42 CFR 37.52 - Method of obtaining definitive interpretations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... other diseases must be demonstrated by those physicians who desire to be B Readers by taking and passing... specified by NIOSH. Each physician who desires to take the digital version of the examination will be provided a complete set of the current NIOSH-approved standard reference digital radiographs. Physicians...

  20. Limited English proficient Hmong- and Spanish-speaking patients' perceptions of the quality of interpreter services.

    PubMed

    Lor, Maichou; Xiong, Phia; Schwei, Rebecca J; Bowers, Barbara J; Jacobs, Elizabeth A

    2016-02-01

    Language barriers are a large and growing problem for patients in the US and around the world. Interpreter services are a standard solution for addressing language barriers and most research has focused on utilization of interpreter services and their effect on health outcomes for patients who do not speak the same language as their healthcare providers including nurses. However, there is limited research on patients' perceptions of these interpreter services. To examine Hmong- and Spanish-speaking patients' perceptions of interpreter service quality in the context of receiving cancer preventive services. Twenty limited English proficient Hmong (n=10) and Spanish-speaking participants (n=10) ranging in age from 33 to 75 years were interviewed by two bilingual researchers in a Midwestern state. Interviews were audio taped, transcribed verbatim, and translated into English. Analysis was done using conventional content analysis. The two groups shared perceptions about the quality of interpreter services as variable along three dimensions. Specifically, both groups evaluated quality of interpreters based on the interpreters' ability to provide: (a) literal interpretation, (b) cultural interpretation, and (c) emotional interpretation during the health care encounter. The groups differed, however, on how they described the consequences of poor interpretation quality. Hmong participants described how poor quality interpretation could lead to: (a) poor interpersonal relationships among patients, providers, and interpreters, (b) inability of patients to follow through with treatment plans, and (c) emotional distress for patients. Our study highlights the fact that patients are discerning consumers of interpreter services; and could be effective partners in efforts to reform and enhance interpreter services. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Interpreting a CMS excess in ℓℓjj + missing transverse momentum with the golden cascade of the minimal supersymmetric standard model

    NASA Astrophysics Data System (ADS)

    Allanach, Ben; Kvellestad, Anders; Raklev, Are

    2015-06-01

    The CMS experiment recently reported an excess consistent with an invariant mass edge in opposite-sign same-flavor leptons, when produced in conjunction with at least two jets and missing transverse momentum. We provide an interpretation of the edge in terms of (anti)squark pair production followed by the "golden cascade" decay for one of the squarks, q̃ → χ̃₂⁰ q → ℓ̃ ℓ q → χ̃₁⁰ q ℓ ℓ, in the minimal supersymmetric standard model. A simplified model involving binos, winos, an on-shell slepton, and the first two generations of squarks fits the event rate and the invariant mass edge. We check consistency with a recent ATLAS search in a similar region, finding that much of the good-fit parameter space is still allowed at the 95% confidence level (C.L.). However, a combination of other LHC searches, notably two-lepton stop pair searches and jets plus missing transverse momentum searches, rules out all of the remaining parameter space at the 95% C.L.

  2. Impact of HIPAA’s Minimum Necessary Standard on Genomic Data Sharing

    PubMed Central

    Evans, Barbara J.; Jarvik, Gail P.

    2017-01-01

    Purpose This article provides a brief introduction to the HIPAA Privacy Rule’s minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. Methods This research used the Thomson Reuters Westlaw™ database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data-sharing. We considered relevant example cases of genomic data-sharing needs. Results In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized—whether for research, public health, or clinical interpretation and medical practice support—affects how the minimum necessary standard applies and its overall impact on data access and use. Conclusion There is no clear regulatory guidance on how to apply HIPAA’s minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy-makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows. PMID:28914268

  3. Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance

    NASA Astrophysics Data System (ADS)

    Burke, James J.; Snyder, Harry L.

    1981-12-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels × 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096 × 4096 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEIs) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, to the nearest 0.1 unit, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect). Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  4. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  5. Absence of Nosocomial Transmission of Imported Lassa Fever during Use of Standard Barrier Nursing Methods.

    PubMed

    Grahn, Anna; Bråve, Andreas; Tolfvenstam, Thomas; Studahl, Marie

    2018-06-01

    Nosocomial transmission of Lassa virus (LASV) is reported to be low when care for the index patient includes proper barrier nursing methods. We investigated whether asymptomatic LASV infection occurred in healthcare workers who used standard barrier nursing methods during the first 15 days of caring for a patient with Lassa fever in Sweden. Of 76 persons who were defined as having been potentially exposed to LASV, 53 provided blood samples for detection of LASV IgG. These persons also responded to a detailed questionnaire to evaluate exposure to different body fluids from the index patient. LASV-specific IgG was not detected in any of the 53 persons. Five of 53 persons had not been using proper barrier nursing methods. Our results strengthen the argument for a low risk of secondary transmission of LASV in humans when standard barrier nursing methods are used and the patient has only mild symptoms.

  6. Combining natural background levels (NBLs) assessment with indicator kriging analysis to improve groundwater quality data interpretation and management.

    PubMed

    Ducci, Daniela; de Melo, M Teresa Condesso; Preziosi, Elisabetta; Sellerino, Mariangela; Parrone, Daniele; Ribeiro, Luis

    2016-11-01

    The natural background level (NBL) concept is revisited and combined with the indicator kriging method to analyze the spatial distribution of groundwater quality within a groundwater body (GWB). The aim is to provide a methodology to easily identify areas with the same probability of exceeding a given threshold (which may be a groundwater quality criterion, a standard, or a recommended limit for selected properties and constituents). Three case studies with different hydrogeological settings and located in two countries (Portugal and Italy) are used to derive NBLs using the preselection method and to validate the proposed methodology, illustrating its main advantages over conventional statistical water quality analysis. Indicator kriging analysis was used to create probability maps of three potential groundwater contaminants. The results clearly indicate the areas within a groundwater body that are potentially contaminated, because the concentrations exceed the drinking water standards or even the local NBL and cannot be justified by a geogenic origin. The combined methodology facilitates the management of groundwater quality because it allows for the spatial interpretation of NBL values. Copyright © 2016 Elsevier B.V. All rights reserved.
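
    As an illustration of the indicator kriging step described above, the following minimal Python sketch turns point measurements into a threshold-exceedance probability map. It assumes the third-party pykrige package; the coordinates, concentrations, and threshold are hypothetical and not taken from the study.

        # Indicator kriging sketch: probability of exceeding a quality threshold.
        import numpy as np
        from pykrige.ok import OrdinaryKriging

        # Hypothetical monitoring-point data: coordinates and nitrate (mg/L).
        x = np.array([0.5, 1.2, 2.8, 3.9, 4.4, 1.7])
        y = np.array([0.3, 2.1, 1.4, 3.3, 0.9, 3.8])
        conc = np.array([12.0, 55.0, 38.0, 61.0, 8.0, 47.0])

        threshold = 50.0  # e.g., a drinking-water standard or a local NBL
        indicator = (conc > threshold).astype(float)  # 1 = exceeds, 0 = does not

        # Ordinary kriging of the 0/1 indicator gives, at each grid node, an
        # estimate interpretable as the probability of exceeding the threshold.
        ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical")
        prob, var = ok.execute("grid", np.linspace(0, 5, 50), np.linspace(0, 4, 40))
        prob = np.clip(prob, 0.0, 1.0)  # clip numerical over/undershoot
        print(prob.shape)  # (40, 50) probability map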

  7. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE PAGES

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...

    2016-07-05

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  8. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  9. Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method

    NASA Astrophysics Data System (ADS)

    Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.

    2004-03-01

    A non-destructive method for the analysis of Zircaloy samples of irregular shape and size has been developed using the recently standardized k0-based internal mono-standard instrumental neutron activation analysis (INAA). Samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWRs) and pressurized heavy water reactors (PHWRs), respectively, have been analyzed. Samples weighing in the range of a few tens of grams were irradiated in the thermal column of the Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes the in situ relative detection efficiency, obtained from the γ-rays of selected activation products in the sample, to overcome γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of the concentration of the internal mono-standard. Concentrations were also determined in a smaller Zircaloy-4 sample by irradiation in the core position of the reactor to validate the present methodology. The results were compared with literature specifications and were found to be satisfactory. Sensitivities and detection limits have been evaluated for the elements analyzed.

  10. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
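
    The report itself does not reproduce its recommended formula, but as an illustration of the kind of conservative calculation at issue, the following sketch uses a commonly quoted simple estimate for on-source/off-source counting experiments (plain propagation of Poisson errors); it should not be read as the specific formula the report advocates.

        # Conservative significance of an excess via simple error propagation.
        import math

        def excess_significance(n_on, n_off, alpha):
            """alpha = on-source exposure / off-source exposure."""
            excess = n_on - alpha * n_off
            sigma = math.sqrt(n_on + alpha**2 * n_off)  # propagated Poisson errors
            return excess / sigma

        print(excess_significance(n_on=120, n_off=500, alpha=0.2))  # ~1.7 sigma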

  11. A systematic approach to the interpretation of preoperative staging MRI for rectal cancer.

    PubMed

    Taylor, Fiona G M; Swift, Robert I; Blomqvist, Lennart; Brown, Gina

    2008-12-01

    The purpose of this article is to provide an aid to the systematic evaluation of MRI in staging rectal cancer. MRI has been shown to be an effective tool for the accurate preoperative staging of rectal cancer. In the Magnetic Resonance Imaging and Rectal Cancer European Equivalence Study (MERCURY), imaging workshops were held for participating radiologists to ensure standardization of scan acquisition techniques and interpretation of the images. In this article, we report how the information was obtained and give examples of the images and how they are interpreted, with the aim of providing a systematic approach to the reporting process.

  12. Standardization of fixation, processing and staining methods for the central nervous system of vertebrates.

    PubMed

    Aldana Marcos, H J; Ferrari, C C; Benitez, I; Affanni, J M

    1996-12-01

    This paper reports the standardization of methods used for processing and embedding various vertebrate brains of different sizes in paraffin. Other technical details developed for avoiding frequent difficulties arising during laboratory routine are also reported. Some modifications of the Nissl and Klüver-Barrera staining methods are proposed. These modifications include: 1) a Nissl staining solution that acts rapidly and efficiently and allows easier differentiation; 2) the use of an inexpensive microwave oven for the Klüver-Barrera stain. These procedures have the advantage of permitting Nissl and Klüver-Barrera staining of nervous tissue in about five and fifteen minutes, respectively. The proposed procedures have been tested on brains obtained from fish, amphibians, reptiles and mammals of different body sizes. They are the result of our long experience in preparing slides for comparative studies. Serial sections of excellent quality were regularly obtained in all the specimens studied. These standardized methods, being simple and quick, are recommended for routine use in neurobiological laboratories.

  13. Managing Highway Maintenance: Standards for Maintenance Work, Part 3, Unit 8, Level 2.

    ERIC Educational Resources Information Center

    Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.

    Part of the series "Managing Highway Maintenance," the unit explains various uses of maintenance standards and how standards should be interpreted and communicated to foremen and crew leaders. Several examples are given of the decisions made when applying the standards to routine work. The preceding units on standards (parts 1 and 2)…

  14. Standard Setting Methods for Pass/Fail Decisions on High-Stakes Objective Structured Clinical Examinations: A Validity Study.

    PubMed

    Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W

    2015-01-01

    CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of the Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group, and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are: reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19); and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence among the methods on the defined criteria. Angoff and Mean-1.5SD demonstrated the least convergent validity evidence. The three cluster variants showed substantial convergent validity with the borderline methods. Although there was a high level of convergent validity for the Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method shows the highest convergent validity evidence for OSCEs with the other standard setting methods used in the present study.
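
    Of the methods compared above, the relative rules are simple enough to state in a few lines. The sketch below computes the Mean-1SD and Mean-1.5SD cutoffs for a set of hypothetical station scores; it illustrates why different rules can yield different pass/fail decisions for the same examination.

        # Two relative standard-setting rules applied to hypothetical scores.
        import numpy as np

        scores = np.array([61.0, 72.5, 55.0, 80.2, 67.4, 58.9, 74.1, 69.3])
        mean, sd = scores.mean(), scores.std(ddof=1)

        for name, cutoff in [("Mean-1SD", mean - sd), ("Mean-1.5SD", mean - 1.5 * sd)]:
            fails = int((scores < cutoff).sum())
            print(f"{name}: cutoff = {cutoff:.1f}, failures = {fails}/{scores.size}")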

  15. Theory Interpretations in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan; Butler, Ricky W. (Technical Monitor)

    2001-01-01

    The purpose of this task was to provide a mechanism for theory interpretations in a prototype verification system (PVS) so that it is possible to demonstrate the consistency of a theory by exhibiting an interpretation that validates the axioms. The mechanization makes it possible to show that one collection of theories is correctly interpreted by another collection of theories under a user-specified interpretation for the uninterpreted types and constants. A theory instance is generated and imported, while the axiom instances are generated as proof obligations to ensure that the interpretation is valid. Interpretations can be used to show that an implementation is a correct refinement of a specification, that an axiomatically defined specification is consistent, or that an axiomatically defined specification captures its intended models. In addition, the theory parameter mechanism has been extended with a notion of theory as parameter so that a theory instance can be given as an actual parameter to an imported theory. Theory interpretations can thus be used to refine an abstract specification or to demonstrate the consistency of an axiomatic theory. In this report we describe the mechanism in detail. This extension is a part of PVS version 3.0, which will be publicly released in mid-2001.

  16. Interpreting wireline measurements in coal beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, D.J.

    1991-06-01

    When logging coal seams with wireline tools, the interpretation method needed to evaluate the coals is different from that used for conventional oil and gas reservoirs. Wireline logs identify coals easily. For an evaluation, the contribution of each coal component on the raw measurements must be considered. This paper will discuss how each log measurement is affected by each component. The components of a coal will be identified as the mineral matter, macerals, moisture content, rank, gas content, and cleat porosity. The measurements illustrated are from the resistivity, litho-density, neutron, sonic, dielectric, and geochemical tools. Once the coal component effects have been determined, an interpretation of the logs can be made. This paper will illustrate how to use these corrected logs in a coal evaluation.

  17. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, referred to as the master and slave instruments. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a consequence, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more convenient in practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
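
    The abstract does not spell out the constrained optimization, so the following sketch is only a stand-in for the general idea: fit the slave coefficients to a few slave-measured spectra while penalizing departure from the master model's coefficient profile. The ridge-style penalty and all data here are hypothetical, not the authors' algorithm.

        # Coefficient-transfer sketch: stay close to the master model's profile.
        import numpy as np

        rng = np.random.default_rng(0)
        n_wl = 100                                   # number of wavelengths
        b_master = np.sin(np.linspace(0, 3, n_wl))   # given master coefficients

        # A few slave-measured spectra with known reference values.
        X_s = rng.normal(size=(5, n_wl))
        y_s = X_s @ (1.1 * b_master) + rng.normal(scale=0.01, size=5)

        lam = 1.0  # strength of the "stay close to the master" penalty
        # b_slave = argmin ||y_s - X_s b||^2 + lam * ||b - b_master||^2
        A = X_s.T @ X_s + lam * np.eye(n_wl)
        b_slave = np.linalg.solve(A, X_s.T @ y_s + lam * b_master)
        print(np.corrcoef(b_slave, b_master)[0, 1])  # profiles remain similar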

  18. Effect of Picture Archiving and Communication System Image Manipulation on the Agreement of Chest Radiograph Interpretation in the Neonatal Intensive Care Unit.

    PubMed

    Castro, Denise A; Naqvi, Asad Ahmed; Vandenkerkhof, Elizabeth; Flavin, Michael P; Manson, David; Soboleski, Donald

    2016-01-01

    Variability in image interpretation has been attributed to differences in the interpreters' knowledge base, experience level, and access to the clinical scenario. Picture archiving and communication systems (PACS) allow the user to manipulate images while developing their impression of the radiograph. The aim of this study was to determine the agreement of chest radiograph (CXR) impressions among radiologists and neonatologists and to determine the effect of image manipulation with PACS on report impression. This prospective cohort study included 60 patients from the Neonatal Intensive Care Unit undergoing CXRs. Three radiologists and three neonatologists reviewed two consecutive frontal CXRs of each patient. Each physician was allowed to manipulate the images as needed to provide a decision of "improved," "unchanged," or "disease progression" lung disease for each patient. Each physician then repeated the process; this time, they were not allowed to manipulate the images individually, but an independent radiologist preset the image brightness and contrast to best optimize the CXR appearance. Percent agreement and opposing reporting views were calculated between all six physicians for each of the two methods (allowing and not allowing image manipulation). One hundred percent agreement in image impression between all six observers was seen in only 5% of cases when image manipulation was allowed, and in 13% of cases when there was no manipulation of the images. Agreement in CXR interpretation is poor; the ability to manipulate the images on PACS results in a decrease in agreement in the interpretation of these studies. New methods to standardize image appearance and allow improved comparison with previous studies should be sought to improve interpretation consistency among clinicians and advance patient care.

  19. Investigating patients' experiences: methodological usefulness of interpretive interactionism.

    PubMed

    Tower, Marion; Rowe, Jennifer; Wallis, Marianne

    2012-01-01

    To demonstrate the methodological usefulness of interpretive interactionism by applying it to a study investigating the healthcare experiences of women affected by domestic violence. Understanding patients' experiences of health, illness and health care is important to nurses. For many years, biomedical discourse has prevailed in healthcare language and research, and has influenced healthcare responses. Contemporary nursing scholarship can be developed by engaging with new ways of understanding therapeutic interactions with patients. Research that uses qualitative methods of inquiry is an important paradigm for nurses who seek to explain, understand or describe experiences rather than predict outcomes. Interpretive interactionism is an interpretive form of inquiry for conducting studies of social or personal problems that have healthcare policy implications. It puts the patient at the centre of the research process and makes visible the experiences of patients as they interact with the healthcare and social systems that surround them. Interpretive interactionism draws on concepts of symbolic interactionism, phenomenology and hermeneutics. It is a patient-centred methodology that provides an alternative way of understanding patients' experiences, and it has methodological utility because it can contribute to policy and practice development by drawing on the perspectives and experiences of patients, who are central to the research process. It also allows research findings to be situated in and linked to healthcare policy, professional ethics and organisational approaches to care.

  20. Detection of climate signal in dendrochronological data analysis: a comparison of tree-ring standardization methods

    NASA Astrophysics Data System (ADS)

    Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.

    2004-12-01

    Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. Because of their specific nature, the
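
    The contrast between index calculation by division and by subtraction is easy to make concrete. In the sketch below (synthetic ring widths and a hypothetical fitted growth curve), the same series is standardized both ways; division keeps indices dimensionless but can inflate them where the expected growth is small, which is the bias discussed above.

        # Tree-ring indices by division vs. subtraction of a fitted curve.
        import numpy as np

        age = np.arange(1, 201)                  # cambial age, years
        curve = 2.5 * np.exp(-age / 60.0) + 0.4  # hypothetical fitted curve (mm)
        noise = np.random.default_rng(1).normal(0.0, 0.2, age.size)
        rw = curve * np.exp(noise)               # synthetic ring widths

        idx_div = rw / curve   # ratio indices: dimensionless, mean near 1
        idx_sub = rw - curve   # difference indices: variance must be stabilized first
        print(idx_div.var(), idx_sub.var())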

  1. 24 CFR Appendix II to Subpart C of... - Development of Standards; Calculation Methods

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...; Calculation Methods I. Background Information Concerning the Standards (a) Thermal Radiation: (1) Introduction... and structures in the event of fire. The resulting fireball emits thermal radiation, which is absorbed... radiation being emitted. The radiation can cause severe burns, injuries, and even death to exposed persons...

  2. ARI Image Interpretation Research: 1970-1980

    DTIC Science & Technology

    1980-07-01

    Fragments recovered from the report include table-of-contents entries ("Index marks on data base stereo pair"; "Identification learning curves for three methods used by interpreters of high…") and text noting that team consensus feedback can increase target identification proficiency, and that training in target identification can be provided with a minimum of instructor participation using operational imagery as the basic instructional material, although it may be impractical in operational units (but not in the school).

  3. A Retrospective Analysis Comparing the New Standardized Letter of Recommendation in Dermatology with the Classic Narrative Letter of Recommendation

    PubMed Central

    Mosser, Joy; Lee, Grace; Pootrakul, Llana; Harfmann, Katya; Fabbro, Stephanie; Faith, Esteban Fernandez; Carr, David; Plotner, Alisha; Zirwas, Matthew; Kaffenberger, Benjamin H.

    2016-01-01

    Background: In an effort to avoid numerous problems associated with narrative letters of recommendation, a dermatology standardized letter of recommendation was utilized in the 2014–2015 resident application cycle. Objective: A comparison of the standardized letter of recommendation and narrative letters of recommendation from a single institution and application cycle to determine if the standardized letter of recommendation met its original goals of efficiency, applicant stratification, and validity. Methods: Eight dermatologists assessed all standardized letters of recommendation/narrative letters of recommendation pairs received during the 2014–2015 application cycle. Five readers repeated the analysis two months later. Each letter of recommendation was evaluated based on a seven question survey. Letter analysis and survey completion for each letter was timed. Results: Compared to the narrative letters of recommendation, the standardized letter of recommendation is easier to interpret (p<0.0001), has less exaggeration of applicants’ positive traits (p<0.001), and has higher inter-rater and intrarater reliability for determining applicant traits including personality, reliability, work-ethic, and global score. Standardized letters of recommendation are also faster to interpret (p<0.0001) and provide more information about the writer’s background or writer-applicant relationship than narrative letters of recommendation (p<0.001). Limitations: This study was completed at a single institution. Conclusions: The standardized letter of recommendation appears to be meeting its initial goals of 1) efficiency, 2) applicant stratification, and 3) validity. (J Clin Aesthet Dermatol. 2016;9(9):36–2.) PMID:27878060

  4. An Interpretive Master Plan at the Museum of Fine Arts, Houston

    ERIC Educational Resources Information Center

    Schneider, Beth B.

    2008-01-01

    This case study presents the methods the staff at the Museum of Fine Arts, Houston used to develop and implement an interpretive master plan from 1996-2000. The process can be a model for other museums. Looking back a decade after the plan was developed provides insights into the role of interpretive plans as statements of goals, expressions of…

  5. Effect of Radiologists’ Diagnostic Work-up Volume on Interpretive Performance

    PubMed Central

    Anderson, Melissa L.; Smith, Robert A.; Carney, Patricia A.; Miglioretti, Diana L.; Monsees, Barbara S.; Sickles, Edward A.; Taplin, Stephen H.; Geller, Berta M.; Yankaskas, Bonnie C.; Onega, Tracy L.

    2014-01-01

    Purpose To examine radiologists’ screening performance in relation to the number of diagnostic work-ups performed after abnormal findings are discovered at screening mammography by the same radiologist or by different radiologists. Materials and Methods In an institutional review board–approved HIPAA-compliant study, the authors linked 651 671 screening mammograms interpreted from 2002 to 2006 by 96 radiologists in the Breast Cancer Surveillance Consortium to cancer registries (standard of reference) to evaluate the performance of screening mammography (sensitivity, false-positive rate [FPR], and cancer detection rate [CDR]). Logistic regression was used to assess the association between the volume of recalled screening mammograms (“own” mammograms, where the radiologist who interpreted the diagnostic image was the same radiologist who had interpreted the screening image, and “any” mammograms, where the radiologist who interpreted the diagnostic image may or may not have been the radiologist who interpreted the screening image) and screening performance and whether the association between total annual volume and performance differed according to the volume of diagnostic work-up. Results Annually, 38% of radiologists performed the diagnostic work-up for 25 or fewer of their own recalled screening mammograms, 24% performed the work-up for 0–50, and 39% performed the work-up for more than 50. For the work-up of recalled screening mammograms from any radiologist, 24% of radiologists performed the work-up for 0–50 mammograms, 32% performed the work-up for 51–125, and 44% performed the work-up for more than 125. With increasing numbers of radiologist work-ups for their own recalled mammograms, the sensitivity (P = .039), FPR (P = .004), and CDR (P < .001) of screening mammography increased, yielding a stepped increase in women recalled per cancer detected from 17.4 for 25 or

  6. Standardized methods for Grand Canyon fisheries research 2015

    USGS Publications Warehouse

    Persons, William R.; Ward, David L.; Avery, Luke A.

    2013-01-01

    This document presents protocols and guidelines to persons sampling fishes in the Grand Canyon, to help ensure consistency in fish handling, fish tagging, and data collection among different projects and organizations. Most such research and monitoring projects are conducted under the general umbrella of the Glen Canyon Dam Adaptive Management Program and include studies by the U.S. Geological Survey (USGS), U.S. Fish and Wildlife Service (FWS), National Park Service (NPS), the Arizona Game and Fish Department (AGFD), various universities, and private contractors. This document is intended to provide guidance to fieldworkers regarding protocols that may vary from year to year depending on specific projects and objectives. We also provide herein documentation of standard methods used in the Grand Canyon that can be cited in scientific publications, as well as a summary of changes in protocols since the document was first created in 2002.

  7. Lessons from Sociocultural Writing Research for Implementing the Common Core State Standards

    ERIC Educational Resources Information Center

    Woodard, Rebecca; Kline, Sonia

    2016-01-01

    The Common Core State Standards advocate more writing than previous standards; however, in taking a college and career readiness perspective, the Standards neglect to emphasize the role of context and culture in learning to write. We argue that sociocultural perspectives that pay attention to these factors offer insights into how to interpret and…

  8. Automated Interpretation of LIBS Spectra using a Fuzzy Logic Inference Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeremy J. Hatch; Timothy R. McJunkin; Cynthia Hanson

    2012-02-01

    Automated interpretation of laser-induced breakdown spectroscopy (LIBS) data is necessary due to the plethora of spectra that can be acquired in a relatively short time. However, traditional chemometric and artificial neural network methods that have been employed are not always transparent to a skilled user. A fuzzy logic approach to data interpretation has now been adapted to LIBS spectral interpretation. A fuzzy logic inference engine (FLIE) was used to differentiate between various copper containing and stainless steel alloys as well as unknowns. Results using FLIE indicate a high degree of confidence in spectral assignment.
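
    As a toy illustration of the general approach (not the FLIE implementation), the sketch below uses triangular membership functions over two hypothetical emission-line intensities and a two-rule base to score candidate alloy classes.

        # Generic fuzzy-inference toy example for spectral classification.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def classify(cu_line, fe_line):
            cu_high = tri(cu_line, 0.4, 1.0, 1.6)
            fe_high = tri(fe_line, 0.4, 1.0, 1.6)
            # Rule 1: Cu line high AND Fe line low -> copper-containing alloy
            # Rule 2: Fe line high -> stainless steel
            return {"copper alloy": min(cu_high, 1.0 - fe_high),
                    "stainless steel": fe_high}

        print(classify(cu_line=1.1, fe_line=0.3))  # copper alloy scores highest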

  9. Competency in ECG Interpretation Among Medical Students

    PubMed Central

    Kopeć, Grzegorz; Magoń, Wojciech; Hołda, Mateusz; Podolec, Piotr

    2015-01-01

    Background Electrocardiogram (ECG) is commonly used in the diagnosis of heart diseases, including many life-threatening disorders. We aimed to assess skills in ECG interpretation among Polish medical students and to analyze the determinants of these skills. Material/Methods Undergraduates from all Polish medical schools were asked to complete a web-based survey containing 18 ECG strips. Questions concerned primary ECG parameters (rate, rhythm, and axis), emergencies, and common ECG abnormalities. Analysis was restricted to students in their clinical years (4th–6th); students in their preclinical years (1st–3rd) were used as controls. Results We enrolled 536 medical students (females: n=299; 55.8%), aged 19 to 31 (23±1.6) years, from all Polish medical schools. Most (72%) were in their clinical years. The overall rate of good responses was higher in students in their clinical years than in those in years 1st–3rd (66% vs. 56%; p<0.0001). Competency in ECG interpretation was higher in students who reported ECG self-learning (69% vs. 62%; p<0.0001), but no difference was found between students who attended or did not attend regular ECG classes (66% vs. 66%; p=0.99). On multivariable analysis (p<0.0001), being in clinical years (OR: 2.45 [1.35–4.46]) and self-learning (OR: 2.44 [1.46–4.08]) determined competency in ECG interpretation. Conclusions Polish medical students in their clinical years have a good level of competency in interpreting the primary ECG parameters, but their ability to recognize ECG signs of emergencies and common heart abnormalities is low. ECG interpretation skills are determined by self-education but not by attendance at regular ECG classes. Our results indicate qualitative and quantitative deficiencies in the teaching of ECG interpretation at medical schools. PMID:26541993

  10. The association between ruminative thinking and negative interpretation bias in social anxiety.

    PubMed

    Badra, Marcel; Schulze, Lars; Becker, Eni S; Vrijsen, Janna Nonja; Renneberg, Babette; Zetsche, Ulrike

    2017-09-01

    Cognitive models propose that both negative interpretations of ambiguous social situations and ruminative thoughts about social events contribute to the maintenance of social anxiety disorder. It has further been postulated that ruminative thoughts fuel biased negative interpretations; however, evidence is scarce. The present study used a multi-method approach to assess ruminative processing following a social interaction (post-event processing by self-report questionnaire and social rumination by experience sampling method) and negative interpretation bias (via two separate tasks) in a student sample (n = 51) screened for high (HSA) and low social anxiety (LSA). Results support the hypothesis that group differences in negative interpretations of ambiguous social situations between HSAs and LSAs are mediated by higher levels of post-event processing as assessed in the questionnaire. Exploratory analyses highlight the potential role of comorbid depressive symptoms. The current findings help to advance the understanding of the association between two cognitive processes involved in social anxiety and stress the importance of ruminative post-event processing.

  11. Spectrophotometric methods for the determination of urea in real samples using silver nanoparticles by standard addition and 2nd order derivative methods

    NASA Astrophysics Data System (ADS)

    Ali, Nauman; Ismail, Muhammad; Khan, Adnan; Khan, Hamayun; Haider, Sajjad; Kamal, Tahseen

    2018-01-01

    In this work, we have developed simple, sensitive and inexpensive methods for the spectrophotometric determination of urea in urine samples using silver nanoparticles (AgNPs). The standard addition and 2nd order derivative methods were adopted for this purpose. AgNPs were prepared by chemical reduction of AgNO3 with hydrazine using 1,3-di-(1H-imidazol-1-yl)-2-propanol (DIPO) as a stabilizing agent in aqueous medium. The proposed methods were based on the complexation of AgNPs with urea. Using this approach, urea in the urine samples was successfully determined by both spectrophotometric methods. The results showed high percent recoveries with low relative standard deviations (±RSD). The recoveries of urea in the three urine samples by the spectrophotometric standard addition method were 99.2% ± 5.37, 96.3% ± 4.49, and 104.88% ± 4.99, and those of the spectrophotometric 2nd order derivative method were 115.3% ± 5.2, 103.4% ± 2.6, and 105.93% ± 0.76. These results suggest a potential role for AgNPs in the clinical determination of urea in urine, blood, and other biological and non-biological fluids.
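
    The standard-addition arithmetic is compact enough to show directly: absorbance is measured for the sample and for aliquots spiked with known urea amounts, a line is fitted, and the unknown concentration is read from the x-intercept. The numbers below are hypothetical, not taken from the study.

        # Standard-addition calculation: extrapolate the fitted line to zero.
        import numpy as np

        added = np.array([0.0, 2.0, 4.0, 6.0])           # added urea, mg/L
        absorbance = np.array([0.21, 0.35, 0.50, 0.64])  # measured responses

        slope, intercept = np.polyfit(added, absorbance, 1)
        c_sample = intercept / slope  # magnitude of the x-intercept
        print(f"estimated urea: {c_sample:.2f} mg/L")  # ~2.9 mg/L here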

  12. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality

    PubMed Central

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M

    2008-01-01

    Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. PMID:18992163

  13. User-generated quality standards for youth mental health in primary care: a participatory research design using mixed methods

    PubMed Central

    Graham, Tanya; Rose, Diana; Murray, Joanna; Ashworth, Mark; Tylee, André

    2014-01-01

    Objectives To develop user-generated quality standards for young people with mental health problems in primary care using a participatory research model. Methods 50 young people aged 16–25 from community settings and primary care participated in focus groups and interviews about their views and experiences of seeking help for mental health problems in primary care, cofacilitated by young service users and repeated to ensure respondent validation. A second group of young people, also aged 16–25, who had sought help for any mental health problem from primary or secondary care within the last 5 years, were trained as focus-group cofacilitators (n=12); they developed the quality standards from the qualitative data and participated in four nominal groups (n=28). Results 46 quality standards were developed and ranked by young service users. Agreement was defined as 100% of scores within a two-point region. Group consensus existed for 16 quality standards, representing the following aspects of primary care: better advertising and information (three); improved competence through mental health training and skill mix within the practice (two); alternatives to medication (three); improved referral protocol (three); and specific questions and reassurances (five). Alternatives to medication and specific questions and reassurances are aspects of quality which have not been previously reported. Conclusions We have demonstrated the feasibility of using participatory research methods to develop user-generated quality standards. The development of patient-generated quality standards may offer a more formal method of incorporating the views of service users into quality improvement initiatives. This method can be adapted to generate quality standards applicable to other patient groups. PMID:24920648

  14. Isolating DNA from sexual assault cases: a comparison of standard methods with a nuclease-based approach

    PubMed Central

    2012-01-01

    Background Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim’s epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim’s DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim’s fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim’s fraction, and then digest the residual victim’s DNA with a nuclease. Methods The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. Results For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. Conclusions In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods. PMID:23211019

  15. Digital Image Quality And Interpretability: Database And Hardcopy Studies

    NASA Astrophysics Data System (ADS)

    Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.

    1982-02-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096² 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  16. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest

  17. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest
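
    The noise-to-slope ratio itself is a simple figure of merit once the linear-model parameters have been estimated. The sketch below only illustrates the ranking step; the no-gold-standard estimation of these parameters is the substantive contribution of the paper and is not reproduced here. Method names and values are hypothetical.

        # Ranking quantitative methods by noise-to-slope ratio (NSR).
        # Assumed linear model: measured = a * true + b + noise(sigma).
        methods = {
            "method A": (0.95, 0.10, 0.08),  # (slope a, intercept b, sigma)
            "method B": (1.02, 0.05, 0.05),
            "method C": (0.80, 0.20, 0.12),
        }
        nsr = {name: sigma / a for name, (a, b, sigma) in methods.items()}
        for name in sorted(nsr, key=nsr.get):
            print(f"{name}: NSR = {nsr[name]:.3f}")  # smaller = more precise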

  18. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance and choosing better parameters to fit a better model. We propose a novel approach that we term “cross-validation and cross-testing,” which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
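
    For contrast with the proposed scheme, the standard pipeline it improves upon is easy to sketch: cross-validation on the training portion selects parameters, and an untouched test set estimates generalization. The data are synthetic, and the authors' test-data re-use scheme is not reproduced here.

        # Standard baseline: cross-validated tuning plus a held-out test set.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        # Inner cross-validation selects the regularization parameter C ...
        search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5)
        search.fit(X_tr, y_tr)

        # ... and the untouched test set estimates generalization performance.
        print(search.best_params_, search.score(X_te, y_te))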

  19. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    PubMed

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages, based on a standard set of workflow patterns expressed as Petri nets (PNs) and on notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic, but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable, previously published, but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
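
    A minimal firing-rule sketch makes the Petri-net side of this analysis concrete. The code below encodes the elementary "sequence" workflow pattern (one of the standard patterns such evaluations use) as places, transitions, and a marking; it is generic Python, not PROforma or CIGDec syntax.

        # Petri-net sketch of the "sequence" workflow pattern.
        pre  = {"t1": ["p_start"], "t2": ["p_mid"]}  # input places
        post = {"t1": ["p_mid"],   "t2": ["p_end"]}  # output places
        marking = {"p_start": 1, "p_mid": 0, "p_end": 0}

        def enabled(t):
            return all(marking[p] >= 1 for p in pre[t])

        def fire(t):
            assert enabled(t), f"{t} is not enabled"
            for p in pre[t]:
                marking[p] -= 1
            for p in post[t]:
                marking[p] += 1

        fire("t1")  # completing task 1 enables task 2 (sequence)
        fire("t2")
        print(marking)  # {'p_start': 0, 'p_mid': 0, 'p_end': 1}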

  20. Ultrasound functional evaluation of fetuses with myelomeningocele: study of the interpretation of results.

    PubMed

    Maroto, A; Illescas, T; Meléndez, M; Arévalo, S; Rodó, C; Peiró, J L; Belfort, M; Cuxart, A; Carreras, E

    2017-10-01

    To assess the reliability of the interpretation of a new technique for the ultrasound evaluation of the level of neurological lesion in fetuses with myelomeningocele. Observational study including fetuses with myelomeningocele, referred to our center for sonographic assessment of fetal lower-limb movements, performed and recorded by an expert in maternal-fetal medicine and a specialist in rehabilitation. Two observers, with different levels of expertise and blinded to each other's results, interpreted each recorded scan on two different occasions. Agreement between each observer and the gold standard for the assigned segmental levels, as well as inter-observer and intra-observer reproducibility, was tested using the weighted kappa (wκ) index. Twenty-eight scans were recorded and evaluated. The agreement between the observers and the gold standard remained constant for the expert observer (wκ = 0.82) and increased (wκ = 0.66 to wκ = 0.72) for the other. The inter-observer and intra-observer agreement for the expert observer were wκ = 0.72 and wκ = 0.94, respectively. The agreement for the prenatal evaluation of the segmental neurological level was excellent, after a short training period, for observers with different degrees of expertise. The interpretation of this technique is sufficiently reproducible, which supports its value for the prediction of postnatal motor function in fetuses with myelomeningocele.
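
    Weighted kappa on ordinal level assignments is straightforward to compute; the sketch below uses scikit-learn's cohen_kappa_score with linear weights, which penalizes disagreements in proportion to how many levels apart they are. The segment labels are hypothetical.

        # Linearly weighted kappa for ordinal segmental-level assignments.
        from sklearn.metrics import cohen_kappa_score

        gold     = ["L2", "L3", "L3", "L4", "L5", "L3", "L4", "L2"]
        observer = ["L2", "L3", "L4", "L4", "L5", "L3", "L5", "L2"]

        wk = cohen_kappa_score(gold, observer, weights="linear")
        print(f"weighted kappa = {wk:.2f}")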