DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, S
2007-08-15
Over the course of fifty-three years, LLNL had six acute releases of tritiated hydrogen gas (HT) and one acute release of tritiated water vapor (HTO) that were too large relative to the annual releases to be included as part of the annual releases from normal operations detailed in Parts 3 and 4 of the Tritium Dose Reconstruction (TDR). Sandia National Laboratories/California (SNL/CA) had one such release of HT and one of HTO. Doses to the maximally exposed individual (MEI) for these accidents have been modeled using an equation derived from the time-dependent tritium model, UFOTRI, and parameter values based on expert judgment. All of these acute releases are described in this report. Doses that could not have been exceeded from the large HT releases of 1965 and 1970 were calculated to be 43 µSv (4.3 mrem) and 120 µSv (12 mrem) to an adult, respectively. Two published sets of dose predictions for the accidental HT release in 1970 are compared with the dose predictions of this TDR. The highest predicted dose was for an acute release of HTO in 1954. For this release, the dose that could not have been exceeded was estimated to have been 2 mSv (200 mrem), although, because of the high uncertainty about the predictions, the likely dose may have been as low as 360 µSv (36 mrem) or less. The estimated maximum exposures from the accidental releases were such that no adverse health effects would be expected. Appendix A lists all accidents and large routine puff releases that have occurred at LLNL and SNL/CA between 1953 and 2005. Appendix B describes the processes unique to tritium that must be modeled after an acute release, some of the time-dependent tritium models being used today, and the results of tests of these models.
Methodology, status and plans for development and assessment of Cathare code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of code development and assessment is presented, along with the general strategy used for developing and assessing the code. Analytical experiments with separate effect tests, and component tests, are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the 3-D model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability, a suite of computer codes that performs many functions.
Cantwell, Kate; Morgans, Amee; Smith, Karen; Livingston, Michael; Dietze, Paul
2014-02-01
This paper aims to examine whether an adaptation of the International Classification of Disease (ICD) coding system can be applied retrospectively to final paramedic assessment data in an ambulance dataset with a view to developing more fine-grained, clinically relevant case definitions than are available through point-of-call data. Over 1.2 million case records were extracted from the Ambulance Victoria data warehouse. Data fields included dispatch code, cause (CN) and final primary assessment (FPA). Each FPA was converted to an ICD-10-AM code using word matching or best fit. ICD-10-AM codes were then converted into Major Diagnostic Categories (MDC). CN was aligned with the ICD-10-AM codes for external cause of morbidity and mortality. The most accurate results were obtained when ICD-10-AM codes were assigned using information from both FPA and CN. Comparison of cases coded as unconscious at point-of-call with the associated paramedic assessment highlighted the extra clinical detail obtained when paramedic assessment data are used. Ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Coding of ambulance data using ICD-10-AM allows for comparison of not only ambulance service users but also with other population groups. WHAT IS KNOWN ABOUT THE TOPIC? There is no reliable and standard coding and categorising system for paramedic assessment data contained in ambulance service databases. WHAT DOES THIS PAPER ADD? This study demonstrates that ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Representation of ambulance case types using ICD-10-AM-coded information obtained after paramedic assessment is more fine grained and clinically relevant than point-of-call data, which uses caller information before ambulance attendance. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? 
This paper describes a model of coding using an internationally recognised standard coding and categorising system to support analysis of paramedic assessment. Ambulance data coded using ICD-10-AM allows for reliable reporting and comparison within the prehospital setting and across the healthcare industry.
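The "word matching or best fit" step described above can be sketched as a token-overlap lookup against code descriptions. The tiny code table and the scoring rule below are illustrative assumptions, not the actual ICD-10-AM index or the matching logic used in the study.

```python
# Sketch: map a free-text final primary assessment (FPA) to the ICD-10-AM
# code whose description shares the most words with it. The code table is
# a tiny illustrative stand-in, not the real ICD-10-AM index.

ICD10_AM = {
    "I21.9": "acute myocardial infarction unspecified",
    "J45.9": "asthma unspecified",
    "S06.0": "concussion",
    "R55":   "syncope and collapse",
}

def best_fit_code(fpa_text: str) -> str:
    """Return the code whose description overlaps most with the FPA text."""
    fpa_words = set(fpa_text.lower().split())

    def overlap(item):
        code, desc = item
        return len(fpa_words & set(desc.split()))

    return max(ICD10_AM.items(), key=overlap)[0]

print(best_fit_code("Pt assessed with acute myocardial infarction"))  # I21.9
```

A production mapper would also need synonym handling and tie-breaking rules; this only shows the shape of the best-fit idea.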
Evaluation in industry of a draft code of practice for manual handling.
Ashby, Liz; Tappin, David; Bentley, Tim
2004-05-01
This paper reports findings from a study which evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code and in particular the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.
ERIC Educational Resources Information Center
Morris, Suzanne E.
2010-01-01
This paper provides a review of institutional authorship policies as required by the "Australian Code for the Responsible Conduct of Research" (the "Code") (National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) & Universities Australia (UA) 2007), and assesses them for Code compliance.…
Peng, Mingkai; Sundararajan, Vijaya; Williamson, Tyler; Minty, Evan P; Smith, Tony C; Doktorchik, Chelsea T A; Quan, Hude
2018-03-01
Data quality assessment is a challenging facet for research using coded administrative health data. Current assessment approaches are time and resource intensive. We explored whether association rule mining (ARM) can be used to develop rules for assessing data quality. We extracted 2013 and 2014 records from the hospital discharge abstract database (DAD) for patients between the ages of 55 and 65 from five acute care hospitals in Alberta, Canada. The ARM was conducted using the 2013 DAD to extract rules with support ≥0.0019 and confidence ≥0.5 using the bootstrap technique, and tested in the 2014 DAD. The rules were compared against the method of coding frequency and assessed for their ability to detect error introduced by two kinds of data manipulation: random permutation and random deletion. The association rules generally had clear clinical meanings. Comparing 2014 data to 2013 data (both original), there were 3 rules with a confidence difference >0.1, while coding frequency difference of codes in the right hand of rules was less than 0.004. After random permutation of 50% of codes in the 2014 data, average rule confidence dropped from 0.72 to 0.27 while coding frequency remained unchanged. Rule confidence decreased with the increase of coding deletion, as expected. Rule confidence was more sensitive to code deletion compared to coding frequency, with slope of change ranging from 1.7 to 184.9 with a median of 9.1. The ARM is a promising technique to assess data quality. It offers a systematic way to derive coding association rules hidden in data, and potentially provides a sensitive and efficient method of assessing data quality compared to standard methods. Copyright © 2018 Elsevier Inc. All rights reserved.
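The support and confidence measures that drive the rule mining above can be sketched directly; the toy discharge records and diagnosis codes below are invented stand-ins for DAD data, not study data.

```python
# Minimal sketch of the rule-quality measures used in association rule
# mining: support and confidence of a rule "antecedent -> consequent"
# over a set of coded discharge records (each record = a set of codes).

records = [
    {"E11", "I10", "N18"},   # e.g. diabetes, hypertension, CKD
    {"E11", "I10"},
    {"E11", "N18"},
    {"I10"},
]

def support(itemset, records):
    """Fraction of records containing every code in itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent, records):
    """Estimated P(consequent present | antecedent present)."""
    return support(antecedent | consequent, records) / support(antecedent, records)

print(support({"E11", "I10"}, records))       # 0.5
print(confidence({"E11"}, {"I10"}, records))  # 0.666...
```

A rule would be kept only if both measures clear the thresholds the study used (support ≥ 0.0019, confidence ≥ 0.5); a confidence drop on new data then flags a possible quality problem.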
Methodology, status, and plans for development and assessment of the RELAP5 code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, G.W.; Riemke, R.A.
1997-07-01
RELAP/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was and is to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface validation, and documentation. The paper also describes the current and near term development program and provides an assessment of the code's strengths and limitations.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
RELAP5-3D Developmental Assessment: Comparison of Versions 4.3.4i and 4.2.1i
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
2015-10-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.3.4i and 4.2.1i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.
RELAP5-3D Developmental Assessment: Comparison of Versions 4.2.1i and 4.1.3i
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul D.
2014-06-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.2.1i and 4.1.3i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The codes considered are APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; however, the codes have since been further developed to extend their capabilities.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ditmars, J.D.; Walbridge, E.W.; Rote, D.M.
1983-10-01
Repository performance assessment is analysis that identifies events and processes that might affect a repository system for isolation of radioactive waste, examines their effects on barriers to waste migration, and estimates the probabilities of their occurrence and their consequences. In 1983 Battelle Memorial Institute's Office of Nuclear Waste Isolation (ONWI) prepared two plans - one for performance assessment for a waste repository in salt and one for verification and validation of performance assessment technology. At the request of the US Department of Energy's Salt Repository Project Office (SRPO), Argonne National Laboratory reviewed those plans and prepared this report to advise SRPO of specific areas where ONWI's plans for performance assessment might be improved. This report presents a framework for repository performance assessment that clearly identifies the relationships among the disposal problems, the processes underlying the problems, the tools for assessment (computer codes), and the data. In particular, the relationships among important processes and 26 model codes available to ONWI are indicated. A common suggestion for computer code verification and validation is the need for specific and unambiguous documentation of the results of performance assessment activities. A major portion of this report consists of status summaries of 27 model codes indicated as potentially useful by ONWI. The code summaries focus on three main areas: (1) the code's purpose, capabilities, and limitations; (2) status of the elements of documentation and review essential for code verification and validation; and (3) proposed application of the code for performance assessment of salt repository systems. 15 references, 6 figures, 4 tables.
Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M
2004-10-01
The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.
Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael
This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes differ fundamentally from the national model energy codes or that lack state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
Alternate Assessment Manual for the Arizona Student Achievement Program
ERIC Educational Resources Information Center
Arizona Department of Education, 2005
2005-01-01
The Alternate Assessment Code of Ethics informs school personnel involved in alternate assessments of ethical, nondiscriminatory assessment practices and underscores the diligence necessary to provide accurate assessment data for instructional decision-making. The importance of commitment and adherence to the Alternate Assessment Code of Ethics by…
Methodology, status and plans for development and assessment of TUF and CATHENA codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luxat, J.C.; Liu, W.S.; Leung, R.K.
1997-07-01
An overview is presented of the Canadian two-fluid computer codes TUF and CATHENA with specific focus on the constraints imposed during development of these codes and the areas of application for which they are intended. Additionally, a process for systematic assessment of these codes is described which is part of a broader, industry-based initiative for validation of computer codes used in all major disciplines of safety analysis. This is intended to provide both the licensee and the regulator in Canada with an objective basis for assessing the adequacy of codes for use in specific applications. Although focused specifically on CANDU reactors, Canadian experience in developing advanced two-fluid codes to meet wide-ranging application needs while maintaining past investment in plant modelling provides a useful contribution to international efforts in this area.
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.
2014-01-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
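The step from a deterministic code to a probabilistic one (steps 3 through 6 above) amounts to wrapping the deterministic calculation in Monte Carlo sampling over the parameter distributions. The sketch below assumes a toy dose function and placeholder distributions; it is not the RESRAD model or its default parameter set.

```python
# Sketch of a probabilistic module: sample uncertain inputs from assumed
# distributions, run the deterministic model for each sample, and report
# percentiles of the resulting dose distribution.

import random

random.seed(1)  # fixed seed so the sketch is reproducible

def dose(conc, intake_rate, dose_factor):
    """Toy deterministic dose model: concentration x intake x dose factor."""
    return conc * intake_rate * dose_factor

def sample_dose(n=10_000):
    results = []
    for _ in range(n):
        conc = random.lognormvariate(0.0, 0.5)     # uncertain concentration
        intake = random.uniform(0.8, 1.2)          # uncertain intake rate
        factor = random.triangular(0.5, 1.5, 1.0)  # uncertain dose factor
        results.append(dose(conc, intake, factor))
    results.sort()
    return results[len(results) // 2], results[int(0.95 * len(results))]

median, p95 = sample_dose()
print(f"median dose: {median:.2f}, 95th percentile: {p95:.2f}")
```

The spread between the median and the 95th percentile is the quantity of interest: it expresses how much the input-parameter uncertainty widens the dose estimate.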
Burstyn, Igor; Slutsky, Anton; Lee, Derrick G; Singer, Alison B; An, Yuan; Michael, Yvonne L
2014-05-01
Epidemiologists typically collect narrative descriptions of occupational histories because these are less prone than self-reported exposures to recall bias of exposure to a specific hazard. However, the task of coding these narratives can be daunting and prohibitively time-consuming in some settings. The aim of this manuscript is to evaluate the performance of a computer algorithm to translate the narrative description of occupational codes into standard classification of jobs (2010 Standard Occupational Classification) in an epidemiological context. The fundamental question we address is whether exposure assignment resulting from manual (presumed gold standard) coding of the narratives is materially different from that arising from the application of automated coding. We pursued our work through three motivating examples: assessment of physical demands in Women's Health Initiative observational study, evaluation of predictors of exposure to coal tar pitch volatiles in the US Occupational Safety and Health Administration's (OSHA) Integrated Management Information System, and assessment of exposure to agents known to cause occupational asthma in a pregnancy cohort. In these diverse settings, we demonstrate that automated coding of occupations results in assignment of exposures that are in reasonable agreement with results that can be obtained through manual coding. The correlation between physical demand scores based on manual and automated job classification schemes was reasonable (r = 0.5). The agreement between predictive probability of exceeding the OSHA's permissible exposure level for polycyclic aromatic hydrocarbons, using coal tar pitch volatiles as a surrogate, based on manual and automated coding of jobs was modest (Kendall rank correlation = 0.29). 
In the case of binary assignment of exposure to asthmagens, we observed that fair to excellent agreement in classifications can be reached, depending on presence of ambiguity in assigned job classification (κ = 0.5-0.8). Thus, the success of automated coding appears to depend on the setting and type of exposure that is being assessed. Our overall recommendation is that automated translation of short narrative descriptions of jobs for exposure assessment is feasible in some settings and essential for large cohorts, especially if combined with manual coding to both assess reliability of coding and to further refine the coding algorithm.
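The agreement statistic reported for the binary asthmagen assignments (κ) is Cohen's kappa, which can be computed directly from two raters' binary labels; the label vectors below are made up for illustration, not study data.

```python
# Cohen's kappa: chance-corrected agreement between two binary raters,
# here "manual" vs "automated" exposure assignment (1 = exposed).

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

manual    = [1, 1, 0, 0, 1, 0, 1, 0]
automated = [1, 1, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(manual, automated), 2))  # 0.5
```

On the common rule of thumb, 0.5 is "fair to moderate" agreement and 0.8 is "excellent", which is why the study reports its κ range of 0.5-0.8 as fair to excellent.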
Sukanya, Chongthawonsatid
2017-10-01
This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the data base used for reimbursement-purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.
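The four validity measures reported above follow directly from a 2×2 comparison of the summary or coding assessment against the medical-record review. A minimal sketch, with invented counts chosen only to reproduce the low-sensitivity/high-specificity pattern the study reports:

```python
# Validity of a binary judgment (e.g. "principal diagnosis falls in a given
# ICD chapter") against a gold standard (medical record review).
# tp/fp/fn/tn are the four cells of the 2x2 table; counts are invented.

def validity(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # review-positive cases the summary caught
        "specificity": tn / (tn + fp),  # review-negative cases correctly excluded
        "ppv": tp / (tp + fp),          # summary positives that were truly positive
        "npv": tn / (tn + fn),          # summary negatives that were truly negative
    }

m = validity(tp=30, fp=20, fn=70, tn=880)
print({k: round(v, 3) for k, v in m.items()})
# low sensitivity (0.3), high specificity (~0.978) -- the pattern in the study
```

Because most records are review-negative for any single category, specificity and NPV stay high even when the summaries miss most true cases, which is exactly the asymmetry in the reported ranges.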
Compliance Verification Paths for Residential and Commercial Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Makela, Eric J.; Fannin, Jerica D.
2011-10-10
This report looks at different ways to verify energy code compliance and to ensure that the energy efficiency goals of an adopted document are achieved. Conformity assessment is the body of work that ensures compliance, including activities that can ensure residential and commercial buildings satisfy energy codes and standards. This report identifies and discusses conformity-assessment activities and provides guidance for conducting assessments.
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. 
This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code developed at PNL for the US Department of Energy for evaluation of land disposal sites.
New CPT codes: hospital, consultation, emergency and nursing facility services.
Zuber, T J; Henley, D E
1992-03-01
New evaluation and management codes were created by the Current Procedural Terminology (CPT) Editorial Panel to ensure more accurate and consistent reporting of physician services. The new hospital inpatient codes describe three levels of service for both initial and subsequent care. Critical care services are reported according to the total time spent by a physician providing constant attention to a critically ill patient. Consultation codes are divided into four categories: office/outpatient, initial inpatient, follow-up inpatient and confirmatory. Emergency department services for both new and established patients are limited to five codes. In 1992, nursing facility services are described with either comprehensive-assessment codes or subsequent-care codes. Hospital discharge services may be reported in addition to the comprehensive nursing facility assessment. Since the 1992 CPT book will list only the new codes, and since all insurance carriers will not be using these codes in 1992, physicians are encouraged to keep their 1991 code books and contact their local insurance carriers to determine which codes will be used.
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
El-Damanhoury, Hatem M.; Fakhruddin, Kausar Sadia; Awad, Manal A.
2014-01-01
Objective: To assess the feasibility of teaching International Caries Detection and Assessment System (ICDAS) II and its e-learning program as tools for occlusal caries detection to freshmen dental students in comparison to dental graduates with 2 years of experience. Materials and Methods: Eighty-four freshmen and 32 dental graduates examined occlusal surfaces of molars/premolars (n = 72) after a lecture and a hands-on workshop. The same procedure was repeated after 1 month following the training with ICDAS II e-learning program. Validation of ICDAS II codes was done histologically. Intra- and inter-examiner reproducibility of ICDAS II severity scores were assessed before and after e-learning using (Fleiss's kappa). Results: The kappa values showed inter-examiner reproducibility ranged from 0.53 (ICDAS II code cut off ≥ 1) to 0.70 (ICDAS II code cut off ≥ 3) by undergraduates and 0.69 (ICDAS II code cut off ≥ 1) to 0.95 (ICDAS II code cut off ≥ 3) by graduates. The inter-examiner reproducibility ranged from 0.64 (ICDAS II code cut off ≥ 1) to 0.89 (ICDAS II code cut off ≥ 3). No statistically significant difference was found between both groups in intra-examiner agreements for assessing ICDAS II codes. A high statistically significant difference (P ≤ 0.01) in correct identification of codes 1, 2, and 4 from before to after e-learning were observed in both groups. The bias indices for the undergraduate group were higher than those of the graduate group. Conclusions: Early exposure of students to ICDAS II is a valuable method of teaching caries detection and its e-learning program significantly improves their caries diagnostic skills. PMID:25512730
Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity
ERIC Educational Resources Information Center
Wong, Miranda Kit-Yi; So, Wing Chee
2016-01-01
This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…
A Qualitative Analysis of Narrative Preclerkship Assessment Data to Evaluate Teamwork Skills.
Dolan, Brigid M; O'Brien, Celia Laird; Cameron, Kenzie A; Green, Marianne M
2018-04-16
Construct: Students entering the health professions require competency in teamwork. Although many teamwork curricula and assessments exist, studies have not demonstrated robust longitudinal assessment of preclerkship students' teamwork skills and attitudes. Assessment portfolios may serve to fill this gap, but it is unknown how narrative comments within portfolios describe student teamwork behaviors. We performed a qualitative analysis of narrative data in 15 assessment portfolios. Student portfolios were randomly selected from 3 groups stratified by quantitative ratings of teamwork performance gathered from small-group and clinical preceptor assessment forms. Narrative data included peer and faculty feedback from these same forms. Data were coded for teamwork-related behaviors using a constant comparative approach combined with an identification of the valence of the coded statements as either "positive observation" or "suggestion for improvement." Eight codes related to teamwork emerged: attitude and demeanor, information facilitation, leadership, preparation and dependability, professionalism, team orientation, values team member contributions, and nonspecific teamwork comments. The frequency of codes and valence varied across the 3 performance groups, with students in the low-performing group receiving more suggestions for improvement across all teamwork codes. Narrative data from assessment portfolios included specific descriptions of teamwork behavior, with important contributions provided by both faculty and peers. A variety of teamwork domains were represented. Such feedback as collected in an assessment portfolio can be used for longitudinal assessment of preclerkship student teamwork skills and attitudes.
Illum, Niels Ove; Gradel, Kim Oren
2017-01-01
To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring, and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 and 0.00, respectively. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after the repeat. Corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on measures obtained on the two occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities.
First and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions, and finally codes for more complex functions. Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set is functioning in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents and caregivers, and the whole community supporting children with disabilities on a daily and perpetual basis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for safety assessment computations for RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computations of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA); these computations were performed in the geometry and with the neutron constants provided by the American party; (3) 3D computations of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.
Lamb, Mary K; Innes, Kerry; Saad, Patricia; Rust, Julie; Dimitropoulos, Vera; Cumerlato, Megan
The Performance Indicators for Coding Quality (PICQ) is a data quality assessment tool developed by Australia's National Centre for Classification in Health (NCCH). PICQ consists of a number of indicators covering all ICD-10-AM disease chapters, some procedure chapters from the Australian Classification of Health Interventions (ACHI), and some Australian Coding Standards (ACS). The indicators can be used to assess the coding quality of hospital morbidity data by monitoring compliance with coding conventions and the ACS; this enables the identification of particular records that may be incorrectly coded, thus providing a measure of data quality. There are 31 obstetric indicators available for the ICD-10-AM Fourth Edition. Twenty of these 31 indicators were classified as Fatal, nine as Warning, and two as Relative. These indicators were used to examine the coding quality of obstetric records in the 2004-2005 financial year Australian national hospital morbidity dataset. Records with obstetric disease or procedure codes listed anywhere in the code string were extracted and exported from the SPSS source file. Data were then imported into a Microsoft Access database table as per PICQ instructions and run against all Fatal, Warning, and Relative (N=31) obstetric PICQ 2006 Fourth Edition Indicators v.5 for the ICD-10-AM Fourth Edition. There were 689,905 gynaecological and obstetric records in the 2004-2005 financial year, of which 1.14% were found to have triggered Fatal degree errors, 3.78% Warning degree errors, and 8.35% Relative degree errors. The types of errors include completeness, redundancy, specificity, and sequencing problems. PICQ was found to be a useful initial screening tool for the assessment of ICD-10-AM/ACHI coding quality. The overall quality of codes assigned to obstetric records in the 2004-2005 Australian national morbidity dataset is fair.
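The indicator-screening workflow described above (severity-classed rules run over each record's code string, then trigger rates reported per severity) can be sketched as follows. This is a hypothetical illustration of the approach, not the actual PICQ rule set; the rules and records below are invented:

```python
from collections import Counter

# Each indicator flags records whose set of assigned codes breaches a rule.
# These example rules and code values are illustrative only.
indicators = [
    {"severity": "Fatal",   "rule": lambda codes: "O80" in codes and "Z37.0" not in codes},
    {"severity": "Warning", "rule": lambda codes: "O82" in codes and "Z37.0" not in codes},
]

records = [
    {"id": 1, "codes": {"O80", "Z37.0"}},   # clean record
    {"id": 2, "codes": {"O80"}},            # missing outcome code: Fatal
    {"id": 3, "codes": {"O82"}},            # missing outcome code: Warning
]

hits = Counter()
for rec in records:
    for ind in indicators:
        if ind["rule"](rec["codes"]):
            hits[ind["severity"]] += 1

for severity in ("Fatal", "Warning"):
    print(f"{severity}: {hits[severity] / len(records):.1%} of records")
```

The study's reported percentages (1.14% Fatal, 3.78% Warning, 8.35% Relative) are exactly this kind of trigger-rate-per-severity summary, computed over the full 689,905-record extract.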
ERIC Educational Resources Information Center
Shanley, Jenelle R.; Niec, Larissa N.
2011-01-01
This study evaluated the inclusion of uncoded segments in the Dyadic Parent-Child Interaction Coding System, an analogue observation of parent-child interactions. The relationships between warm-up and coded segments were assessed, as well as the segments' associations with parent ratings of parent and child behaviors. Sixty-nine non-referred…
[Assessment of Coding in German Diagnosis Related Groups System in Otorhinolaryngology].
Ellies, Maik; Anders, Berit; Seger, Wolfgang
2018-05-14
Prospective analysis of assessment reports in otorhinolaryngology for the period 01-03-2011 to 31-03-2017 by the Health Advisory Boards in Lower Saxony and Bremen, Germany, in relation to coding in the G-DRG system. The assessment reports were documented using a standardized database system developed on the basis of the electronic data exchange (DTA) by the Health Advisory Board in Lower Saxony. In addition, the documentation of the assessment reports according to the G-DRG system was used for the assessment. Furthermore, one case was re-evaluated on the basis of the available assessment documents and is presented in detail as an example. During the period from 01-03-2011 to 31-03-2017, a total of 27,424 cases of inpatient assessments of DRGs according to the G-DRG system were collected in the field of otorhinolaryngology. In 7,259 cases, the DRG was changed, and in 20,175 cases, the suspicion of a DRG-relevant coding error was not justified in the review; thus, a DRG change rate of 26% of the assessments was identified over the time period investigated. There were various kinds of coding errors. To improve coding quality in otorhinolaryngology, otorhinolaryngology departments should give special consideration to the presented "hit list", and there should be more intensive cooperation between hospitals and the Health Advisory Boards of the federal states. © Georg Thieme Verlag KG Stuttgart · New York.
Current and anticipated uses of thermalhydraulic and neutronic codes at PSI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksan, S.N.; Zimmermann, M.A.; Yadigaroglu, G.
1997-07-01
The thermalhydraulic and/or neutronic codes in use at PSI mainly provide the capability to perform deterministic safety analyses for Swiss NPPs and also serve as analysis tools for experimental facilities for LWR and ALWR simulations. In relation to these applications, physical model development and improvement and assessment of the codes are also essential components of the activities. In this paper, a brief overview is provided of the thermalhydraulic and/or neutronic codes used at PSI for safety analysis of LWRs, and of some experiences and applications with these codes. Based on these experiences, additional assessment needs are indicated, together with some model improvement needs. Future needs that could guide both the development of a new code and the improvement of available codes are summarized.
Benchmarking NNWSI flow and transport codes: COVE 1 results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, N.K.
1985-06-01
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in the certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.
Comparison of codes assessing galactic cosmic radiation exposure of aircraft crew.
Bottollier-Depois, J F; Beck, P; Bennett, B; Bennett, L; Bütikofer, R; Clairand, I; Desorgher, L; Dyer, C; Felsberger, E; Flückiger, E; Hands, A; Kindl, P; Latocha, M; Lewis, B; Leuthold, G; Maczka, T; Mares, V; McCall, M J; O'Brien, K; Rollet, S; Rühm, W; Wissmann, F
2009-10-01
The assessment of exposure to cosmic radiation onboard aircraft is one of the preoccupations of bodies responsible for radiation protection. The cosmic particle flux is significantly higher onboard aircraft than at ground level, and its intensity depends on solar activity. The dose is usually estimated using codes validated against experimental data. In this paper, a comparison of various codes, some of which are used routinely, is presented to assess the dose received by aircraft crew from galactic cosmic radiation. Results are provided for periods close to solar maximum and solar minimum and for selected flights covering major commercial routes in the world. The overall agreement between the codes, particularly those routinely used for aircraft crew dosimetry, was better than ±20% from the median in all but two cases. The agreement among the codes is considered fully satisfactory for radiation protection purposes.
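The "better than ±20% from the median" criterion amounts to pooling each code's dose estimate for a given flight and checking every code's percent deviation from the median. A minimal sketch, using invented code names and dose values rather than the paper's data:

```python
from statistics import median

def deviation_from_median(estimates):
    """Percent deviation of each code's dose estimate from the median of all codes."""
    m = median(estimates.values())
    return {code: 100 * (dose - m) / m for code, dose in estimates.items()}

# Hypothetical route-dose estimates (µSv) from four codes for one flight
doses = {"codeA": 48.0, "codeB": 52.0, "codeC": 50.0, "codeD": 58.0}
devs = deviation_from_median(doses)
within_20 = all(abs(d) <= 20 for d in devs.values())
print(within_20)
```

Using the median rather than the mean as the reference keeps the benchmark robust to a single outlying code, which matters when only a handful of codes are being intercompared.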
ERIC Educational Resources Information Center
Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying
2012-01-01
The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…
Reliability assessments in qualitative health promotion research.
Cook, Kay E
2012-03-01
This article contributes to the debate about the use of reliability assessments in qualitative research in general, and health promotion research in particular. In this article, I examine the use of reliability assessments in qualitative health promotion research in response to health promotion researchers' commonly held misconception that reliability assessments improve the rigor of qualitative research. All qualitative articles published in the journal Health Promotion International from 2003 to 2009 employing reliability assessments were examined. In total, 31.3% (20/64) articles employed some form of reliability assessment. The use of reliability assessments increased over the study period, ranging from <20% in 2003/2004 to 50% and above in 2008/2009, while at the same time the total number of qualitative articles decreased. The articles were then classified into four types of reliability assessments, including the verification of thematic codes, the use of inter-rater reliability statistics, congruence in team coding and congruence in coding across sites. The merits of each type were discussed, with the subsequent discussion focusing on the deductive nature of reliable thematic coding, the limited depth of immediately verifiable data and the usefulness of such studies to health promotion and the advancement of the qualitative paradigm.
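One of the four assessment types catalogued above, inter-rater reliability statistics, is most often operationalized as Cohen's kappa: chance-corrected agreement between two coders assigning thematic codes to the same excerpts. A self-contained sketch (the code labels and ratings below are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal code frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical thematic codes assigned to six excerpts by two coders
a = ["barrier", "barrier", "enabler", "enabler", "barrier", "neutral"]
b = ["barrier", "enabler", "enabler", "enabler", "barrier", "neutral"]
print(round(cohens_kappa(a, b), 2))
```

As the article argues, a high kappa shows only that the coding frame is applied consistently; it says nothing about whether the deductively fixed codes capture the depth of the data.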
Ethical and educational considerations in coding hand surgeries.
Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E
2014-07-01
To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. Additionally, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of the 134 people to whom it was sent (response rate = 53%). We observed marked disparity in the codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models over the last decades. Recently, INL has also been working to establish a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international and domestic reports and research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.
Dhakal, Sanjaya; Burwen, Dale R; Polakowski, Laura L; Zinderman, Craig E; Wise, Robert P
2014-03-01
Assess whether Medicare data are useful for monitoring tissue allograft safety and utilization. We used health care claims (billing) data from 2007 for 35 million fee-for-service Medicare beneficiaries, a predominantly elderly population. Using search terms for transplant-related procedures, we generated lists of ICD-9-CM and CPT(®) codes and assessed the frequency of selected allograft procedures. Step 1 used inpatient data and ICD-9-CM procedure codes. Step 2 added non-institutional provider (e.g., physician) claims, outpatient institutional claims, and CPT codes. We assembled preliminary lists of diagnosis codes for infections after selected allograft procedures. Many ICD-9-CM codes were ambiguous as to whether the procedure involved an allograft. Among 1.3 million persons with a procedure ascertained using the list of ICD-9-CM codes, only 1,886 claims clearly involved an allograft. CPT codes enabled better ascertainment of some allograft procedures (over 17,000 persons had corneal transplants and over 2,700 had allograft skin transplants). For spinal fusion procedures, CPT codes improved specificity for allografts; of nearly 100,000 patients with ICD-9-CM codes for spinal fusions, more than 34,000 had CPT codes indicating allograft use. Monitoring infrequent events (infections) after infrequent exposures (tissue allografts) requires large study populations. A strength of the large Medicare databases is the substantial number of certain allograft procedures. Limitations include lack of clinical detail and donor information. Medicare data can potentially augment passive reporting systems and may be useful for monitoring tissue allograft safety and utilization where codes clearly identify allograft use and coding algorithms can effectively screen for infections.
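The two-step ascertainment strategy described above (screen inpatient claims with ICD-9-CM procedure codes, then use CPT codes to confirm allograft use) can be sketched as a pair of set-membership filters. The code values and claim records below are placeholders for illustration, not a validated allograft code set:

```python
# Illustrative code lists; real studies would use curated, validated sets.
ALLOGRAFT_CPT = {"65710", "15271"}       # e.g., corneal transplant, skin substitute graft
SPINAL_FUSION_ICD9 = {"81.62", "81.63"}  # spinal fusion procedure codes (allograft-ambiguous)

claims = [
    {"id": 1, "icd9": ["81.62"], "cpt": ["65710"]},
    {"id": 2, "icd9": ["81.63"], "cpt": []},
    {"id": 3, "icd9": [], "cpt": ["15271"]},
]

# Step 1: candidate procedures from ICD-9-CM alone (often ambiguous about allograft use)
candidates = [c for c in claims if SPINAL_FUSION_ICD9 & set(c["icd9"])]
# Step 2: CPT codes confirm allograft use among the candidates
confirmed = [c for c in candidates if ALLOGRAFT_CPT & set(c["cpt"])]
print(len(candidates), len(confirmed))
```

The gap between the two filters mirrors the study's finding: ICD-9-CM flagged nearly 100,000 spinal fusions, but only about a third carried CPT codes that clearly indicated allograft use.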
Assessment of communication abilities in multilingual children: Language rights or human rights?
Cruz-Ferreira, Madalena
2018-02-01
Communication involves a sender, a receiver and a shared code operating through shared rules. Breach of communication results from disruption to any of these basic components of a communicative chain, although assessment of communication abilities typically focuses on senders/receivers, on two assumptions: first, that their command of features and rules of the language in question (the code), such as sounds, words or word order, as described in linguists' theorisations, represents the full scope of linguistic competence; and second, that languages are stable, homogeneous entities, unaffected by their users' communicative needs. Bypassing the role of the code in successful communication assigns decisive rights to abstract languages rather than to real-life language users, routinely leading to suspected or diagnosed speech-language disorder in academic and clinical assessment of multilingual children's communicative skills. This commentary reflects on whether code-driven assessment practices comply with the spirit of Article 19 of the Universal Declaration of Human Rights.
NASA Astrophysics Data System (ADS)
Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.
2006-12-01
Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years' experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software, which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. 
NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer project between Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research (INER) for the preliminary assessment of several candidate low-level waste repository sites. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
Exploring the Utility of Sequential Analysis in Studying Informal Formative Assessment Practices
ERIC Educational Resources Information Center
Furtak, Erin Marie; Ruiz-Primo, Maria Araceli; Bakeman, Roger
2017-01-01
Formative assessment is a classroom practice that has received much attention in recent years for its established potential at increasing student learning. A frequent analytic approach for determining the quality of formative assessment practices is to develop a coding scheme and determine frequencies with which the codes are observed; however,…
ERIC Educational Resources Information Center
Nakashian, Mary
2008-01-01
Researchers from the Mailman School of Public Health at Columbia University prepared a case study of CODES (Community Outreach and Development Efforts Save). CODES is a coalition of 35 people and organizations in northern Manhattan committed to promoting safe streets, parks and schools. The case study analyzed the factors that prompted CODES'…
Solution of nonlinear flow equations for complex aerodynamic shapes
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed
1992-01-01
Solution-adaptive CFD codes based on unstructured methods for 3-D complex geometries in subsonic to supersonic regimes were investigated, and the computed solution data were analyzed in conjunction with experimental data obtained from wind tunnel measurements in order to assess and validate the predictive capability of the code. Specifically, the FELISA code was assessed and improved in cooperation with NASA Langley and Imperial College, Swansea, U.K.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. 
The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements. Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments. 
The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
78 FR 67048 - Prothioconazole; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). B. How can I get electronic access to other related information? You may... Assessment and Determination of Safety Section 408(b)(2)(A)(i) of FFDCA allows EPA to establish a tolerance...
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Test code for the assessment and improvement of Reynolds stress models
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA
1987-01-01
An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulation of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.
Validation of Carotid Artery Revascularization Coding in Ontario Health Administrative Databases.
Hussain, Mohamad A; Mamdani, Muhammad; Saposnik, Gustavo; Tu, Jack V; Turkel-Parrella, David; Spears, Julian; Al-Omran, Mohammed
2016-04-02
The positive predictive value (PPV) of carotid endarterectomy (CEA) and carotid artery stenting (CAS) procedure and post-operative complication coding were assessed in Ontario health administrative databases. Between 1 April 2002 and 31 March 2014, a random sample of 428 patients was identified using Canadian Classification of Health Intervention (CCI) procedure codes and Ontario Health Insurance Plan (OHIP) billing codes from administrative data. A blinded chart review was conducted at two high-volume vascular centers to assess the level of agreement between the administrative records and the corresponding patients' hospital charts. PPV was calculated with 95% confidence intervals (CIs) to estimate the validity of CEA and CAS coding, utilizing hospital charts as the gold standard. Sensitivity of CEA and CAS coding was also assessed by linking two independent databases of 540 CEA-treated patients (Ontario Stroke Registry) and 140 CAS-treated patients (single-center CAS database) to administrative records. PPV for CEA ranged from 99% to 100% and sensitivity ranged from 81.5% to 89.6% using CCI and OHIP codes. A CCI code with a PPV of 87% (95% CI, 78.8-92.9) and sensitivity of 92.9% (95% CI, 87.4-96.1) in identifying CAS was also identified. PPV for post-admission complication diagnosis coding was 71.4% (95% CI, 53.7-85.4) for stroke/transient ischemic attack, and 82.4% (95% CI, 56.6-96.2) for myocardial infarction. Our analysis demonstrated that the codes used in administrative databases accurately identify CEA and CAS-treated patients. Researchers can confidently use administrative data to conduct population-based studies of CEA and CAS.
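The PPV and confidence-interval estimates above follow a standard binomial recipe: PPV is the fraction of code-flagged cases confirmed by chart review, with a CI for a proportion. The counts below are hypothetical, not the study's chart-review tallies, and the paper does not state which interval method it used; a Wilson score interval is one common choice:

```python
from math import sqrt

def ppv(tp, fp):
    """Positive predictive value: confirmed cases / all code-flagged cases."""
    return tp / (tp + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical tallies: 87 true positives, 13 false positives among 100 flagged
tp, fp = 87, 13
p = ppv(tp, fp)                    # 0.87
lo, hi = wilson_ci(tp, tp + fp)
print(f"PPV = {p:.2f} (95% CI {lo:.3f}-{hi:.3f})")
```

Note that the Wilson bounds are asymmetric around the point estimate, as are the intervals quoted in the abstract.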
Feasibility of a computer-assisted feedback system between dispatch centre and ambulances.
Lindström, Veronica; Karlsten, Rolf; Falk, Ann-Charlotte; Castrèn, Maaret
2011-06-01
The aim of the study was to evaluate the feasibility of a newly developed computer-assisted feedback system between dispatch centre and ambulances in Stockholm, Sweden. A computer-assisted feedback system based on a Finnish model was designed to fit the Swedish emergency medical system. Feedback codes were identified and divided into three categories: assessment of patients' primary condition when the ambulance arrives at scene, no transport by the ambulance, and level of priority. Two ambulances and one emergency medical communication centre (EMCC) in Stockholm participated in the study. A sample of 530 feedback codes sent through the computer-assisted feedback system was reviewed. The information on the ambulance medical records was compared with the feedback codes used, and 240 assignments were further analyzed. The feedback codes sent from the ambulance to the EMCC were correct in 92% of the assignments. The most commonly used feedback code sent to the emergency medical dispatchers was 'agree with the dispatchers' assessment'. In addition, in 160 assignments there was a mismatch between emergency medical dispatcher and ambulance nurse assessments. Our results have shown a high agreement between medical dispatcher and ambulance nurse assessment. The feasibility of the feedback codes seems to be acceptable based on the small margin of error. The computer-assisted feedback system may, when used on a daily basis, make it possible for the medical dispatchers to receive feedback in a structured way. The EMCC organization can directly evaluate any changes in the assessment protocol by structured feedback sent from the ambulance.
Assessment and Application of the ROSE Code for Reactor Outage Thermal-Hydraulic and Safety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Thomas K.S.; Ko, F.-K.; Dai, L.-C
The currently available tools, such as RELAP5, RETRAN, and others, cannot easily and correctly perform the task of analyzing the system behavior during plant outages. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as midloop operation (MLO) with loss of residual heat removal (RHR), has been developed. Important thermal-hydraulic processes involved during MLO with loss of RHR can be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. The two-region approach with a modified two-fluid model has been adopted to be the theoretical basis of the ROSE code. To verify the analytical model in the first step, posttest calculations against the integral midloop experiments with loss of RHR have been performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility test data is demonstrated. To further mature the ROSE code in simulating a full-sized pressurized water reactor, assessment against the WGOTHIC code and the Maanshan momentary-loss-of-RHR event has been undertaken. The successfully assessed ROSE code is then applied to evaluate the abnormal operation procedure (AOP) with loss of RHR during MLO (AOP 537.4) for the Maanshan plant. The ROSE code also has been successfully transplanted into the Maanshan training simulator to support operator training. How the simulator was upgraded by the ROSE code for MLO will be presented in the future.
Rennie, Michael J; Watsford, Mark L; Spurrs, Robert W; Kelly, Stephen J; Pine, Matthew J
2018-06-01
To examine the frequency and time spent in the phases of Australian Football (AF) match-play and to assess the intra-assessor reliability of coding these phases of match-play. Observational, intra-rater reliability assessment. Video footage of 10 random quarters of AF match-play was coded by a single researcher. Phases of offence, defence, contested play, umpire stoppage, set shot and goal reset were coded using a set of operational definitions. Descriptive statistics were provided for all phases of match-play. Following a 6-month washout period, intra-coder reliability was assessed using typical error of measurement (TEM) and intra-class correlation coefficients (ICC). A quarter of AF match-play involved 128±20 different phases of match-play. The highest proportion of match-play involved contested play (25%), followed by offence (18%), defence (18%) and umpire stoppages (18%). The mean duration of offence, defence, contested play, umpire stoppage, set shot and goal reset phases was 14, 14, 10, 11, 28 and 47s, respectively. No differences were found between the two coding assessments (p>0.05). ICCs for coding the phases of play demonstrated very high reliability (r=0.902-0.992). TEM of the total time spent in each phase of play represented moderate to good reliability (TEM=1.8-9.3%). Coding of offence, defence and contested play tended to display slightly poorer TEMs than umpire stoppages, set shots and goal resets (TEM=8.1 vs 4.5%). Researchers can reliably code the phases of AF match-play, which may permit the analysis of specific elements of competition. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
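The TEM percentages reported above can be computed from the two coding passes with a common definition: TEM is the standard deviation of the paired differences divided by √2, expressed as a percentage of the grand mean. The phase durations below are invented for illustration (they are not the study's data), and the ICCs would normally come from a dedicated statistics package rather than this sketch:

```python
import statistics as stats
from math import sqrt

def typical_error(trial1, trial2):
    """TEM: SD of the pairwise differences between two coding passes / sqrt(2)."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    return stats.stdev(diffs) / sqrt(2)

def typical_error_pct(trial1, trial2):
    """TEM as a percentage of the grand mean of both passes."""
    grand_mean = stats.mean(trial1 + trial2)
    return 100 * typical_error(trial1, trial2) / grand_mean

# Hypothetical seconds of "contested play" per quarter, coded twice 6 months apart
first  = [610, 580, 645, 600, 590]
second = [580, 620, 600, 630, 560]
tem = typical_error(first, second)
pct = typical_error_pct(first, second)
print(f"TEM = {tem:.1f} s ({pct:.1f}%)")
```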
Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar
2010-04-01
Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases the initial HRG changed (10.4%). Financially, this translated to £111 revenue loss per patient episode, projected to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
ERIC Educational Resources Information Center
Salisbury, Amy L.; Fallone, Melissa Duncan; Lester, Barry
2005-01-01
This review provides an overview and definition of the concept of neurobehavior in human development. Two neurobehavioral assessments used by the authors in current fetal and infant research are discussed: the NICU Network Neurobehavioral Assessment Scale and the Fetal Neurobehavior Coding System. This review will present how the two assessments…
Using DEWIS and R for Multi-Staged Statistics e-Assessments
ERIC Educational Resources Information Center
Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.
2016-01-01
We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…
2009-09-01
nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models
Probabilistic Seismic Hazard Assessment for Iraq
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onur, Tuna; Gok, Rengin; Abdulnaby, Wathiq
Probabilistic Seismic Hazard Assessments (PSHA) form the basis for most contemporary seismic provisions in building codes around the world. The current building code of Iraq was published in 1997. An update to this edition is in the process of being released. However, there are no national PSHA studies in Iraq for the new building code to refer to for seismic loading in terms of spectral accelerations. As an interim solution, the new draft building code considered referring to PSHA results produced in the late 1990s as part of the Global Seismic Hazard Assessment Program (GSHAP; Giardini et al., 1999). However, these results are: a) more than 15 years outdated, b) PGA-based only, necessitating rough conversion factors to calculate spectral accelerations at 0.3s and 1.0s for seismic design, and c) at a probability level of 10% chance of exceedance in 50 years, not the 2% that the building code requires. Hence there is a pressing need for a new, updated PSHA for Iraq.
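The two probability levels mentioned above correspond to very different mean return periods under the usual Poissonian assumption, T = −t / ln(1 − p), which is one reason the GSHAP results cannot simply be reused:

```python
from math import log

def return_period(p_exceed, years):
    """Mean return period implied by an exceedance probability over a time window,
    assuming Poissonian (memoryless) earthquake occurrence."""
    return -years / log(1 - p_exceed)

rp_10pct = return_period(0.10, 50)   # ~475 years
rp_2pct = return_period(0.02, 50)    # ~2475 years
print(f"10%/50yr -> {rp_10pct:.0f}-year motion; 2%/50yr -> {rp_2pct:.0f}-year motion")
```

The 10%-in-50-years level is the familiar 475-year ground motion, while the 2% level the draft code requires corresponds to roughly a 2,475-year return period.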
Fernández-Lansac, Violeta; Crespo, María
2017-07-26
This study introduces a new coding system, the Coding and Assessment System for Narratives of Trauma (CASNOT), to analyse several language domains in narratives of autobiographical memories, especially in trauma narratives. The development of the coding system is described. It was applied to assess positive and traumatic/negative narratives in 50 battered women (trauma-exposed group) and 50 nontrauma-exposed women (control group). Three blind raters coded each narrative. Inter-rater reliability analyses were conducted for the CASNOT language categories (multirater Kfree coefficients) and dimensions (intraclass correlation coefficients). High levels of inter-rater agreement were found for most of the language domains. Categories that did not reach the expected reliability were mainly those related to cognitive processes, which reflects difficulties in operationalizing constructs such as lack of control or helplessness, control or planning, and rationalization or memory elaboration. Applications and limitations of the CASNOT are discussed to enhance narrative measures for autobiographical memories.
Bracken, M B; Belanger, K; Hellenbrand, K; Addesso, K; Patel, S; Triche, E; Leaderer, B P
1998-09-01
The home wiring code is the most widely used metric for studies of residential electromagnetic field (EMF) exposure and health effects. Despite the fact that wiring code often shows stronger correlations with disease outcome than more direct EMF home assessments, little is known about potential confounders of the wiring code association. In a study carried out in southern Connecticut in 1988-1991, the authors used strict and widely used criteria to assess the wiring codes of 3,259 homes in which respondents lived. They also collected other home characteristics from the tax assessor's office, estimated traffic density around the home from state data, and interviewed each subject (2,967 mothers of reproductive age) for personal characteristics. Women who lived in very high current configuration wiring coded homes were more likely to be in manual jobs and their homes were older (built before 1949, odds ratio (OR) = 73.24, 95% confidence interval (CI) 29.53-181.65) and had lower assessed value and higher traffic densities (highest density quartile, OR = 3.99, 95% CI 1.17-13.62). Because some of these variables have themselves been associated with health outcomes, the possibility of confounding of the wiring code associations must be rigorously evaluated in future EMF research.
Development of the Brief Romantic Relationship Interaction Coding Scheme (BRRICS)
Humbad, Mikhila N.; Donnellan, M. Brent; Klump, Kelly L.; Burt, S. Alexandra
2012-01-01
Although observational studies of romantic relationships are common, many existing coding schemes require considerable amounts of time and resources to implement. The current study presents a new coding scheme, the Brief Romantic Relationship Interaction Coding Scheme (BRRICS), designed to assess various aspects of romantic relationships both quickly and efficiently. The BRRICS consists of four individual coding dimensions assessing positive and negative affect in each member of the dyad, as well as four codes assessing specific components of the dyadic interaction (i.e., positive reciprocity, demand-withdraw pattern, negative reciprocity, and overall satisfaction). Concurrent associations with measures of marital adjustment and conflict were evaluated in a sample of 118 married couples participating in the Michigan State University Twin Registry. Couples were asked to discuss common conflicts in their marriage while being videotaped. Undergraduate coders used the BRRICS to rate these interactions. The BRRICS scales were correlated in expected directions with self-reports of marital adjustment, as well as children’s perception of the severity and frequency of marital conflict. Based on these results, the BRRICS may be an efficient tool for researchers with large samples of observational data who are interested in coding global aspects of the relationship but do not have the resources to use labor-intensive schemes. PMID:21875192
1985-06-02
was declared a few days later under the auspices of the guarantors of the Rio Protocol of 1942 (Argentina, Brazil, Chile and the USA). Further...Charge d’affaires: Marin Kostov. Canada: Edif. Belmonte 6, Avda Corea 126 y Amazonas, Quito; tel. 458-102; Ambassador: (Vacant) Chile: Avda...Availability Status In 1861 adopted Civil Code of Chile - based on Napoleonic Code, Roman Code, Louisiana Code, the Austrian and Prussian Codes and Seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development, a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.
Does the Holland Code Predict Job Satisfaction and Productivity in Clothing Factory Workers?
ERIC Educational Resources Information Center
Heesacker, Martin; And Others
1988-01-01
Administered Self-Directed Search to sewing machine operators to determine Holland code, and assessed work productivity, job satisfaction, absenteeism, and insurance claims. Most workers were of the Social code. Social subjects were the most satisfied, Conventional and Realistic subjects next, and subjects of other codes less so. Productivity of…
The Social Interactive Coding System (SICS): An On-Line, Clinically Relevant Descriptive Tool.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
The Social Interactive Coding System (SICS) assesses the continuous verbal interactions of preschool children as a function of play areas, addressees, script codes, and play levels. This paper describes the 26 subjects and the setting involved in SICS development, coding definitions and procedures, training procedures, reliability, sample…
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
Pacific Northwest (PNW) Hydrologic Landscape (HL) polygons and HL code
A five-letter hydrologic landscape code representing five indices of hydrologic form that are related to hydrologic function: climate, seasonality, aquifer permeability, terrain, and soil permeability. Each hydrologic assessment unit is classified by one of the 81 different five-letter codes representing these indices. Polygon features in this dataset were created by aggregating (dissolving boundaries between) adjacent, similarly-coded hydrologic assessment units. Climate Classes: V-Very wet, W-Wet, M-Moist, D-Dry, S-Semiarid, A-Arid. Seasonality Sub-Classes: w-Fall or winter, s-Spring. Aquifer Permeability Classes: H-High, L-Low. Terrain Classes: M-Mountain, T-Transitional, F-Flat. Soil Permeability Classes: H-High, L-Low.
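A small helper makes the five-letter code structure concrete. The letter-position order (climate, seasonality, aquifer permeability, terrain, soil permeability) is assumed from the order the indices are listed above, and the function name is illustrative, not part of the dataset:

```python
# Class lookup tables transcribed from the dataset description above.
CLIMATE = {"V": "Very wet", "W": "Wet", "M": "Moist",
           "D": "Dry", "S": "Semiarid", "A": "Arid"}
SEASONALITY = {"w": "Fall or winter", "s": "Spring"}
AQUIFER = {"H": "High", "L": "Low"}
TERRAIN = {"M": "Mountain", "T": "Transitional", "F": "Flat"}
SOIL = {"H": "High", "L": "Low"}

def expand_hl_code(code):
    """Expand a five-letter HL code into its five index descriptions,
    assuming letters appear in the order the indices are listed."""
    if len(code) != 5:
        raise ValueError("HL code must have exactly five letters")
    c, s, a, t, p = code
    return {
        "climate": CLIMATE[c],
        "seasonality": SEASONALITY[s],
        "aquifer_permeability": AQUIFER[a],
        "terrain": TERRAIN[t],
        "soil_permeability": SOIL[p],
    }

expanded = expand_hl_code("VwHMH")
print(expanded)
```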
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-01
Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831
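Incidence estimates of the kind discussed above are crude rates: new diagnoses (identified by the chosen code list) divided by person-time at risk. A minimal sketch with hypothetical counts (not CPRD figures) shows the arithmetic and why the choice of numerator codes shifts the estimate:

```python
def incidence_rate(new_cases, person_years, per=1000):
    """Crude incidence rate per `per` person-years of follow-up."""
    return per * new_cases / person_years

# Hypothetical practice-level tallies for one year
cases_diagnosis_codes_only = 412      # patients with a diagnosis code
cases_including_suggestive = 503      # adding codes that merely suggest diagnosis
person_years = 98_500

rate_strict = incidence_rate(cases_diagnosis_codes_only, person_years)
rate_broad = incidence_rate(cases_including_suggestive, person_years)
print(f"strict: {rate_strict:.2f}, broad: {rate_broad:.2f} per 1000 person-years")
```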
2012-01-01
Background Procedures documented by general practitioners in primary care have not been studied in relation to procedure coding systems. We aimed to describe procedures documented by Swedish general practitioners in electronic patient records and to compare them to the Swedish Classification of Health Interventions (KVÅ) and SNOMED CT. Methods Procedures in 200 record entries were identified, coded, assessed in relation to two procedure coding systems and analysed. Results 417 procedures found in the 200 electronic patient record entries were coded with 36 different Classification of Health Interventions categories and 148 different SNOMED CT concepts. 22.8% of the procedures could not be coded with any Classification of Health Interventions category and 4.3% could not be coded with any SNOMED CT concept. 206 procedure-concept/category pairs were assessed as a complete match in SNOMED CT compared to 10 in the Classification of Health Interventions. Conclusions Procedures documented by general practitioners were present in nearly all electronic patient record entries. Almost all procedures could be coded using SNOMED CT. Classification of Health Interventions covered the procedures to a lesser extent and with a much lower degree of concordance. SNOMED CT is a more flexible terminology system that can be used for different purposes for procedure coding in primary care. PMID:22230095
The grout/glass performance assessment code system (GPACS) with verification and benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.
1994-12-01
GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well, and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, which assesses whether a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
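The solution-verification step mentioned in the abstract above (Richardson extrapolation over a sequence of refined grids) can be sketched for a single scalar quantity. This is a generic textbook illustration, not GBS code; the function names are invented:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order of convergence p from solutions computed
    on three grids with a constant refinement ratio r (coarse -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Richardson-extrapolated estimate of the grid-converged solution,
    given the observed order p; the difference from f_fine is the
    discretization-error estimate used in solution verification."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)
```

For a second-order scheme, a manufactured sequence such as f(h) = 1 + h² on h = 0.4, 0.2, 0.1 recovers p ≈ 2 and an extrapolated value of 1.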
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Lin, Jeff; West, Jeff; Tucker, Kevin
2006-01-01
This document is a viewgraph presentation of a paper that documents a continuing effort at Marshall Space Flight Center (MSFC) to use, assess, and continually improve CFD codes to the point of material utility in the design of rocket engine combustion devices. This paper describes how the code is presently being used to simulate combustion in a single element combustion chamber with shear coaxial injectors using gaseous oxygen and gaseous hydrogen propellants. The ultimate purpose of the efforts documented is to assess and further improve the Loci-CHEM code and the implementation of it. Single element shear coaxial injectors were tested as part of the Staged Combustion Injector Technology (SCIT) program, where detailed chamber wall heat fluxes were measured. Data was taken over a range of chamber pressures for propellants injected at both ambient and elevated temperatures. Several test cases are simulated as part of the effort to demonstrate use of the Loci-CHEM CFD code and to enable us to make improvements in the code as needed. The simulations presented also include a grid independence study on hybrid grids. Several two-equation eddy viscosity low Reynolds number turbulence models are also evaluated as part of the study. All calculations are presented with a comparison to the experimental data. Weaknesses of the code relative to test data are discussed and continuing efforts to improve the code are presented.
Wall interference assessment and corrections
NASA Technical Reports Server (NTRS)
Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.
1989-01-01
Wind tunnel wall interference assessment and correction (WIAC) concepts, applications, and typical results are discussed in terms of several nonlinear transonic codes and one panel method code developed for and being implemented at NASA-Langley. Contrasts between 2-D and 3-D transonic testing factors which affect WIAC procedures are illustrated using airfoil data from the 0.3 m Transonic Cryogenic Tunnel and Pathfinder 1 data from the National Transonic Facility. Initial results from the 3-D WIAC codes are encouraging; research on and implementation of WIAC concepts continue.
An assessment of multibody simulation tools for articulated spacecraft
NASA Technical Reports Server (NTRS)
Man, Guy K.; Sirlin, Samuel W.
1989-01-01
A survey of multibody simulation codes was conducted in the spring of 1988, to obtain an assessment of the state of the art in multibody simulation codes from the users of the codes. This survey covers the most often used articulated multibody simulation codes in the spacecraft and robotics community. There was no attempt to perform a complete survey of all available multibody codes in all disciplines. Furthermore, this is not an exhaustive evaluation of even robotics and spacecraft multibody simulation codes, as the survey was designed to capture feedback on issues most important to the users of simulation codes. We must keep in mind that the information received was limited and the technical background of the respondents varied greatly. Therefore, only the most often cited observations from the questionnaire are reported here. In this survey, it was found that no one code had both many users (reports) and no limitations. The first section is a report on multibody code applications. Following applications is a discussion of execution time, which is the most troublesome issue for flexible multibody codes. The representation of component flexible bodies, which affects both simulation setup time as well as execution time, is presented next. Following component data preparation, two sections address the accessibility or usability of a code, evaluated by considering its user interface design and examining the overall simulation integrated environment. A summary of user efforts at code verification is reported, before a tabular summary of the questionnaire responses. Finally, some conclusions are drawn.
CBP Toolbox Version 3.0 “Beta Testing” Performance Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, III, F. G.
2016-07-29
One function of the Cementitious Barriers Partnership (CBP) is to assess available models of cement degradation and to assemble suitable models into a “Toolbox” that would be made available to members of the partnership, as well as the DOE Complex. To this end, SRNL and Vanderbilt University collaborated to develop an interface, using the GoldSim software, to the STADIUM® code developed by SIMCO Technologies, Inc. and LeachXS/ORCHESTRA developed by the Energy research Centre of the Netherlands (ECN). Release of Version 3.0 of the CBP Toolbox is planned in the near future. As part of this release, an increased level of quality assurance for the partner codes and the GoldSim interface has been developed. This report documents results from evaluation testing of the ability of CBP Toolbox 3.0 to perform simulations of concrete degradation applicable to performance assessment of waste disposal facilities. Simulations of the behavior of Savannah River Saltstone Vault 2 and Vault 1/4 concrete subject to sulfate attack and carbonation over a 500- to 1000-year time period were run using a new and upgraded version of the STADIUM® code and the version of LeachXS/ORCHESTRA released in Version 2.0 of the CBP Toolbox. Running both codes allowed comparison of results from two models that take very different approaches to simulating cement degradation. In addition, simulations of chloride attack on the two concretes were made using the STADIUM® code. The evaluation sought to demonstrate that: 1) the codes are capable of running extended realistic simulations in a reasonable amount of time; 2) the codes produce “reasonable” results (the code developers have provided validation test results as part of their code QA documentation); and 3) the two codes produce results that are consistent with one another. Results of the evaluation testing showed that the three criteria listed above were met by the CBP partner codes. Therefore, it is concluded that the codes can be used to support performance assessment. This conclusion takes into account the QA documentation produced for the partner codes and for the CBP Toolbox.
77 FR 42654 - Trifloxystrobin; Pesticide Tolerance
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This... filing. III. Aggregate Risk Assessment and Determination of Safety Section 408(b)(2)(A)(i) of FFDCA... dose at which adverse effects of concern are identified (the LOAEL). Uncertainty/safety factors are...
Residential Building Energy Code Field Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Bartlett, M. Halverson, V. Mendon, J. Hathaway, Y. Xie
This document presents a methodology for assessing baseline energy efficiency in new single-family residential buildings and quantifying related savings potential. The approach was developed by Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE) Building Energy Codes Program with the objective of assisting states as they assess energy efficiency in residential buildings and implementation of their building energy codes, as well as to target areas for improvement through energy codes and broader energy-efficiency programs. It is also intended to facilitate a consistent and replicable approach to research studies of this type and establish a transparent data set to represent baseline construction practices across U.S. states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eder, D C; Anderson, R W; Bailey, D S
2009-10-05
The generation of neutron/gamma radiation, electromagnetic pulses (EMP), debris and shrapnel at mega-Joule class laser facilities (NIF and LMJ) impacts experiments conducted at these facilities. The complex 3D numerical codes used to assess these impacts range from an established code that required minor modifications (MCNP - calculates neutron and gamma radiation levels in complex geometries), through a code that required significant modifications to treat new phenomena (EMSolve - calculates EMP from electrons escaping from laser targets), to a new code, ALE-AMR, that is being developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.
Hansen, J H; Nandkumar, S
1995-01-01
The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, namely English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.
A test of the validity of the motivational interviewing treatment integrity code.
Forsberg, Lars; Berman, Anne H; Kallmén, Håkan; Hermansson, Ulric; Helgason, Asgeir R
2008-01-01
To evaluate the Swedish version of the Motivational Interviewing Treatment Code (MITI), MITI coding was applied to tape-recorded counseling sessions. Construct validity was assessed using factor analysis on 120 MITI-coded sessions. Discriminant validity was assessed by comparing MITI coding of motivational interviewing (MI) sessions with information- and advice-giving sessions as well as by comparing MI-trained practitioners with untrained practitioners. A principal-axis factoring analysis yielded some evidence for MITI construct validity. MITI differentiated between practitioners with different levels of MI training as well as between MI practitioners and advice-giving counselors, thus supporting discriminant validity. MITI may be used as a training tool together with supervision to confirm and enhance MI practice in clinical settings. MITI can also serve as a tool for evaluating MI integrity in clinical research.
An Assessment of Current Fan Noise Prediction Capability
NASA Technical Reports Server (NTRS)
Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.
2008-01-01
In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans except at extreme aft emission angles. The RSI code can predict fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.
OLTARIS: On-Line Tool for the Assessment of Radiation in Space
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Blattnig, Steve R.; Clowdsley, Martha S.; Qualls, Garry D.; Sandridge, Christopher A.; Simonsen, Lisa C.; Norbury, John W.; Slaba, Tony C.; Walker, Steven A.; Badavi, Francis F.;
2010-01-01
The On-Line Tool for the Assessment of Radiation In Space (OLTARIS) is a World Wide Web based tool that assesses the effects of space radiation on humans and electronics in items such as spacecraft, habitats, rovers, and spacesuits. This document explains the basis behind the interface and framework used to input the data, perform the assessment, and output the results to the user, as well as the physics, engineering, and computer science used to develop OLTARIS. The transport and physics are based on the HZETRN and NUCFRG research codes. The OLTARIS website is the successor to the SIREST website from the early 2000s. Modifications have been made to the code to enable easy maintenance, additions, and configuration management, along with a more modern web interface. Overall, the code has been verified, tested, and modified to enable faster and more accurate assessments.
A Guide for Recertification of Ground Based Pressure Vessels and Liquid Holding Tanks
1987-12-15
Boiler and Pressure Vessel Code, Section...Requirements. 202 Calculate Vessel MAWP Using ASME Boiler and Pressure Vessel Code Section VIII, Division 1. 203 Assess Vessel MAWP Using ASME Boiler and Pressure Vessel Code Section...Engineers (ASME) Boiler and Pressure Vessel Code (B&PV) Section VIII, Division 1, or other applicable standard. This activity involves the
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability
NASA Technical Reports Server (NTRS)
Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian
2011-01-01
The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission relevant problems: (1) shock turbulent boundary layer interaction on a compression corner, (2) shock turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high speed return to Earth. A validation-based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well established correlations is also included in this effort. A thorough review of the literature in search of validation experiments is performed, which identified a scarcity of ground based validation experiments at hypersonic conditions. In particular, a shortage of usable experimental data at flight-like enthalpies and Reynolds numbers is found. The uncertainty was quantified using metrics that measured discrepancy between model predictions and experimental data. The discrepancy data are statistically analyzed and investigated for physics based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission relevant problem is found in the four companion papers.
DiClemente, Carlo C; Crouch, Taylor Berens; Norwood, Amber E Q; Delahanty, Janine; Welsh, Christopher
2015-03-01
Screening, brief intervention, and referral to treatment (SBIRT) has become an empirically supported and widely implemented approach in primary and specialty care for addressing substance misuse. Accordingly, training of providers in SBIRT has increased exponentially in recent years. However, the quality and fidelity of training programs and subsequent interventions are largely unknown because of the lack of SBIRT-specific evaluation tools. The purpose of this study was to create a coding scale to assess quality and fidelity of SBIRT interactions addressing alcohol, tobacco, illicit drugs, and prescription medication misuse. The scale was developed to evaluate performance in an SBIRT residency training program. Scale development was based on training protocol and competencies with consultation from Motivational Interviewing coding experts. Trained medical residents practiced SBIRT with standardized patients during 10- to 15-min videotaped interactions. This study included 25 tapes from the Family Medicine program coded by 3 unique coder pairs with varying levels of coding experience. Interrater reliability was assessed for overall scale components and individual items via intraclass correlation coefficients. Coder pair-specific reliability was also assessed. Interrater reliability was excellent overall for the scale components (>.85) and nearly all items. Reliability was higher for more experienced coders, though still adequate for the trained coder pair. Descriptive data demonstrated a broad range of adherence and skills. Subscale correlations supported concurrent and discriminant validity. Data provide evidence that the MD3 SBIRT Coding Scale is a psychometrically reliable coding system for evaluating SBIRT interactions and can be used to evaluate implementation skills for fidelity, training, assessment, and research. Recommendations for refinement and further testing of the measure are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
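The interrater reliability analysis described above rests on intraclass correlation coefficients. As a generic illustration (not the MD3 SBIRT study's actual analysis code), a one-way random-effects ICC(1) can be computed from a subjects-by-raters table:

```python
def icc1(ratings):
    """One-way random-effects intraclass correlation ICC(1).

    `ratings` is a table with one row per subject (e.g. a coded tape) and one
    column per rater; the coefficient compares between-subject variance (MSB)
    to within-subject variance (MSW). A generic sketch, not study code.
    """
    n = len(ratings)           # number of subjects
    k = len(ratings[0])        # raters per subject
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between raters yields 1.0; values above roughly .85, as reported above, indicate excellent reliability under common rules of thumb.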
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; ...
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
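The distinction the abstract above draws between signal-processing norms and physics-based metrics can be illustrated with a toy sketch: a uniform quantizer standing in for a real lossy compressor, and a metric tracking drift in an aggregate physical quantity. Both functions are illustrative assumptions, not the paper's actual codes:

```python
import numpy as np

def quantize(field, bits):
    """Toy lossy compressor: uniform quantization of a (non-constant) field
    onto 2**bits levels, standing in for a real compressor."""
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    q = np.round((field - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

def physics_metric_drift(field, compressed):
    """Relative change in a physically meaningful aggregate (here a stand-in
    'energy' = sum of squares), rather than a pointwise signal error norm."""
    e0 = np.sum(field ** 2)
    e1 = np.sum(compressed ** 2)
    return abs(e1 - e0) / e0
```

In a real study, the aggregate would be a conserved or diagnostic quantity chosen per code (e.g. total energy or peak shock pressure), evaluated after re-injecting the compressed field each time-step.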
Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, J.S.
1981-01-01
In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures the reliability as codes continually change due to constant modifications and machine transfers. This paper will present the results of a comprehensive verification of three code packages - LEOPARD, LASER, and EPRI-CELL.
Matney, Susan; Bakken, Suzanne; Huff, Stanley M
2003-01-01
In recent years, the Logical Observation Identifiers, Names, and Codes (LOINC) Database has been expanded to include assessment items of relevance to nursing and in 2002 met the criteria for "recognition" by the American Nurses Association. Assessment measures in LOINC include those related to vital signs, obstetric measurements, clinical assessment scales, assessments from standardized nursing terminologies, and research instruments. In order for LOINC to be of greater use in implementing information systems that support nursing practice, additional content is needed. Moreover, those implementing systems for nursing practice must be aware of the manner in which LOINC codes for assessments can be appropriately linked with other aspects of the nursing process such as diagnoses and interventions. Such linkages are necessary to document nursing contributions to healthcare outcomes within the context of a multidisciplinary care environment and to facilitate building of nursing knowledge from clinical practice. The purposes of this paper are to provide an overview of the LOINC database, to describe examples of assessments of relevance to nursing contained in LOINC, and to illustrate linkages of LOINC assessments with other nursing concepts.
The U.S. Environmental Protection Agency (EPA) conducted a Health Impact Assessment (HIA) of proposed code changes regarding residential onsite sewage disposal systems (OSDS) in Suffolk County, New York. Of the approximately 569,000 housing units in Suffolk County, 365,000 are no...
RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul D.
2014-06-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
2015-10-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
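A quick numeric check could complement the visual figure comparison described in the two RELAP5-3D assessment records above. The helper below is a hypothetical sketch, not part of RELAP5-3D or its assessment suite:

```python
def max_relative_difference(linux_vals, windows_vals, eps=1e-30):
    """Largest pointwise relative difference between two platform runs of
    the same assessment case; eps guards against division by zero when both
    values vanish. Hypothetical helper for comparing exported time series."""
    return max(abs(a - b) / max(abs(a), abs(b), eps)
               for a, b in zip(linux_vals, windows_vals))
```

A value near machine precision would indicate the Linux and Windows builds agree apart from round-off; larger values flag cases worth inspecting in the figures.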
Diabetes Mellitus Coding Training for Family Practice Residents.
Urse, Geraldine N
2015-07-01
Although physicians regularly use numeric coding systems such as the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to describe patient encounters, coding errors are common. One of the most complicated diagnoses to code is diabetes mellitus. The ICD-9-CM currently has 39 separate codes for diabetes mellitus; this number will be expanded to more than 50 with the introduction of ICD-10-CM in October 2015. To assess the effect of a 1-hour focused presentation on ICD-9-CM codes on diabetes mellitus coding. A 1-hour focused lecture on the correct use of diabetes mellitus codes for patient visits was presented to family practice residents at Doctors Hospital Family Practice in Columbus, Ohio. To assess resident knowledge of the topic, a pretest and posttest were given to residents before and after the lecture, respectively. Medical records of all patients with diabetes mellitus who were cared for at the hospital 6 weeks before and 6 weeks after the lecture were reviewed and compared for the use of diabetes mellitus ICD-9 codes. Eighteen residents attended the lecture and completed the pretest and posttest. The mean (SD) percentage of correct answers was 72.8% (17.1%) for the pretest and 84.4% (14.6%) for the posttest, for an improvement of 11.6 percentage points (P≤.035). The percentage of total available codes used did not substantially change from before to after the lecture, but the use of the generic ICD-9-CM code for diabetes mellitus type II controlled (250.00) declined (58 of 176 [33%] to 102 of 393 [26%]) and the use of other codes increased, indicating a greater variety in codes used after the focused lecture. After a focused lecture on diabetes mellitus coding, resident coding knowledge improved. Review of medical record data did not reveal an overall change in the number of diabetic codes used after the lecture but did reveal a greater variety in the codes used.
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-25
To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding 'poor' quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled 'poor' quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. Published by the BMJ Publishing Group Limited.
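The sensitivity analysis described above, i.e. recomputing incidence after dropping 'poor' quality practices, can be sketched in a few lines. The data layout and helper names below are assumptions for illustration, not the CPRD study's actual code:

```python
def incidence_rate(new_cases, person_years, per=100_000):
    """Crude incidence rate: new cases per `per` person-years of follow-up."""
    return new_cases / person_years * per

def rate_excluding_poor_practices(practices, max_inaccurate_fraction=0.10):
    """Pooled incidence rate after dropping practices whose fraction of
    inaccurately coded incident patients exceeds the threshold (mirroring
    the 10% 'poor quality' rule above; the dict layout is assumed)."""
    kept = [p for p in practices
            if p["inaccurate_fraction"] <= max_inaccurate_fraction]
    cases = sum(p["cases"] for p in kept)
    pyears = sum(p["person_years"] for p in kept)
    return incidence_rate(cases, pyears)
```

Comparing the pooled rate before and after exclusion shows how strongly coding quality can shift the estimated trend.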
Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M
2014-08-01
Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
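The κ statistic reported above measures chance-corrected agreement between two coding sources. For two binary indicators (e.g. presence of a concurrent splenic procedure code in the registry vs in claims), Cohen's kappa is a short computation; this is a generic sketch, not the study's analysis code:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two parallel binary coding sources.

    a, b: equal-length sequences of 0/1 codes for the same patients.
    Returns (observed agreement - chance agreement) / (1 - chance agreement).
    """
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa1 = sum(a) / n                                # source A positive rate
    pb1 = sum(b) / n                                # source B positive rate
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # chance agreement
    return (po - pe) / (1 - pe)
```

By the usual Landis-Koch benchmarks, the study's κ of 0.72 (0.61 to 0.80 band) is read as substantial agreement.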
Code of practice for food handler activities.
Smith, T A; Kanas, R P; McCoubrey, I A; Belton, M E
2005-08-01
The food industry regulates various aspects of food handler activities, according to legislation and customer expectations. The purpose of this paper is to provide a code of practice which delineates a set of working standards for food handler hygiene, handwashing, use of protective equipment, wearing of jewellery and body piercing. The code was developed by a working group of occupational physicians with expertise in both food manufacturing and retail, using a risk assessment approach. Views were also obtained from other occupational physicians working within the food industry and the relevant regulatory bodies. The final version of the code (available in full as Supplementary data in Occupational Medicine Online) therefore represents a broad consensus of opinion. The code of practice represents a set of minimum standards for food handler suitability and activities, based on a practical assessment of risk, for application in food businesses. It aims to provide useful working advice to food businesses of all sizes.
1979-09-01
KEY WORDS: Target Descriptions; GIFT Code; COMGEOM Descriptions; FASTGEN Code. The code which accepts the COMGEOM target description and produces the shotline data is the GIFT code. The GIFT code evolved from ... the COMGEOM/GIFT methodology, while the Navy and Air Force use the PATCH/SHOTGEN-FASTGEN methodology. Lawrence W. Bain, Mathew J. Heisinger
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. By these three criteria, NASA's HZETRN/QMSFRG shows a very high degree of agreement.
The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; descriptive questionnaire.
The National Human Exposure Assessment...
The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments
ERIC Educational Resources Information Center
Moran, Galia S.; Diamond, Gary M.
2006-01-01
The cognitive constructions coding system (CCCS) was designed for coding client's expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…
Using a Corporate Code of Ethics to Assess Students' Ethicality: Implications for Business Education
ERIC Educational Resources Information Center
Persons, Obeua
2009-01-01
The author used a corporate code of ethics as a roadmap to create 18 scenarios for assessing business students' ethicality as measured by their behavioral intention. Using a logistic regression analysis, the author also examined 8 factors that could potentially influence students' ethicality. Results indicate 6 scenarios related to 5 areas of the…
Optical coherence tomography to evaluate variance in the extent of carious lesions in depth.
Park, Kyung-Jin; Schneider, Hartmut; Ziebolz, Dirk; Krause, Felix; Haak, Rainer
2018-05-03
Evaluation of variance in the extent of carious lesions in depth at smooth surfaces within the same ICDAS code group using optical coherence tomography (OCT) in vitro and in vivo. (1) Verification/validation of OCT to assess non-cavitated caries: 13 human molars with ICDAS code 2 at smooth surfaces were imaged using OCT and light microscopy. Regions of interest (ROI) were categorized according to the depth of carious lesions. Agreement between histology and OCT was determined by unweighted Cohen's Kappa and Wilcoxon test. (2) Assessment of 133 smooth surfaces using ICDAS and OCT in vitro, and of 49 surfaces in vivo. ROI were categorized according to caries extent (ICDAS: codes 0-4; OCT: scoring based on lesion depth). A frequency distribution of the OCT scores for each ICDAS code was determined. (1) Histology and OCT agreed moderately (κ = 0.54, p ≤ 0.001) with no significant difference between the two methods (p = 0.25). Most lesions (76.9%; 10 of 13) were scored equally. (2) In vitro, OCT revealed caries in 42% of ROI clinically assessed as sound. OCT detected dentin caries in 40% of ROIs visually assessed as enamel caries. In vivo, large differences between ICDAS and OCT were observed. Carious lesions of ICDAS codes 1 and 2 vary largely in their extent in depth.
A long-term, integrated impact assessment of alternative building energy code scenarios in China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Eom, Jiyong; Evans, Meredydd
2014-04-01
China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption. Since the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions by using a detailed building energy model that represents four distinct climate zones, each with three building types, nested in a long-term integrated assessment framework, GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13%-22% depending on the building code scenario, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement.
Earthquake Early Warning ShakeAlert System: Testing and certification platform
Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah
2017-01-01
Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. 
TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has been designed with flexibility to accommodate significant changes in development of new or modified system code. It is expected that the TCP will continue to evolve along with the ShakeAlert system, and the framework we describe here provides one example of how earthquake early warning systems can be evaluated.
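The threshold-based assessment described above reduces, at a single end-user site, to a four-way classification of predicted versus observed shaking. The sketch below shows the idea only; the threshold semantics are an assumption, not taken from the ShakeAlert specification.

```python
def classify_alert(predicted_intensity, observed_intensity, user_threshold):
    """Four-way outcome for one end-user site and one event."""
    alerted = predicted_intensity >= user_threshold
    shaken = observed_intensity >= user_threshold
    if alerted and shaken:
        return "correct alert"      # user acted, and shaking arrived
    if alerted:
        return "false alert"        # user acted unnecessarily
    if shaken:
        return "missed alert"       # shaking arrived with no warning
    return "correct no-alert"
```

Tabulating these outcomes over a test suite of events gives the rate at which end users would have initiated the appropriate action.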
Audit of accuracy of clinical coding in oral surgery.
Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R
2014-10-01
We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for health resource groupings (HRGs). The financial impact of coding subjectivity, variability and error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness among clinicians of coding issues, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
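As a toy illustration of the pattern-recognition step on scenario outputs, and emphatically not RAVEN's actual API, here is a one-dimensional k-means grouping (Lloyd's algorithm) with invented "peak temperature" outputs:

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Group scalar scenario outputs into k clusters (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # Recompute each center; keep the old one if a cluster empties
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious regimes in hypothetical peak-temperature outputs
outputs = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
centers = kmeans_1d(outputs, k=2)
```

Clustering like this is one way to "recognize patterns" across many sampled scenarios, e.g. separating runs that reach a damage regime from those that do not.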
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Pointer, William David; Sieger, Matt
2016-04-01
The goal of this review is to enable application of codes or software packages for safety assessment of advanced sodium-cooled fast reactor (SFR) designs. To address near-term programmatic needs, the authors have focused on two objectives. First, the authors have focused on identification of requirements for software QA that must be satisfied to enable the application of software to future safety analyses. Second, the authors have collected best practices applied by other code development teams to minimize cost and time of initial code qualification activities and to recommend a path to the stated goal.
The purpose of this SOP is to define the strategy for the Global Coding of Scanned Forms. This procedure applies to the Arizona NHEXAS project and the "Border" study. Keywords: Coding; scannable forms.
The National Human Exposure Assessment Survey (NHEXAS) is a federal interag...
Space Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Rayos, Elonsio M.; Campbell, Charles H.; Rickman, Steven L.; Larsen, Curtis E.
2007-01-01
Complex computer codes are used to estimate thermal and structural reentry loads on the Shuttle Orbiter induced by ice and foam debris impact during ascent. Such debris can create cavities in the Shuttle Thermal Protection System. The sizes and shapes of these cavities are approximated to accommodate a code limitation that requires simple "shoebox" geometries to describe the cavities -- rectangular areas and planar walls that are at constant angles with respect to vertical. These approximations induce uncertainty in the code results. The Modern Design of Experiments (MDOE) has recently been applied to develop a series of resource-minimal computational experiments designed to generate low-order polynomial graduating functions to approximate the more complex underlying codes. These polynomial functions were then used to propagate cavity geometry errors to estimate the uncertainty they induce in the reentry load calculations performed by the underlying code. This paper describes a methodological study focused on evaluating the application of MDOE to future operational codes in a rapid and low-cost way to assess the effects of cavity geometry uncertainty.
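The surrogate-then-propagate idea can be illustrated with a first-order sketch: fit a low-order graduating function to a few code runs, then push an input uncertainty through its slope. All numbers and variable names below are hypothetical, and a real MDOE study would use higher-order polynomials in several factors.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y ≈ a + b*x, a minimal 'graduating function'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical code runs: reentry heating load vs cavity depth
depths = [0.5, 1.0, 1.5, 2.0]
loads = [10.0, 20.0, 30.0, 40.0]
a, b = fit_linear(depths, loads)

sigma_depth = 0.1                  # assumed cavity-depth uncertainty
sigma_load = abs(b) * sigma_depth  # first-order propagation through the surrogate
```

The surrogate is cheap to evaluate, so geometry errors can be propagated without rerunning the expensive underlying code.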
Smith, Katherine C; Cukier, Samantha; Jernigan, David H
2014-10-01
We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
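The component-reliability computation that NESSUS performs with advanced probabilistic algorithms can be caricatured with plain Monte Carlo sampling. The distributions and parameters below are invented for illustration; they are not from NESSUS.

```python
import random

def failure_probability(n=200_000, seed=1):
    """Monte Carlo estimate of P(load > resistance) for a single component."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        resistance = rng.gauss(100.0, 10.0)  # hypothetical strength distribution
        load = rng.gauss(70.0, 10.0)         # hypothetical load distribution
        if load > resistance:
            failures += 1
    return failures / n

pf = failure_probability()
```

For these normal inputs the analytic failure probability is about 0.017; codes like NESSUS exist because brute-force sampling becomes impractical when each "sample" is a full structural analysis.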
Structural reliability assessment capability in NESSUS
NASA Astrophysics Data System (ADS)
Millwater, H.; Wu, Y.-T.
1992-07-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
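The Richardson-extrapolation and GCI steps that such a package automates look like the following on three systematically refined grids. This is a minimal sketch of the standard procedure, not VAVUQ's interface, and the grid solutions are contrived to be second-order accurate.

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from solutions on fine (f1), medium (f2),
    and coarse (f3) grids with constant refinement ratio r."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid, with safety factor fs."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1)

# Contrived solutions with error ~ C*h^2 on h = 0.1, 0.2, 0.4
f1, f2, f3, r = 1.001, 1.004, 1.016, 2
p = observed_order(f1, f2, f3, r)   # should recover ~2
gci = gci_fine(f1, f2, r, p)        # relative numerical-uncertainty band
```

The GCI then serves as the reported lower/upper bound on numerical error for the fine-grid solution.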
Schweizer, Marin L.; Eber, Michael R.; Laxminarayan, Ramanan; Furuno, Jon P.; Popovich, Kyle J.; Hota, Bala; Rubin, Michael A.; Perencevich, Eli N.
2013-01-01
BACKGROUND AND OBJECTIVE Investigators and medical decision makers frequently rely on administrative databases to assess methicillin-resistant Staphylococcus aureus (MRSA) infection rates and outcomes. The validity of this approach remains unclear. We sought to assess the validity of the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code for infection with drug-resistant microorganisms (V09) for identifying culture-proven MRSA infection. DESIGN Retrospective cohort study. METHODS All adults admitted to 3 geographically distinct hospitals between January 1, 2001, and December 31, 2007, were assessed for presence of incident MRSA infection, defined as an MRSA-positive clinical culture obtained during the index hospitalization, and presence of the V09 ICD-9-CM code. The κ statistic was calculated to measure the agreement between presence of MRSA infection and assignment of the V09 code. Sensitivities, specificities, positive predictive values, and negative predictive values were calculated. RESULTS There were 466,819 patients discharged during the study period. Of the 4,506 discharged patients (1.0%) who had the V09 code assigned, 31% had an incident MRSA infection, 20% had prior history of MRSA colonization or infection but did not have an incident MRSA infection, and 49% had no record of MRSA infection during the index hospitalization or the previous hospitalization. The V09 code identified MRSA infection with a sensitivity of 24% (range, 21%–34%) and positive predictive value of 31% (range, 22%–53%). The agreement between assignment of the V09 code and presence of MRSA infection had a κ coefficient of 0.26 (95% confidence interval, 0.25–0.27). CONCLUSIONS In its current state, the ICD-9-CM code V09 is not an accurate predictor of MRSA infection and should not be used to measure rates of MRSA infection. PMID:21460469
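The validation quantities reported here come straight from a 2×2 table of code assignment versus culture-proven infection; a sketch with invented counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 validation table."""
    return {
        "sensitivity": tp / (tp + fn),  # coded patients among true infections
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # true infections among coded patients
        "npv": tn / (tn + fn),
    }

# Invented counts: V09 code assignment vs culture-proven MRSA infection
m = diagnostic_metrics(tp=30, fp=70, fn=95, tn=805)
```

With counts like these, sensitivity and PPV are low even while specificity and NPV look excellent, which is the pattern that makes a code unusable for measuring infection rates.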
Thorogood, Adrian; Joly, Yann; Knoppers, Bartha Maria; Nilsson, Tommy; Metrakos, Peter; Lazaris, Anthoula; Salman, Ayat
2014-12-23
This article outlines procedures for the feedback of individual research data to participants. This feedback framework was developed in the context of a personalized medicine research project in Canada. Researchers in this domain have an ethical obligation to return individual research results and/or material incidental findings that are clinically significant, valid and actionable to participants. Communication of individual research data must proceed in an ethical and efficient manner. Feedback involves three procedural steps: assessing the health relevance of a finding, re-identifying the affected participant, and communicating the finding. Re-identification requires researchers to break the code in place to protect participant identities. Coding systems replace personal identifiers with a numerical code. Double coding systems provide added privacy protection by separating research data from personal identifying data with a third "linkage" database. A trusted and independent intermediary, the "keyholder", controls access to this linkage database. Procedural guidelines for the return of individual research results and incidental findings are lacking. This article outlines a procedural framework for the three steps of feedback: assessment, re-identification, and communication. This framework clarifies the roles of the researcher, Research Ethics Board, and keyholder in the process. The framework also addresses challenges posed by coding systems. Breaking the code involves privacy risks and should only be carried out in clearly defined circumstances. Where a double coding system is used, the keyholder plays an important role in balancing the benefits of individual feedback with the privacy risks of re-identification. Feedback policies should explicitly outline procedures for the assessment of findings, and the re-identification and contact of participants. The responsibilities of researchers, the Research Ethics Board, and the keyholder must be clearly defined. 
We provide general guidelines for keyholders involved in feedback. We also recommend that Research Ethics Boards should not be directly involved in the assessment of individual findings. Hospitals should instead establish formal, interdisciplinary clinical advisory committees to help researchers determine whether or not an uncertain finding should be returned.
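The double-coding arrangement described above can be sketched as three separated stores, with the keyholder mediating any re-identification. The identifiers, fields, and approval rule below are illustrative only, not the framework's actual policy logic.

```python
# Research data, the linkage table, and identities are stored apart.
research_db = {"R-17": {"finding": "clinically significant variant",
                        "valid": True, "actionable": True}}
linkage_db = {"R-17": "P-04"}     # accessible to the keyholder only
identity_db = {"P-04": {"name": "participant", "contact": "on file"}}

def reidentify(research_id, finding, keyholder_approves):
    """Break the code only for a valid, actionable finding, and only with
    the keyholder's approval."""
    if not (finding["valid"] and finding["actionable"] and keyholder_approves):
        return None
    return identity_db.get(linkage_db.get(research_id))

person = reidentify("R-17", research_db["R-17"], keyholder_approves=True)
```

The point of the third "linkage" database is visible in the structure: compromising the research store alone reveals no identities, and every re-identification passes through the keyholder.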
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craid A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3 octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it outside experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity at the downstream angles.
The statistical code JeNo v1 was within experimental uncertainty in predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. The shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
Evaluation of the DRAGON code for VHTR design analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division
2006-01-12
This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs. Results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) The preliminary assessment of the nuclear data library currently used with the code and libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) A DRAGON workshop held to discuss the code capabilities for modeling the VHTR.
Dunn, Madeleine J; Rodriguez, Erin M; Miller, Kimberly S; Gerhardt, Cynthia A; Vannatta, Kathryn; Saylor, Megan; Scheule, C Melanie; Compas, Bruce E
2011-06-01
To examine the acceptability and feasibility of coding observed verbal and nonverbal behavioral and emotional components of mother-child communication among families of children with cancer. Mother-child dyads (N=33, children ages 5-17 years) were asked to engage in a videotaped 15-min conversation about the child's cancer. Coding was done using the Iowa Family Interaction Rating Scale (IFIRS). Acceptability and feasibility of direct observation in this population were partially supported: 58% consented and 81% of those (47% of all eligible dyads) completed the task; trained raters achieved 78% agreement in ratings across codes. The construct validity of the IFIRS was demonstrated by expected associations within and between positive and negative behavioral/emotional code ratings and between mothers' and children's corresponding code ratings. Direct observation of mother-child communication about childhood cancer has the potential to be an acceptable and feasible method of assessing verbal and nonverbal behavior and emotion in this population.
Al Jawaldeh, Ayoub; Sayed, Ghada
2018-04-05
Optimal breastfeeding practices and appropriate complementary feeding improve child health, survival and development. The countries of the Eastern Mediterranean Region have made significant strides in formulation and implementation of legislation to protect and promote breastfeeding based on The International Code of Marketing of Breast-milk Substitutes (the Code) and subsequent relevant World Health Assembly resolutions. To assess the implementation of the Code in the Region. Assessment was conducted by the World Health Organization (WHO) Regional Office for the Eastern Mediterranean using a WHO standard questionnaire. Seventeen countries in the Region have enacted legislation to protect breastfeeding. Only 6 countries have comprehensive legislation or other legal measures reflecting all or most provisions of the Code; 4 countries have legal measures incorporating many provisions of the Code; 7 countries have legal measures that contain a few provisions of the Code; 4 countries are currently studying the issue; and only 1 country has no measures in place. Further analysis of the legislation found that the text of articles in the laws fully reflected the Code articles in only 6 countries. Most countries need to revisit and amend existing national legislation to implement fully the Code and relevant World Health Assembly resolutions, supported by systematic monitoring and reporting. Copyright © World Health Organization (WHO) 2018. Some rights reserved. This work is available under the CC BY-NC-SA 3.0 IGO license (https://creativecommons.org/licenses/by-nc-sa/3.0/igo).
A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.
van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum
2015-09-01
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING: ARIZONA LAB DATA (UA-D-13.0)
The purpose of this SOP is to define the coding strategy for Arizona Lab Data. This strategy was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; lab data forms.
The National Human Exposure Assessment Survey (NHEXAS) is a federal ...
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
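A flat-space toy version of such a Monte Carlo transport code, with isotropic scattering and absorption but none of the Kerr geometry, emission physics, or polarization, fits in a few lines. All parameters are illustrative.

```python
import math
import random

def transmit_fraction(tau_slab, albedo, n=50_000, seed=2):
    """Fraction of photons escaping the far face of a 1-D slab of optical
    depth tau_slab; each collision scatters isotropically with probability
    `albedo`, otherwise absorbs the photon."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n):
        tau, mu = 0.0, 1.0                 # depth travelled, direction cosine
        while True:
            # Sample an exponentially distributed free path
            tau += mu * -math.log(1.0 - rng.random())
            if tau >= tau_slab:
                escaped += 1               # out the far face
                break
            if tau < 0.0:
                break                      # back out the near face
            if rng.random() > albedo:
                break                      # absorbed
            mu = 2.0 * rng.random() - 1.0  # isotropic re-emission
    return escaped / n

direct = transmit_fraction(1.0, albedo=0.0)
```

With no scattering the escape fraction approaches exp(-tau), about 0.37 for unit optical depth, and scattering raises it; convergence tests of exactly this kind are how such codes assess their accuracy.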
2011-01-01
Introduction: Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Methods: Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. Results: The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps.
Conclusions: The newly developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available. PMID:21548991
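To illustrate how such maps feed severity measures, a minimal sketch (every code and map entry below is invented; the real dictionary and enhanced maps are far larger, and the ISS shown is the standard Injury Severity Score computed from AIS severity digits):

```python
# Hypothetical AIS98 -> AIS08 entries; real ones come from the AIS08 dictionary
# map and the expert-derived enhanced map described above.
DICTIONARY_MAP = {"450602.2": "450203.2"}
ENHANCED_MAP = {"873000.1": "810602.1"}  # covers AIS98 codes the dictionary map omits

def map_ais98(code):
    # Prefer the dictionary map, fall back to the enhanced map, else unmapped (None).
    return DICTIONARY_MAP.get(code) or ENHANCED_MAP.get(code)

def severity(ais_code):
    # The post-dot digit of an AIS code is its severity (1-6).
    return int(ais_code.split(".")[1])

def iss(codes_by_region):
    # Injury Severity Score: sum of squares of the three highest AIS severities,
    # each taken from a different body region.
    worst = sorted((max(severity(c) for c in cs) for cs in codes_by_region.values()),
                   reverse=True)[:3]
    return sum(s * s for s in worst)
```

Agreement between directly coded and mapped datasets can then be checked on severity summaries such as ISS rather than on the raw codes.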
Palmer, Cameron S; Franklyn, Melanie; Read-Allsopp, Christine; McLellan, Susan; Niggemeyer, Louise E
2011-05-08
Snipelisky, David; Ray, Jordan; Matcha, Gautam; Roy, Archana; Chirila, Razvan; Maniaci, Michael; Bosworth, Veronica; Whitman, Anastasia; Lewis, Patricia; Vadeboncoeur, Tyler; Kusumoto, Fred; Burton, M Caroline
2015-07-01
Code status discussions are important during a hospitalization, yet variation in their practice exists. No data have assessed the likelihood that patients will change code status following a cardiopulmonary arrest. A retrospective review of all patients who experienced a cardiopulmonary arrest between May 1, 2008 and June 30, 2014 at an academic medical center was performed. The proportion of code status modifications to do not resuscitate (DNR) from full code was assessed. Baseline clinical characteristics, resuscitation factors, and 24-h post-resuscitation, hospital, and overall survival rates were compared between the two subsets. A total of 157 patients survived the index event and were included. One hundred and fifteen (73.2%) patients did not have a change in code status following the index event, while 42 (26.8%) changed code status to DNR. Clinical characteristics were similar between subsets, although patients in the change to DNR subset were older (average age 67.7 years) compared to the full code subset (average age 59.2 years; p = 0.005). Patients in the DNR subset had longer overall resuscitation efforts with fewer attempts at defibrillation. Compared to the DNR subset, patients that remained full code demonstrated higher 24-h post-resuscitation (n = 108, 93.9% versus n = 32, 76.2%; p = 0.001) and hospital (n = 50, 43.5% versus n = 6, 14.3%; p = 0.001) survival rates. Patients in the DNR subset were more likely to have neurologic deficits on discharge and shorter overall survival. Patient code status wishes do tend to change during critical periods within a hospitalization, adding emphasis for continued code status evaluation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kharrazi, Rebekah J; Nash, Denis; Mielenz, Thelma J
2015-09-01
To investigate whether changes in death certificate coding and reporting practices explain part or all of the recent increase in the rate of fatal falls in adults aged 65 and older in the United States. Trends in coding and reporting practices of fatal falls were evaluated under mortality coding schemes for International Classification of Diseases (ICD), Ninth Revision (1992-1998) and Tenth Revision (1999-2005). United States, 1992 to 2005. Individuals aged 65 and older with falls listed as the underlying cause of death (UCD) on their death certificates. The primary outcome was annual fatal falls rates per 100,000 U.S. residents aged 65 and older. Coding practice was assessed through analysis of trends in rates of specific UCD fall ICD e-codes over time. Reporting quality was assessed by examining changes in the location on the death certificate where fall e-codes were reported, in particular, the percentage of fall e-codes recorded in the proper location on the death certificate. Fatal falls rates increased over both time periods: 1992 to 1998 and 1999 to 2005. A single falls e-code was responsible for the increasing trend of fatal falls overall from 1992 to 1998 (E888, other and unspecified fall) and from 1999 to 2005 (W18, other falls on the same level), whereas trends for other falls e-codes remained stable. Reporting quality improved steadily throughout the study period. Better reporting quality, not coding practices, contributed to the increasing rate of fatal falls in older adults in the United States from 1992 to 2005. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.
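The rate in question is straightforward to compute; a sketch with purely illustrative numbers (not the study's data):

```python
def rate_per_100k(deaths, population):
    # Annual fatal-fall rate per 100,000 residents aged 65 and older.
    return deaths / population * 100_000

# Illustrative only: 12,800 UCD fall deaths in a 35-million-person 65+ population.
r = rate_per_100k(12_800, 35_000_000)
```

Trend analyses like the study's then compare such rates year by year, stratified by the specific fall e-code.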
ERIC Educational Resources Information Center
Bird, Fiona L.; Yucel, Robyn
2015-01-01
Effective feedback can build self-assessment skills in students so that they become more competent and confident to identify and self-correct weaknesses in their work. In this study, we trialled a feedback code as part of an integrated programme of formative and summative assessment tasks, which provided feedback to first-year students on their…
ERIC Educational Resources Information Center
Haro, Elizabeth K.; Haro, Luis S.
2014-01-01
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
Documents Pertaining to Resource Conservation and Recovery Act Corrective Action Event Codes
Document containing RCRA Corrective Action event codes and definitions, including national requirements, initiating sources, dates, and guidance, from the first facility assessment until the Corrective Action is terminated.
Assessment of the MHD capability in the ATHENA code using data from the ALEX facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, P.A.
1989-03-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.
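For context, MHD pressure-drop models of this kind reduce, in the simplest fully developed case, to a textbook thin-conducting-wall estimate; the sketch below uses that classic high-Hartmann-number relation with purely illustrative property values (it is not ATHENA's actual implementation):

```python
def mhd_dp_dx(sigma_f, u, B, sigma_w, t_w, a):
    """Classic thin-conducting-wall estimate of the fully developed MHD pressure
    gradient (Pa/m) at high Hartmann number:
        dp/dx = sigma_f * u * B**2 * c / (1 + c),  c = sigma_w * t_w / (sigma_f * a)
    with fluid/wall electrical conductivities sigma_f/sigma_w (S/m), mean velocity u,
    transverse field B, wall thickness t_w and duct half-width a."""
    c = sigma_w * t_w / (sigma_f * a)
    return sigma_f * u * B**2 * c / (1 + c)

# Illustrative liquid-metal numbers only:
dp = mhd_dp_dx(sigma_f=3.0e6, u=0.1, B=2.0, sigma_w=1.4e6, t_w=0.002, a=0.05)
```

The estimate is bounded above by sigma_f * u * B**2 (the perfectly conducting-wall limit) and grows with the wall conductance ratio c.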
VICTORIA: A mechanistic model for radionuclide behavior in the reactor coolant system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaperow, J.H.; Bixler, N.E.
1996-12-31
VICTORIA is the U.S. Nuclear Regulatory Commission's (NRC's) mechanistic, best-estimate code for analysis of fission product release from the core and subsequent transport in the reactor vessel and reactor coolant system. VICTORIA requires thermal-hydraulic data (i.e., temperatures, pressures, and velocities) as input. In the past, these data have been taken from the results of calculations from thermal-hydraulic codes such as SCDAP/RELAP5, MELCOR, and MAAP. Validation and assessment of VICTORIA 1.0 have been completed. An independent peer review of VICTORIA, directed by Brookhaven National Laboratory and supported by experts in the areas of fuel release, fission product chemistry, and aerosol physics, has been undertaken. This peer review, which will independently assess the code's capabilities, is nearing completion with the peer review committee's final report expected in December 1996. A limited amount of additional development is expected as a result of the peer review. Following this additional development, the NRC plans to release VICTORIA 1.1 and an updated and improved code manual. Future plans mainly involve use of the code for plant calculations to investigate specific safety issues as they arise. Also, the code will continue to be used in support of the Phebus experiments.
Clinical application of ICF key codes to evaluate patients with dysphagia following stroke
Dong, Yi; Zhang, Chang-Jie; Shi, Jie; Deng, Jinggui; Lan, Chun-Na
2016-01-01
This study aimed to identify and evaluate the International Classification of Functioning (ICF) key codes for dysphagia in stroke patients. Thirty patients with dysphagia after stroke were enrolled in our study. To evaluate the ICF dysphagia scale, 6 scales were used as comparisons, namely the Barthel Index (BI), Repetitive Saliva Swallowing Test (RSST), Kubota Water Swallowing Test (KWST), Frenchay Dysarthria Assessment, Mini-Mental State Examination (MMSE), and the Montreal Cognitive Assessment (MoCA). Multiple regression analysis was performed to quantify the relationship between the ICF scale and the other scales. In addition, 60 ICF scales were analyzed by the least absolute shrinkage and selection operator (LASSO) method. A total of 21 ICF codes were identified, which were closely related with the other scales. These included 13 codes from Body Function, 1 from Body Structure, 3 from Activities and Participation, and 4 from Environmental Factors. A topographic network map with 30 ICF key codes was also generated to visualize their relationships. The number of ICF codes identified is in line with other well-established evaluation methods. The network topographic map generated here could be used as an instruction tool in future evaluations. We also found that attention functions and biting were critical codes of these scales, and could be used as treatment targets. PMID:27661012
Kivisalu, Trisha M; Lewey, Jennifer H; Shaffer, Thomas W; Canfield, Merle L
2016-01-01
The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS analyzes individualized communications given by respondents to each card to code a wide pool of possible variables. Due to the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used and discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for more simplistic coding (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest R-PAS is an effective method with high interrater reliability supporting its empirical basis.
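The two agreement statistics named above can be sketched as follows (a minimal implementation: simple percent agreement plus a one-way random-effects ICC(1,1); the two-way ICC variants SPSS offers differ in detail):

```python
import numpy as np

def percent_agreement(coder_a, coder_b):
    # Share of responses the two coders scored identically, in percent.
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    return float(np.mean(a == b) * 100.0)

def icc1(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_raters) array:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    msb = k * np.sum((r.mean(axis=1) - grand) ** 2) / (n - 1)
    msw = np.sum((r - r.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between the initial and blind coder yields 100% agreement and an ICC of 1; chance-level consistency drives the ICC toward 0.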
A Numerical Study of the Effects of Curvature and Convergence on Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Reynolds, R.; White, C.
1987-01-01
An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
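A parameter central to dilution-jet correlations of this kind is the jet-to-mainstream momentum-flux ratio; a sketch with invented values (the report's actual conditions are not reproduced here):

```python
def momentum_flux_ratio(rho_j, v_j, rho_m, v_m):
    # Jet-to-mainstream momentum-flux ratio J = (rho_j * v_j**2) / (rho_m * v_m**2),
    # a dominant correlating parameter in dilution-jet mixing studies.
    return (rho_j * v_j**2) / (rho_m * v_m**2)

# Cool (denser) jets injected into a hot, lighter mainstream; illustrative values:
J = momentum_flux_ratio(rho_j=2.4, v_j=60.0, rho_m=0.8, v_m=30.0)
```

Higher J drives deeper jet penetration into the mainstream before the cross-flow bends the jet over.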
Ross, Jaclyn M.; Girard, Jeffrey M.; Wright, Aidan G.C.; Beeney, Joseph E.; Scott, Lori N.; Hallquist, Michael N.; Lazarus, Sophie A.; Stepp, Stephanie D.; Pilkonis, Paul A.
2016-01-01
Relationships are among the most salient factors affecting happiness and wellbeing for individuals and families. Relationship science has identified the study of dyadic behavioral patterns between couple members during conflict as an important window into relational functioning with both short-term and long-term consequences. Several methods have been developed for the momentary assessment of behavior during interpersonal transactions. Among these, the most popular is the Specific Affect Coding System (SPAFF), which organizes social behavior into a set of discrete behavioral constructs. This study examines the interpersonal meaning of the SPAFF codes through the lens of interpersonal theory, which uses the fundamental dimensions of Dominance and Affiliation to organize interpersonal behavior. A sample of 67 couples completed a conflict task, which was video recorded and coded using SPAFF and a method for rating momentary interpersonal behavior, the Continuous Assessment of Interpersonal Dynamics (CAID). Actor partner interdependence models in a multilevel structural equation modeling framework were used to study the covariation of SPAFF codes and CAID ratings. Results showed that a number of SPAFF codes had clear interpersonal signatures, but many did not. Additionally, actor and partner effects for the same codes were strongly consistent with interpersonal theory’s principle of complementarity. Thus, findings reveal points of convergence and divergence in the two systems and provide support for central tenets of interpersonal theory. Future directions based on these initial findings are discussed. PMID:27148786
Assessing Teachers' Science Content Knowledge: A Strategy for Assessing Depth of Understanding
NASA Astrophysics Data System (ADS)
McConnell, Tom J.; Parker, Joyce M.; Eberhardt, Jan
2013-06-01
One of the characteristics of effective science teachers is a deep understanding of science concepts. The ability to identify, explain and apply concepts is critical in designing, delivering and assessing instruction. Because some teachers have not completed extensive courses in some areas of science, especially in middle and elementary grades, many professional development programs attempt to strengthen teachers' content knowledge. Assessing this content knowledge is challenging. Concept inventories are reliable and efficient, but do not reveal depth of knowledge. Interviews and observations are time-consuming. The Problem Based Learning Project for Teachers implemented a strategy that includes pre-post instruments in eight content strands that permits blind coding of responses and comparison across teachers and groups of teachers. The instruments include two types of open-ended questions that assess both general knowledge and the ability to apply Big Ideas related to specific science topics. The coding scheme is useful in revealing patterns in prior knowledge and learning, and identifying ideas that are challenging or not addressed by learning activities. The strengths and limitations of the scoring scheme are identified through comparison of the findings to case studies of four participating teachers from middle and elementary schools. The cases include examples of coded pre- and post-test responses to illustrate some of the themes seen in teacher learning. The findings raise questions for future investigation that can be conducted using analyses of the coded responses.
The small stellated dodecahedron code and friends.
Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M
2018-07-13
We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity check measurements) in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Authors.
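The storage-density comparison quoted above follows directly from the stated qubit counts; a quick check:

```python
# Parameters quoted in the abstract: data qubits n, logical qubits k, ancillas.
ssd = {"n": 30, "k": 8, "ancilla": 22}     # small stellated dodecahedron code
surface = {"n": 9, "k": 1, "ancilla": 8}   # distance-3 surface code

def qubits_per_logical(code):
    # Total physical qubits (data + ancilla) per encoded logical qubit.
    return (code["n"] + code["ancilla"]) / code["k"]
```

At equal distance, the dodecahedron code spends 6.5 physical qubits per logical qubit versus 17 for the surface code, the density advantage the paper trades against decoder complexity and connectivity.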
Capabilities overview of the MORET 5 Monte Carlo code
NASA Astrophysics Data System (ADS)
Cochet, B.; Jinaphanh, A.; Heulers, L.; Jacquet, O.
2014-06-01
The MORET code is a simulation tool that solves the transport equation for neutrons using the Monte Carlo method. It allows users to model complex three-dimensional geometrical configurations, describe the materials, and define their own tallies in order to analyse the results. The MORET code was initially designed to perform calculations for criticality safety assessments. New features have been introduced in the MORET 5 code to expand its use for reactor applications. This paper presents an overview of the MORET 5 code capabilities, going through the description of materials, the geometry modelling, the transport simulation and the definition of the outputs.
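A toy illustration of the Monte Carlo transport idea behind such codes (a 1-D slab transmission estimate with isotropic scattering, nothing like the real 3-D treatment; the cross-sections are invented):

```python
import math
import random

def slab_transmission(sigma_t, sigma_s, thickness, n=100_000, seed=1):
    """Fraction of normally incident neutrons transmitted through a 1-D slab.
    sigma_t / sigma_s: total / scattering macroscopic cross-sections (1/cm);
    free flights are sampled from the exponential distribution and scattering
    redirects the neutron isotropically."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                                   # position, direction cosine
        while True:
            x += mu * (-math.log(rng.random()) / sigma_t)  # sample a free flight
            if x >= thickness:
                transmitted += 1
                break
            if x < 0:
                break                                      # escaped the near face
            if rng.random() < sigma_s / sigma_t:
                mu = 2.0 * rng.random() - 1.0              # isotropic scatter
            else:
                break                                      # absorbed
    return transmitted / n

# Pure absorber: the estimate should approach exp(-sigma_t * thickness).
est = slab_transmission(sigma_t=1.0, sigma_s=0.0, thickness=2.0)
```

Production codes layer combinatorial 3-D geometry, continuous-energy physics and variance reduction on top of this same sampling loop.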
Chun, Guan-Chun; Chiang, Hsing-Jung; Lin, Kuan-Hung; Li, Chien-Ming; Chen, Pei-Jarn; Chen, Tainsong
2015-01-01
The biomechanical properties of soft tissues vary with pathological phenomena. Ultrasound elasticity imaging is a noninvasive method used to analyze the local biomechanical properties of soft tissues in clinical diagnosis. However, the echo signal-to-noise ratio (eSNR) is diminished because of the attenuation of ultrasonic energy by soft tissues. Therefore, to improve the quality of elastography, the eSNR and depth of ultrasound penetration must be increased using chirp-coded excitation. Moreover, the low axial resolution of ultrasound images generated by a chirp-coded pulse must be increased using an appropriate compression filter. The main aim of this study is to develop an ultrasound elasticity imaging system with chirp-coded excitation using a Tukey window for assessing the biomechanical properties of soft tissues. In this study, we propose an ultrasound elasticity imaging system equipped with a 7.5-MHz single-element transducer and polymethylpentene compression plate to measure strains in soft tissues. Soft tissue strains were analyzed using cross correlation (CC) and absolute difference (AD) algorithms. The optimal parameters of CC and AD algorithms used for the ultrasound elasticity imaging system with chirp-coded excitation were determined by measuring the elastographic signal-to-noise ratio (SNRe) of a homogeneous phantom. Moreover, chirp-coded excitation and short pulse excitation were used to measure the elasticity properties of the phantom. The elastographic qualities of the tissue-mimicking phantom were assessed in terms of Young’s modulus and elastographic contrast-to-noise ratio (CNRe). The results show that the developed ultrasound elasticity imaging system with chirp-coded excitation modulated by a Tukey window can acquire accurate, high-quality elastography images. PMID:28793718
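The pulse-compression step can be sketched with a linear chirp, a Tukey taper and a matched filter (all parameters below are invented; only the 7.5 MHz transducer band is taken from the abstract):

```python
import numpy as np

def tukey(n, alpha=0.25):
    # Tukey (tapered-cosine) window; alpha is the fraction of the window tapered.
    x = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    lo = x < alpha / 2
    hi = x >= 1 - alpha / 2
    w[lo] = 0.5 * (1 + np.cos(np.pi * (2 * x[lo] / alpha - 1)))
    w[hi] = 0.5 * (1 + np.cos(np.pi * (2 * x[hi] / alpha - 2 / alpha + 1)))
    return w

# Illustrative pulse parameters:
fs, T = 40e6, 5e-6                        # 40 MHz sampling, 5 us chirp
t = np.arange(int(fs * T)) / fs
f0, f1 = 5e6, 10e6                        # sweep spanning a 7.5 MHz transducer band
pulse = tukey(t.size) * np.cos(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# A noiseless echo delayed by 60 samples, compressed with a matched filter:
delay = 60
rx = np.concatenate([np.zeros(delay), pulse, np.zeros(40)])
compressed = np.correlate(rx, pulse, mode="full")
peak = int(np.argmax(compressed)) - (pulse.size - 1)   # recovered delay
```

Compression concentrates the long pulse's energy back into a sharp peak, recovering axial resolution; the Tukey taper trades a little main-lobe width for lower range sidelobes than a rectangular gate.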
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and solutions from the FPVortex code. The validated code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
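As a small example of the information-theoretic assessment mentioned in (1), the entropy of the gathered image's histogram bounds the rate of any lossless code for it (a first-order, i.i.d.-pixel simplification of the end-to-end analysis):

```python
import numpy as np

def entropy_bits(img):
    """Shannon entropy (bits/pixel) of an 8-bit image's grey-level histogram:
    a lower bound on the average rate of any lossless code under an
    independent-pixel model."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))
```

A flat (constant) image carries 0 bits/pixel; a full 8-bit uniform histogram carries 8, so the measured entropy quantifies how much an image-gathering stage has already decorrelated the scene.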
The purpose of this SOP is to define the coding strategy for the 24-Hour Food Diary. This diary was developed for use during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; 24-hour food diary.
The National Human Exposure Assessment Survey (NHEXAS) i...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon; Smith, Curtis Lee
2016-09-01
This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. The Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are then updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.
Cukier, Samantha; Jernigan, David H.
2014-01-01
Objectives. We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. Methods. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Results. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Conclusions. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health. PMID:24228667
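The reported noncompliance counts translate into small percentages of the census; computed directly from the abstract's figures:

```python
# Counts reported in the abstract above:
total_ads = 1795            # unique advertising creatives, 2008-2010
federal_noncompliant = 23   # ads assessed as noncompliant with federal regulations
industry_noncompliant = 38  # ads assessed as noncompliant with industry codes

def pct(k, n=total_ads):
    # Share of creatives flagged, as a percentage rounded to one decimal.
    return round(100.0 * k / n, 1)
```

About 1.3% and 2.1% noncompliance, respectively, which is why the authors' concern centers on what compliant content still permits rather than on violation rates.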
OLTARIS: On-Line Tool for the Assessment of Radiation in Space
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Blattnig, Steve R.; Clowdsley, Martha S.; Qualls, Garry D.; Sandridge, Chris A.; Simonsen, Lisa C.; Norbury, John W.; Slaba, Tony C.; Walker, Steve A.; Badavi, Francis F.;
2009-01-01
The On-Line Tool for the Assessment of Radiation In Space (OLTARIS) is a World Wide Web based tool that assesses the effects of space radiation on humans in items such as spacecraft, habitats, rovers, and spacesuits. This document explains the basis behind the interface and framework used to input the data, perform the assessment, and output the results to the user, as well as the physics, engineering, and computer science used to develop OLTARIS. The physics is based on the HZETRN2005 and NUCFRG2 research codes. The OLTARIS website is the successor to the SIREST website from the early 2000s. Modifications have been made to the code to enable easy maintenance, additions, and configuration management, along with a more modern web interface. Overall, the code has been verified, tested, and modified to enable faster and more accurate assessments. The next major areas of modification are more accurate transport algorithms, better uncertainty estimates, and electronic response functions. Improvements in the existing algorithms and data occur continuously and are logged in the change log section of the website.
Feasibility and Top Level Design of a Scalable Emergency Response System for Oceangoing Assets
2008-10-20
hazard response. The DC is responsible for the initial response. In a small-scale hazard situation, the DC will assign a Risk Assessment Code (RAC) and ... [fragment of the report's acronym list: Qualification Standard; R&D Research and Development; RAC Risk Assessment Code; RADSAFE Radiological Safety; RAM Rolling Airframe Missile; RFID Radio...] ... easily be used for other environmental remediation efforts including Superfund sites, decommissioned Navy vessels and Brownfield locations, among others
Acoustic Prediction State of the Art Assessment
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2007-01-01
The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.
McKenzie, Kirsten; Walker, Sue; Tong, Shilu
It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded in ICD-9, both manually and by using an automated computer coding program. The diseases most affected by the change from manual to ACS were senile/presenile dementia and pneumonia. The most common ACS underlying-cause code assigned to deaths manually coded as senile dementia was unspecified psychoses (37.2%). Only 12.5% of codes assigned by ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related to both the different interpretation of ICD coding rules between manual and automated coding and different co-morbidities or co-existing conditions among demographic groups.
Tonarelli, Silvina B; Tibbs, Michael; Vazquez, Gabriela; Lakshminarayan, Kamakshi; Rodriguez, Gustavo J; Qureshi, Adnan I
2012-02-01
A new International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code, V45.88, was approved by the Centers for Medicare and Medicaid Services (CMS) on October 1, 2008. This code identifies patients in whom intravenous (IV) recombinant tissue plasminogen activator (rt-PA) is initiated in one hospital's emergency department, followed by transfer within 24 hours to a comprehensive stroke center, a paradigm commonly referred to as "drip-and-ship." This study assessed the use and accuracy of the new V45.88 code for identifying ischemic stroke patients who meet the criteria for drip-and-ship at 2 advanced certified primary stroke centers. Consecutive patients over a 12-month period were identified by primary ICD-9-CM diagnosis codes related to ischemic stroke. The accuracy of V45.88 code utilization using administrative data provided by Health Information Management Services was assessed through a comparison with data collected in prospective stroke registries maintained at each hospital by a trained abstractor. Out of a total of 428 patients discharged from both hospitals with a diagnosis of ischemic stroke, 37 patients were given ICD-9-CM code V45.88. The internally validated data from the prospective stroke database demonstrated that a total of 40 patients met the criteria for drip-and-ship. A concurrent comparison found that 92% (sensitivity) of the patients treated with drip-and-ship were coded with V45.88. None of the non-drip-and-ship stroke cases received the V45.88 code (100% specificity). The new ICD-9-CM code for drip-and-ship appears to have high specificity and sensitivity, allowing effective data collection by the CMS. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.
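The reported accuracy figures can be reproduced directly from the counts given in the abstract. The sketch below is illustrative only; the confusion-table framing and variable names are ours, not the study's:

```python
# Reproduce the V45.88 sensitivity/specificity from the abstract's counts.
def sensitivity_specificity(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

total_stroke = 428          # ischemic stroke discharges at both hospitals
true_drip_and_ship = 40     # registry-confirmed drip-and-ship cases
coded_v4588 = 37            # cases assigned ICD-9-CM code V45.88

tp = coded_v4588                      # every V45.88 code was a true case
fn = true_drip_and_ship - coded_v4588 # true cases missed by the code
fp = 0                                # no non-case received V45.88
tn = total_stroke - true_drip_and_ship

sens, spec = sensitivity_specificity(tp, fn, fp, tn)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# -> sensitivity = 92.5%, specificity = 100.0%
```

The 92% sensitivity quoted in the abstract is 37/40 rounded down; specificity is exact because no false positives occurred.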
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four-phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code-to-data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
Evidence-Based Reading and Writing Assessment for Dyslexia in Adolescents and Young Adults
Nielsen, Kathleen; Abbott, Robert; Griffin, Whitney; Lott, Joe; Raskind, Wendy; Berninger, Virginia W.
2016-01-01
The same working memory and reading and writing achievement phenotypes (behavioral markers of genetic variants) validated in prior research with younger children and older adults in a multi-generational family genetics study of dyslexia were used to study 81 adolescents and young adults (ages 16 to 25) from that study. Dyslexia is characterized by word reading and spelling skills below the population mean despite an intact ability to use oral language to express thinking. These working memory predictor measures were given and used to predict reading and writing achievement: Coding (storing and processing) heard and spoken words (phonological coding), read and written words (orthographic coding), base words and affixes (morphological coding), and accumulating words over time (syntax coding); Cross-Code Integration (phonological loop for linking phonological name and orthographic letter codes and orthographic loop for linking orthographic letter codes and finger sequencing codes), and Supervisory Attention (focused and switching attention and self-monitoring during written word finding). Multiple regressions showed that most predictors explained individual differences in at least one reading or writing outcome, but which predictors explained unique variance beyond shared variance depended on outcome. ANOVAs confirmed that research-supported criteria for dyslexia validated for younger children and their parents could be used to diagnose which adolescents and young adults did (n=31) or did not (n=50) meet research criteria for dyslexia. Findings are discussed in reference to the heterogeneity of phenotypes (behavioral markers of genetic variables) and their application to assessment for accommodations and ongoing instruction for adolescents and young adults with dyslexia. PMID:26855554
Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?
Hennink, Monique M; Kaiser, Bonnie N; Marconi, Vincent C
2017-03-01
Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.
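One way to make the notion of code saturation concrete is to track when successive interviews stop contributing new codes to the codebook. A minimal sketch, with invented codebook labels rather than data from the study:

```python
# Hypothetical operationalization of "code saturation": the index of the
# last interview that contributed at least one new code to the codebook.
def code_saturation_point(codes_per_interview):
    seen = set()
    last_new = 0  # 1-based index of the last interview adding a new code
    for i, codes in enumerate(codes_per_interview, start=1):
        if set(codes) - seen:   # this interview introduced new codes
            last_new = i
        seen |= set(codes)
    return last_new

interviews = [
    {"stigma", "access"},   # interview 1 introduces two codes
    {"access", "cost"},     # interview 2 adds "cost"
    {"stigma", "cost"},     # interview 3 adds nothing new
    {"cost"},
]
print(code_saturation_point(interviews))  # -> 2
```

Note that this captures only the *range* of issues; meaning saturation, as the authors emphasize, requires judging the richness of each code's content, which is not reducible to a counting rule like this.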
Kassardjian, Charles D; Willems, Jacqueline D; Skrabka, Krystyna; Nisenbaum, Rosane; Barnaby, Judith; Kostyrko, Pawel; Selchen, Daniel; Saposnik, Gustavo
2017-08-01
Stroke is a relatively common and challenging condition in hospitalized patients. Previous studies have shown delays in recognition and assessment of inpatient strokes leading to poor outcomes. The goal of this quality improvement initiative was to evaluate an in-hospital code stroke algorithm and educational program aimed at reducing the response times for inpatient stroke. An inpatient code stroke algorithm was developed, and an educational intervention was implemented over 5 months. Data were recorded and compared between the 36-month period before and the 15-month period after the intervention was implemented. Outcome measures included time from last seen normal to initial assessment and from last seen normal to brain imaging. During the study period, there were 218 inpatient strokes (131 before the intervention and 87 after the intervention). Inpatient strokes were more common on cardiovascular wards (45% of cases) and occurred mainly during the perioperative period (60% of cases). After implementation of an inpatient code stroke intervention and educational initiative, there were consistent reductions in all timed outcome measures (median time to initial assessment fell from 600 [109-1460] to 160 [35-630] minutes and time to computed tomographic scan fell from 925 [213-1965] to 348.5 [128-1587] minutes). Our study reveals the efficacy of an inpatient code stroke algorithm and educational intervention directed at nurses and allied health personnel to optimize the prompt management of inpatient strokes. Prompt assessment may lead to faster stroke interventions, which are associated with better outcomes. © 2017 American Heart Association, Inc.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Current and anticipated uses of thermal-hydraulic codes in Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Sommer, F.; Depisch, F.
1997-07-01
In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.
Semantic Interoperability of Health Risk Assessments
Rajda, Jay; Vreeman, Daniel J.; Wei, Henry G.
2011-01-01
The health insurance and benefits industry has administered Health Risk Assessments (HRAs) at an increasing rate. These are used to collect data on modifiable health risk factors for wellness and disease management programs. However, there is significant variability in the semantics of these assessments, making it difficult to compare data sets from the output of 2 different HRAs. There is also an increasing need to exchange this data with Health Information Exchanges and Electronic Medical Records. To standardize the data and concepts from these tools, we outline a process to determine presence of certain common elements of modifiable health risk extracted from these surveys. This information is coded using concept identifiers, which allows cross-survey comparison and analysis. We propose that using LOINC codes or other universal coding schema may allow semantic interoperability of a variety of HRA tools across the industry, research, and clinical settings. PMID:22195174
Henneberg, M.F.; Strause, J.L.
2002-01-01
This report presents the instructions required to use the Scour Critical Bridge Indicator (SCBI) Code and Scour Assessment Rating (SAR) calculator developed by the Pennsylvania Department of Transportation (PennDOT) and the U.S. Geological Survey to identify Pennsylvania bridges with excessive scour conditions or a high potential for scour. Use of the calculator will enable PennDOT bridge personnel to quickly calculate these scour indices if site conditions change, new bridges are constructed, or new information needs to be included. Both indices are calculated for a bridge simultaneously because they must be used together to be interpreted accurately. The SCBI Code and SAR calculator program is run by a World Wide Web browser from a remote computer. The user can 1) add additional scenarios for bridges in the SCBI Code and SAR calculator database or 2) enter data for new bridges and run the program to calculate the SCBI Code and calculate the SAR. The calculator program allows the user to print the results and to save multiple scenarios for a bridge.
NASA Astrophysics Data System (ADS)
McNeill, Alexander, III; Balkey, Kenneth R.
1995-05-01
The current inservice inspection activities at a U.S. nuclear facility are based upon the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI. The Code selects examination locations based upon sampling criteria that include component geometry, stress, and usage, among other factors. This can result in a significant number of required examinations. As a result of regulatory action, each nuclear facility has conducted probabilistic risk assessments (PRA) or individual plant examinations (IPE), producing plant-specific risk-based information. Several initiatives have been introduced to apply this new plant risk information. Among these initiatives is risk-based inservice inspection. A code case has been introduced for piping inspections based upon this new risk-based technology. This effort, brought forward to the ASME Section XI Code committee, has been initiated and championed by the ASME Research Task Force on Risk-Based Inspection Guidelines -- LWR Nuclear Power Plant Application. Preliminary assessments associated with the code case have revealed that a risk-based inservice inspection program offers potential advantages with regard to the number of examinations, risk, personnel exposure, and cost.
Chronic myelogenous leukemia in eastern Pennsylvania: an assessment of registry reporting.
Mertz, Kristen J; Buchanich, Jeanine M; Washington, Terri L; Irvin-Barnwell, Elizabeth A; Woytowitz, Donald V; Smith, Roy E
2015-01-01
Chronic myelogenous leukemia (CML) has been reportable to the Pennsylvania Cancer Registry (PCR) since the 1980s, but the completeness of reporting is unknown. This study assessed CML reporting in eastern Pennsylvania where a cluster of another myeloproliferative neoplasm was previously identified. Cases were identified from 2 sources: 1) PCR case reports for residents of Carbon, Luzerne, or Schuylkill County with International Classification of Diseases for Oncology, Third Edition (ICD-O-3) codes 9875 (CML, BCR-ABL+), 9863 (CML, NOS), and 9860 (myeloid leukemia) and date of diagnosis 2001-2009, and 2) review of billing records at hematology practices. Participants were interviewed and their medical records were reviewed by board-certified hematologists. PCR reports included 99 cases coded 9875 or 9863 and 9 cases coded 9860; 2 additional cases were identified by review of billing records. Of the 110 identified cases, 93 were mailed consent forms, 23 consented, and 12 medical records were reviewed. Hematologists confirmed 11 of 12 reviewed cases as CML cases; all 11 confirmed cases were BCR/ABL positive, but only 1 was coded as positive (code 9875). Very few unreported CML cases were identified, suggesting relatively complete reporting to the PCR. Cases reviewed were accurately diagnosed, but ICD-O-3 coding often did not reflect BCR-ABL-positive tests. Cancer registry abstracters should look for these test results and code accordingly.
RMP Guidance for Warehouses - Appendix A/B: 40 CFR part 68/Selected NAICS Codes
These appendices contain the full text of 40 Code of Federal Regulations Part 68, Chemical Accident Prevention Provisions; which includes hazard assessment, emergency response, substance thresholds, reporting requirements, and the Risk Management Plan.
An Examination of the Reliability of the Organizational Assessment Package (OAP).
1981-07-01
reactivity or pretest sensitization (Bracht and Glass, 1968) may occur. In this case, the change from pretest to posttest can be caused just by the...content items. The blocks for supervisor's code were left blank, work group code was coded as all ones, and each person's seminar number was coded in...63 5 19 .91 .74 5 (Work Group Effectiveness) 822 19 .83 .42 7 17 .90 .57 7 (Job Related Satisfaction) 823 16 .91 .84 2 18 .93 .87 2 (Job Related
Development of probabilistic internal dosimetry computer code
NASA Astrophysics Data System (ADS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, we constructed a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations.
In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
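The Monte Carlo propagation described above can be illustrated with a toy calculation: sample each uncertain component, form the intake and dose per trial, and read percentiles off the resulting distribution. All distribution parameters and the dose coefficient below are invented placeholders, not values from the paper or from any biokinetic model:

```python
# Toy Monte Carlo propagation of bioassay and biokinetic uncertainty
# into a committed-dose distribution (all parameters are placeholders).
import random

random.seed(0)
N = 100_000
doses = []
for _ in range(N):
    measured = random.lognormvariate(0.0, 0.3)     # bioassay result (arbitrary units)
    m_t = random.lognormvariate(-2.0, 0.5)         # retention fraction at measurement time
    dose_coeff = random.lognormvariate(-1.0, 0.2)  # dose per unit intake (placeholder)
    intake = measured / m_t                        # intake inferred from the bioassay
    doses.append(intake * dose_coeff)

doses.sort()
pct = lambda p: doses[int(p / 100 * N)]            # simple empirical percentile
print(f"median = {pct(50):.3g}, 95th percentile = {pct(95):.3g}")
```

A high percentile of the distribution, rather than a point estimate, can then drive decisions such as the intervention threshold mentioned above.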
Bumper 3 Update for IADC Protection Manual
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim
2016-01-01
The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Units (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it to other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified.
Note that the Bumper II program is no longer maintained or distributed by NASA.
An Overview of the Greyscales Lethality Assessment Methodology
2011-01-01
The code has already been integrated into the Weapon Systems Division MECA and DUEL missile engagement simulations and is capable of being incorporated into a variety of other simulations.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.
1985-01-01
The hybrid-upwind finite difference schemes employed in generally available combustor codes possess excessive numerical diffusion errors which preclude accurate quantitative calculations. The present study has as its primary objective the identification and assessment of an improved solution algorithm as well as discretization schemes applicable to analysis of turbulent viscous recirculating flows. The assessment is carried out primarily in two-dimensional/axisymmetric geometries with a view to identifying an appropriate technique to be incorporated in a three-dimensional code.
Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan
2013-06-01
There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
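Interrater reliability of the kind reported here (percent agreement) can be computed over the presence/absence decision for each technique in the taxonomy. A minimal sketch with invented BCT labels; the figures are illustrative, not the study's data:

```python
# Percent agreement between two coders applying a BCT taxonomy to one
# transcript: agreement counted over every presence/absence decision.
def percent_agreement(coder_a, coder_b, all_bcts):
    agree = sum((bct in coder_a) == (bct in coder_b) for bct in all_bcts)
    return agree / len(all_bcts)

taxonomy = {f"BCT{i}" for i in range(1, 54)}   # 53 techniques, as in the taxonomy
coder_a = {"BCT1", "BCT4", "BCT10", "BCT12"}   # hypothetical coding of one session
coder_b = {"BCT1", "BCT4", "BCT10", "BCT25"}   # second coder disagrees on two BCTs

print(f"agreement = {percent_agreement(coder_a, coder_b, taxonomy):.0%}")
```

Because most of the 53 techniques are absent in both codings, raw percent agreement runs high; chance-corrected statistics such as Cohen's kappa are often preferred for sparse codings.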
NASA Astrophysics Data System (ADS)
Mosunova, N. A.
2018-05-01
The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and uranium dioxide, mixed oxide, or nitride uranium-plutonium fuel pellets under normal operation, under anticipated operational occurrences and accident conditions by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. Modeled objects, equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic, neutronics, fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and also the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, using which it is possible to describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of the reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.
The Relationship Between Financial Incentives and Quality of Diabetes Care in Ontario, Canada
Kiran, Tara; Victor, J. Charles; Kopp, Alexander; Shah, Baiju R.; Glazier, Richard H.
2012-01-01
OBJECTIVE We assessed the impact of a diabetes incentive code introduced for primary care physicians in Ontario, Canada, in 2002 on quality of diabetes care at the population and patient level. RESEARCH DESIGN AND METHODS We analyzed administrative data for 757,928 Ontarians with diabetes to examine the use of the code and receipt of three evidence-based monitoring tests from 2006 to 2008. We assessed testing rates over time and before and after billing of the incentive code. RESULTS One-quarter of Ontarians with diabetes had an incentive code billed by their physician. The proportion receiving the optimal number of all three monitoring tests (HbA1c, cholesterol, and eye tests) rose gradually from 16% in 2000 to 27% in 2008. Individuals who were younger, lived in rural areas, were not enrolled in a primary care model, or had a mental illness were less likely to receive all three recommended tests. Patients with higher numbers of incentive code billings in 2006–2008 were more likely to receive recommended testing but also were more likely to have received the highest level of recommended testing prior to introduction of the incentive code. Following the same patients over time, improvement in recommended testing was no greater after billing of the first incentive code than before. CONCLUSIONS The diabetes incentive code led to minimal improvement in quality of diabetes care at the population and patient level. Our findings suggest that physicians who provide the highest quality care prior to incentives may be those most likely to claim incentive payments. PMID:22456866
Information theoretical assessment of image gathering and coding for digital restoration
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.
1990-01-01
The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, P.A.
1988-10-28
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple-geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4 figs., 2 tabs.
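For context, a common textbook closure for fully developed liquid-metal flow in a thin conducting-wall duct at high Hartmann number relates the MHD pressure gradient to the wall conductance ratio. The sketch below uses that closure with illustrative numbers; it is a generic formula, not necessarily the correlation implemented in ATHENA or the conditions tested at ALEX:

```python
# Back-of-the-envelope MHD pressure-drop estimate using the classic
# thin-conducting-wall closure: dp/dx ~ sigma_f * u * B^2 * c / (1 + c).
def mhd_pressure_gradient(sigma_f, u, B, c):
    """Pressure gradient (Pa/m) for fluid conductivity sigma_f (S/m),
    mean velocity u (m/s), transverse field B (T), and wall conductance
    ratio c = sigma_w * t_w / (sigma_f * a)."""
    return sigma_f * u * B**2 * c / (1.0 + c)

# Illustrative inputs (roughly liquid-metal-like, fusion-scale field):
sigma_f = 3.0e6   # S/m, fluid electrical conductivity
u = 0.1           # m/s
B = 5.0           # T
c = 0.05          # dimensionless wall conductance ratio

dpdx = mhd_pressure_gradient(sigma_f, u, B, c)
print(f"dp/dx = {dpdx:.3e} Pa/m")
```

The strong B-squared scaling is why MHD pressure drop dominates hydraulic design of liquid-metal blankets, and why benchmarking against facilities like ALEX matters.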
EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young
2003-02-27
Enhancement of the computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of the degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone, and (3) effects of a time-dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code SAGE can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.
Status of thermalhydraulic modelling and assessment: Open issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.
1997-07-01
This paper presents the status of the physical modelling in present codes used for Nuclear Reactor Thermalhydraulics (TRAC, RELAP 5, CATHARE, ATHLET, ...) and attempts to list the unresolved or partially resolved issues. First, the capabilities and limitations of present codes are presented. They are mainly known from a synthesis of the assessment calculations performed for both separate effect tests and integral effect tests. It is also instructive to list all the assumptions and simplifications that were made in the establishment of the system of equations and of the constitutive relations. Many of the present limitations are associated with physical situations where these assumptions are not valid. Recommendations are then proposed to extend the capabilities of these codes.
A Clustering-Based Approach to Enriching Code Foraging Environment.
Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu
2016-09-01
Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base to developers. This paper contributes a unified code navigation theory in light of the optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developer's behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code), and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization, and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
Reliability in Cross-National Content Analysis.
ERIC Educational Resources Information Center
Peter, Jochen; Lauf, Edmund
2002-01-01
Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…
NASA Technical Reports Server (NTRS)
Eklund, Dean R.; Northam, G. B.; Mcdaniel, J. C.; Smith, Cliff
1992-01-01
A CFD (Computational Fluid Dynamics) competition was held at the Third Scramjet Combustor Modeling Workshop to assess the current state-of-the-art in CFD codes for the analysis of scramjet combustors. Solutions from six three-dimensional Navier-Stokes codes were compared for the case of staged injection of air behind a step into a Mach 2 flow. This case was investigated experimentally at the University of Virginia and extensive in-stream data was obtained. Code-to-code comparisons have been made with regard to both accuracy and efficiency. The turbulence models employed in the solutions are believed to be a major source of discrepancy between the six solutions.
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
[Forensic-psychiatric assessment of pedophilia].
Nitschke, J; Osterheider, M; Mokros, A
2011-09-01
The present paper illustrates the approach of a forensic psychiatric expert witness to the assessment of pedophilia. In a first step, it must be determined whether the defendant suffers from pedophilia or whether the alleged crime may have been committed from other motivations (antisociality, sexual activity as redirection, impulsivity); a sound diagnostic assessment is indispensable for this task. In a second step, the level of severity needs to be gauged in order to clarify whether the entry criteria of §§ 20, 21 of the German penal code are fulfilled. In a third step, significant impairments of self-control mechanisms need to be elucidated; the present article reviews indicators of such impairments with regard to pedophilia. With respect to a mandatory treatment order (§ 63 German penal code) or preventive detention (§ 66 German penal code), the legal prognosis of the defendant needs to be considered. The paper gives an overview of the current state of risk-assessment research and critically discusses the transfer to an individual prognosis. © Georg Thieme Verlag KG Stuttgart · New York.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
NASA Astrophysics Data System (ADS)
Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.
2017-04-01
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
Martin, Billie-Jean; Chen, Guanmin; Graham, Michelle; Quan, Hude
2014-02-13
Obesity is a pervasive problem and a popular subject of academic assessment. The ability to take advantage of existing data, such as administrative databases, to study obesity is appealing. The objective of our study was to assess the validity of obesity coding in an administrative database and compare the association between obesity and outcomes in an administrative database versus registry. This study was conducted using a coronary catheterization registry and an administrative database (Discharge Abstract Database (DAD)). A Body Mass Index (BMI) ≥30 kg/m2 within the registry defined obesity. In the DAD obesity was defined by diagnosis codes E65-E68 (ICD-10). The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of an obesity diagnosis in the DAD was determined using obesity diagnosis in the registry as the referent. The association between obesity and outcomes was assessed. The study population of 17380 subjects was largely male (68.8%) with a mean BMI of 27.0 kg/m2. Obesity prevalence was lower in the DAD than registry (2.4% vs. 20.3%). A diagnosis of obesity in the DAD had a sensitivity 7.75%, specificity 98.98%, NPV 80.84% and PPV 65.94%. Obesity was associated with decreased risk of death or re-hospitalization, though non-significantly within the DAD. Obesity was significantly associated with an increased risk of cardiac procedure in both databases. Overall, obesity was poorly coded in the DAD. However, when coded, it was coded accurately. Administrative databases are not an optimal data source for obesity prevalence and incidence surveillance but could be used to define obese cohorts for follow-up.
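The validity metrics reported above follow directly from a 2x2 comparison of the administrative database against the registry reference standard. As an illustrative sketch, the cell counts below are back-calculated from the published percentages (total n = 17380, registry prevalence 20.3%), so they are approximate reconstructions, not the study's tabulated data:

```python
# Validity of administrative obesity coding against a registry reference.
# Cell counts are back-calculated from the reported percentages (assumption),
# not taken from the study's published tables.

def validity_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) as fractions from a
    2x2 table: registry diagnosis is the referent, DAD coding is the test."""
    sensitivity = tp / (tp + fn)   # coded obese among truly obese
    specificity = tn / (tn + fp)   # coded non-obese among truly non-obese
    ppv = tp / (tp + fp)           # truly obese among those coded obese
    npv = tn / (tn + fn)           # truly non-obese among those coded non-obese
    return sensitivity, specificity, ppv, npv

if __name__ == "__main__":
    # Reconstructed counts: 3523 obese in registry (20.3% of 17380),
    # 414 coded obese in the DAD (2.4%).
    sens, spec, ppv, npv = validity_metrics(tp=273, fp=141, fn=3250, tn=13716)
    print(f"sensitivity={sens:.2%} specificity={spec:.2%} "
          f"PPV={ppv:.2%} NPV={npv:.2%}")
```

With these reconstructed counts the function reproduces the reported 7.75% sensitivity and 98.98% specificity, which illustrates the paper's conclusion: the code is rarely applied, but accurate when it is.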
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barani, T.; Bruschi, E.; Pizzocri, D.
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the `user effect` is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).
What if pediatric residents could bill for their outpatient services?
Ng, M; Lawless, S T
2001-10-01
We prospectively studied the billing and coding practices of pediatric residents in outpatient clinics and extrapolated our results to assess the financial implications of billing inaccuracies. Using Medicare as a common measure of "currency," we also used the relative value unit (RVU) and ambulatory payment class methodologies to assess the productivity and financial value of resident-staffed pediatric clinics. Residents were asked to voluntarily submit shadow billing forms and documentation of outpatient clinic visits. Documentation of work was assessed by a blinded reviewer, and current procedural terminology evaluation and management codes were assigned. Comparisons between resident codes and calculated codes were made. Financial implications of physician productivity were calculated in terms of dollar amounts and RVUs. Resource intensity was measured using the ambulatory payment class methodology. A total of 344 charts were reviewed. Coding agreement for health maintenance visits was 86%, whereas agreement for acute care visits was 38%. Eighty-three percent of coding disagreement in the latter group resulted from undercoding by residents. Errors accounted for a 4.79% difference in potential reimbursement for all visit types and a 19.10% difference for acute care visits. No significant differences in shadow billing discrepancies were found between different levels of training. Residents were predicted to generate $67 230, $87 593, and $96 072 in Medicare revenue in the outpatient clinic setting during each successive year of training. On average, residents generated 1.17 +/- 0.01 and 0.81 +/- 0.02 work RVUs for each health maintenance visit and office visit, respectively. Annual productivity from outpatient clinic settings was estimated at 548, 735, and 893 work RVUs in postgraduate levels 1, 2, and 3, respectively.
When pediatric residents are not trained adequately in proper coding practices, the potential for billing discrepancies is high and potential reimbursement differences may be substantial. Discussion of financial issues should be considered in curriculum development.
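The undercoding effect described above can be sketched as a comparison between resident-assigned and reviewer-calculated evaluation and management codes. The CPT codes and fee amounts below are hypothetical placeholders for illustration only (the study used Medicare RVU and ambulatory payment class methodologies, not this toy fee schedule):

```python
# Sketch of a shadow-billing comparison: resident-assigned codes versus the
# codes a blinded reviewer calculated from the documentation.
# The fee schedule is a made-up placeholder, not actual Medicare amounts.

FEES = {"99212": 25.0, "99213": 45.0, "99214": 70.0, "99215": 105.0}

def billing_gap(resident_codes, reviewer_codes):
    """Return (coding agreement rate, % of potential reimbursement lost)
    for paired per-visit code assignments."""
    assert len(resident_codes) == len(reviewer_codes)
    agree = sum(r == c for r, c in zip(resident_codes, reviewer_codes))
    billed = sum(FEES[c] for c in resident_codes)      # what residents coded
    calculated = sum(FEES[c] for c in reviewer_codes)  # what was documented
    pct_lost = 100.0 * (calculated - billed) / calculated
    return agree / len(resident_codes), pct_lost

if __name__ == "__main__":
    # Tiny illustrative chart sample: residents undercode two of four visits.
    res = ["99212", "99213", "99213", "99214"]
    rev = ["99213", "99213", "99214", "99214"]
    rate, gap = billing_gap(res, rev)
    print(f"agreement={rate:.0%} potential reimbursement lost={gap:.2f}%")
```

A real analysis would, as in the study, weight each code by its work RVUs rather than dollar amounts, but the arithmetic of the reimbursement gap is the same.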
Hunt, Elizabeth A; Walker, Allen R; Shaffner, Donald H; Miller, Marlene R; Pronovost, Peter J
2008-01-01
Outcomes of in-hospital pediatric cardiopulmonary arrest are dismal. Recent data suggest that the quality of basic and advanced life support delivered to adults is low and contributes to poor outcomes, but few data regarding pediatric events have been reported. The objectives of this study were to (1) measure the median elapsed time to initiate important resuscitation maneuvers in simulated pediatric medical emergencies (ie, "mock codes") and (2) identify the types and frequency of errors committed during pediatric mock codes. A prospective, observational study was conducted of 34 consecutive hospital-based mock codes. A mannequin or computerized simulator was used to enact unannounced, simulated crisis situations involving children with respiratory distress or insufficiency, respiratory arrest, hemodynamic instability, and/or cardiopulmonary arrest. Assessment included time elapsed to initiation of specific resuscitation maneuvers and deviation from American Heart Association guidelines. Among the 34 mock codes, the median time to assessment of airway and breathing was 1.3 minutes, to administration of oxygen was 2.0 minutes, to assessment of circulation was 4.0 minutes, to arrival of any physician was 3.0 minutes, and to arrival of first member of code team was 6.0 minutes. Among cardiopulmonary arrest scenarios, elapsed time to initiation of compressions was 1.5 minutes and to request for defibrillator was 4.3 minutes. In 75% of mock codes, the team deviated from American Heart Association pediatric basic life support protocols, and in 100% of mock codes there was a communication error. Alarming delays and deviations occur in the major components of pediatric resuscitation. Future educational and organizational interventions should focus on improving the quality of care delivered during the first 5 minutes of resuscitation. 
Simulation of pediatric crises can identify targets for educational intervention to improve pediatric cardiopulmonary resuscitation and, ideally, outcomes.
Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.
This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.
Lam, Raymond; Kruger, Estie; Tennant, Marc
2014-12-01
One disadvantage of the remarkable achievements in dentistry is that treatment options have never been more varied or confusing. This has made the concept of evidence-based dentistry more applicable to modern dental practice. Despite the merit of the concept, whereby clinical decisions are guided by scientific evidence, establishing that scientific base is problematic, nowhere more so than in modern dentistry, where the gap between rapidly developing products and procedures and their evidence base is widening. Furthermore, the burden of oral disease remains high at the population level. These problems have prompted new approaches to enhancing research. The aim of this paper is to outline how a modified approach to dental coding may benefit clinical and population-level research. Using publicly accessible data obtained from the Australian Chronic Disease Dental Scheme (CDDS) and item codes contained within the Australian Schedule of Dental Services and Glossary, a suggested approach to dental informatics is illustrated. Selected item codes are expanded with the addition of suffixes, which provide circumstantial information to assist in assessing clinical outcomes such as success rates and prognosis. The use of item codes in administering the CDDS yielded a large database of item codes amenable to dental informatics, which has been shown to enhance research at both the clinical and population level. This is a cost-effective method to supplement existing research methods. Copyright © 2014 Elsevier Inc. All rights reserved.
Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data
NASA Technical Reports Server (NTRS)
Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan
1999-01-01
A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.
Medical Surveillance Monthly Report (MSMR). Volume 22, Number 9, September 2015
2015-09-01
MEDICAL SURVEILLANCE MONTHLY REPORT. Contents include an assessment of ICD-9-based case definitions for influenza-like illness surveillance (Angelia A. Eick), which notes that stricter case definitions are appropriate when there is a need to maximize specificity, and which presents (Table 1) the ICD-9 codes for the original influenza-like illness case definition.
Turbofan forced mixer-nozzle internal flowfield. Volume 2: Computational fluid dynamic predictions
NASA Technical Reports Server (NTRS)
Werle, M. J.; Vasta, V. N.
1982-01-01
A general program was conducted to develop and assess a computational method for predicting the flow properties in a turbofan forced-mixer duct. A detailed assessment of the resulting computer code is presented. The code was found to provide excellent predictions of the kinematics of the mixing process throughout the entire length of the mixer nozzle. The thermal mixing process between the hot core and cold fan flows was found to be well represented in the low-speed portion of the flowfield.
Extension of applicable neutron energy of DARWIN up to 1 GeV.
Satoh, D; Sato, T; Endo, A; Matsufuji, N; Takada, M
2007-01-01
The radiation-dose monitor, DARWIN, needs a set of response functions of the liquid organic scintillator to assess a neutron dose. SCINFUL-QMD is a Monte Carlo-based computer code to evaluate the response functions. In order to improve the accuracy of the code, a new light-output function based on experimental data was developed for the production and transport of protons, deuterons, tritons, (3)He nuclei, and alpha particles, and incorporated into the code. The applicable energy of DARWIN was extended to 1 GeV using the response functions calculated by the modified SCINFUL-QMD code.
Olea, Ricardo A.; Luppens, James A.
2014-01-01
3 JORC (Joint Ore Reserves Committee), 2012, Australasian code for reporting of exploration results, mineral resources and ore reserves: Accessed September 2014 at http://www.jorc.org/docs/jorc_code2012.pdf.
78 FR 23497 - Propiconazole; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
...). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS.... Aggregate Risk Assessment and Determination of Safety Section 408(b)(2)(A)(i) of FFDCA allows EPA to... dose at which adverse effects of concern are identified (the LOAEL). Uncertainty/safety factors are...
Code modernization and modularization of APEX and SWAT watershed simulation models
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large- and small-watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovsky, Zachary Kyle; Denman, Matthew R.
It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
Modeling of two-phase flow instabilities during startup transients utilizing RAMONA-4B methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paniagua, J.; Rohatgi, U.S.; Prasad, V.
1996-10-01
The RAMONA-4B code is currently under development for simulating thermal hydraulic instabilities that can occur in Boiling Water Reactors (BWRs) and the Simplified Boiling Water Reactor (SBWR). As one of the missions of RAMONA-4B is to simulate SBWR startup transients, where geysering or condensation-induced instability may be encountered, the code needs to be assessed for this application. This paper outlines the results of the assessments of the current version of RAMONA-4B and the modifications necessary for simulating geysering or condensation-induced instability. The tests selected for assessment are the geysering tests performed by Prof. Aritomi (1993).
NASA Technical Reports Server (NTRS)
Goldman, L. J.; Seasholtz, R. G.
1982-01-01
Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated from a quasi-three-dimensional inviscid code, a three-dimensional inviscid code, and a three-dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, providing a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.
Trellis phase codes for power-bandwidth efficient satellite communications
NASA Technical Reports Server (NTRS)
Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.
1981-01-01
Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.
Survey of Codes Employing Nuclear Damage Assessment
1977-10-01
The surveyed codes were compared. Codes screened out of the survey included TALLEY/TOTEM (not nuclear), TARTARUS (too highly aggregated: battalion level and above), and UNICORN (a highly aggregated force allocation code). Vulnerability data can be input by the user as he receives them, and there is the ability to replay any situation using hindsight. The age of target…
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video recorded interactions, and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded through administering the scheme on The Observer XT8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours, such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027), predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3- to 5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for other similar coding scheme development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
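The inter- and intra-coder reliability figures above are Cohen's kappa values, which correct raw agreement for agreement expected by chance. A minimal sketch of the computation (the behaviour labels below are hypothetical, not SABICS data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    n = len(coder_a)
    # Observed agreement: fraction of items given the same code by both coders.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical behaviour codes assigned to ten events by two coders.
a = ["instruction", "praise", "praise", "question", "instruction",
     "praise", "question", "instruction", "praise", "question"]
b = ["instruction", "praise", "question", "question", "instruction",
     "praise", "question", "praise", "praise", "question"]
kappa = cohens_kappa(a, b)   # ~0.70 for these labels
```

A coder compared against itself yields exactly 1; values in the 0.66-0.88 range reported above indicate good to strong agreement.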
Activation assessment of the soil around the ESS accelerator tunnel
NASA Astrophysics Data System (ADS)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.; Ene, D.
2018-06-01
Activation of the soil surrounding the ESS accelerator tunnel, calculated by the MARS15 code, is presented. A detailed composition of the soil, comprising about 30 chemical elements, is considered. Spatial distributions of the produced activity are provided in both transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, which is a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
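The buildup-and-decay bookkeeping that a code like DeTra performs per radionuclide can be illustrated, in greatly simplified form, by the standard activation equation for a single nuclide produced at a constant rate (the actual assessment uses a realistic, non-constant irradiation profile and full transmutation chains; all numbers here are hypothetical):

```python
import math

def activity(prod_rate, half_life, t_irr, t_cool):
    """Activity (decays per unit time) of a single radionuclide produced at a
    constant rate prod_rate (atoms per unit time) during an irradiation of
    length t_irr, evaluated after a cooling time t_cool.  All times share the
    half-life's unit."""
    lam = math.log(2.0) / half_life
    # Saturation buildup during irradiation, then pure exponential decay.
    return prod_rate * (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

# Hypothetical case: a 12.3-year nuclide produced over a 40-year facility
# lifetime, assessed 10 years after shutdown.
a_shutdown = activity(1.0e6, 12.3, 40.0, 0.0)
a_10y = activity(1.0e6, 12.3, 40.0, 10.0)
```

At irradiation times long compared with the half-life the activity saturates at the production rate; after shutdown it halves every half-life.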
Validation of Living Donor Nephrectomy Codes
Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.
2018-01-01
Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
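The operating characteristics reported above reduce to simple count ratios against the chart-review reference standard. A sketch with hypothetical counts chosen to be consistent with the reported 97% sensitivity and 90% PPV (the study's actual cell counts are not given in the abstract):

```python
def operating_characteristics(tp, fn, fp):
    """Sensitivity and positive predictive value of a coding algorithm
    against a reference standard (here: verified donor nephrectomies)."""
    sensitivity = tp / (tp + fn)   # true cases found / all true cases
    ppv = tp / (tp + fp)           # true cases found / all records flagged
    return sensitivity, ppv

# Hypothetical counts: the algorithm finds 1163 of the 1199 reference
# nephrectomies and also flags 129 non-donor records.
sens, ppv = operating_characteristics(tp=1163, fn=36, fp=129)
```

Sensitivity penalizes missed donors (false negatives); PPV penalizes wrongly flagged records (false positives), which is why the two can diverge for a given algorithm.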
Funduluka, P; Bosomprah, S; Chilengi, R; Mugode, R H; Bwembya, P A; Mudenda, B
2018-03-01
We sought to assess the level of non-compliance with the International Code of Marketing of Breast-milk Substitutes (BMS) and/or Statutory Instrument (SI) Number 48 of 2006 of the Laws of Zambia in two suburbs, Kalingalinga and Chelstone, in Zambia. This was a cross-sectional survey. Shop owners (80), health workers (8) and mothers (214) were interviewed. BMS labels and advertisements (62) were observed. The primary outcome was mean non-compliance, defined as the number of article violations divided by the total 'obtainable' violations. The score ranges from 0 to 1, with 0 representing no violations in any article and 1 representing violations in all the articles. A total of 62 BMS were assessed. The mean non-compliance score by manufacturers in terms of violations in labelling of BMS was 0.33 (SD = 0.28; 95% CI: 0.26, 0.40). These violations were mainly due to labels containing pictures or graphics representing an infant. Eighty shops were also assessed, with the mean non-compliance score in respect of violations in tie-in sales, special display, and contact with mothers at the shop estimated as 0.14 (SD = 0.14; 95% CI: 0.11, 0.18). Non-compliance with the Code and/or the local SI is high after 10 years of domesticating the Code.
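The primary outcome above is a per-unit ratio of observed to obtainable violations, averaged across the assessed units. A sketch with hypothetical audit data (not the study's):

```python
from statistics import mean, stdev

def non_compliance(violated, obtainable):
    """Score in [0, 1]: 0 = no article violated, 1 = every article violated."""
    return violated / obtainable

# Hypothetical audits: (articles violated, articles applicable) per BMS label.
audits = [(3, 10), (0, 10), (5, 10), (2, 10), (4, 10)]
scores = [non_compliance(v, o) for v, o in audits]
summary = (round(mean(scores), 2), round(stdev(scores), 2))  # mean, spread
```

The reported 0.33 for labels and 0.14 for shops are means of exactly such per-unit scores, with the SD and 95% CI describing their spread.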
NASA Technical Reports Server (NTRS)
Finley, Dennis B.
1995-01-01
This report documents results from the Euler Technology Assessment program. The objective was to evaluate the efficacy of Euler computational fluid dynamics (CFD) codes for use in preliminary aircraft design. Both the accuracy of the predictions and the rapidity of calculations were to be assessed. This portion of the study was conducted by Lockheed Fort Worth Company, using a recently developed in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages for this study, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaptation of the volume grid during the solution convergence to resolve high-gradient flow regions. This proved beneficial in resolving the large vortical structures in the flow for several configurations examined in the present study. The SPLITFLOW code predictions of the configuration forces and moments are shown to be adequate for preliminary design analysis, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The time required to generate the results from initial surface definition is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.; Dickson, T. L.; Yin, S.
The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.
Idaho National Engineering Laboratory code assessment of the Rocky Flats transuranic waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
This report is an assessment of the content codes associated with transuranic waste shipped from the Rocky Flats Plant in Golden, Colorado, to INEL. The primary objective of this document is to characterize and describe the transuranic wastes shipped to INEL from Rocky Flats by item description code (IDC). This information will aid INEL in determining if the waste meets the waste acceptance criteria (WAC) of the Waste Isolation Pilot Plant (WIPP). The waste covered by this content code assessment was shipped from Rocky Flats between 1985 and 1989. These years coincide with the dates for information available in the Rocky Flats Solid Waste Information Management System (SWIMS). The majority of waste shipped during this time was certified to the existing WIPP WAC. This waste is referred to as precertified waste. Reassessment of these precertified waste containers is necessary because of changes in the WIPP WAC. To accomplish this assessment, the analytical and process knowledge available on the various IDCs used at Rocky Flats were evaluated. Rocky Flats sources for this information include employee interviews, SWIMS, Transuranic Waste Certification Program, Transuranic Waste Inspection Procedure, Backlog Waste Baseline Books, WIPP Experimental Waste Characterization Program (headspace analysis), and other related documents, procedures, and programs. Summaries are provided of: (a) certification information, (b) waste description, (c) generation source, (d) recovery method, (e) waste packaging and handling information, (f) container preparation information, (g) assay information, (h) inspection information, (i) analytical data, and (j) RCRA characterization.
Holland Code, Job Satisfaction and Productivity in Clothing Factory Workers.
ERIC Educational Resources Information Center
Heesacker, Martin; And Others
Published research on vocational interests and personality has not often assessed the characteristics of workers and the work environment in blue-collar, women-dominated industries. This study administered the Self-Directed Search (Form E) to 318 sewing machine operators in three clothing factories. Holland codes, productivity, job satisfaction,…
The "Motherese" of Mr. Rogers: A Description of the Dialogue of Educational Television Programs.
ERIC Educational Resources Information Center
Rice, Mabel L.; Haight, Patti L.
Dialogue from 30-minute samples from "Sesame Street" and "Mr. Rogers' Neighborhood" was coded for grammar, content, and discourse. Grammatical analysis used the LINGQUEST computer-assisted language assessment program (Mordecai, Palen, and Palmer 1982). Content coding was based on categories developed by Rice (1984) and…
2010-01-01
Background In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) to investigate the sources of variation in this registration. Methods SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). Results For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered less often (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3).
Conclusions Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found. PMID:20416069
Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf
2010-04-23
In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) to investigate the sources of variation in this registration. SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered less often (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3).
Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found.
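The median odds ratio used above translates the cluster-level variance of a multilevel logistic model onto the odds-ratio scale via MOR = exp(sqrt(2 * variance) * Phi^-1(0.75)). A sketch (the variance values below are back-derived from the reported MORs for illustration; they are not taken from the paper):

```python
import math
from statistics import NormalDist

def median_odds_ratio(variance):
    """MOR for a cluster-level random intercept with the given variance on
    the log-odds scale: the median odds ratio between two randomly chosen
    clusters, ordered so that MOR >= 1."""
    return math.exp(math.sqrt(2.0 * variance) * NormalDist().inv_cdf(0.75))

# Variances of ~2.26 (physician level) and ~0.76 (HCC level) reproduce the
# reported MOR_physician = 4.2 and MOR_HCC = 2.3 (back-derived, illustrative).
mor_physician = median_odds_ratio(2.26)
mor_hcc = median_odds_ratio(0.76)
```

A variance of zero gives MOR = 1 (no between-cluster heterogeneity), which is why the larger physician-level MOR points to physicians as the main source of variation.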
Relativity Screens for Misvalued Medical Services: Impact on Noninvasive Diagnostic Radiology.
Rosenkrantz, Andrew B; Silva, Ezequiel; Hawkins, C Matthew
2017-11-01
In 2006, the AMA/Specialty Society Relative Value Scale Update Committee (RUC) introduced ongoing relativity screens to identify potentially misvalued medical services for payment adjustments. We assess the impact of these screens upon the valuation of noninvasive diagnostic radiology services. Data regarding relativity screens and relative value unit (RVU) changes were obtained from the 2016 AMA Relativity Assessment Status Report. All global codes in the 2016 Medicare Physician Fee Schedule with associated work RVUs were classified as noninvasive diagnostic radiology services versus remaining services. The frequency of having ever undergone a screen was compared between the two groups. Screened radiology codes were further evaluated regarding the RVU impact of subsequent revaluation. Of noninvasive diagnostic radiology codes, 46.0% (201 of 437) were screened versus 22.2% (1,460 of 6,575) of remaining codes (P < .001). Most common screens for which radiology codes were identified as potentially misvalued were (1) high expenditures (27.5%) and (2) high utilization (25.6%). The modality and body region most likely to be identified in a screen were CT (82.1%) and breast (90.9%), respectively. Among screened radiology codes, work RVUs, practice expense RVUs, and nonfacility total RVUs decreased in 20.3%, 65.9%, and 75.3%, respectively. All screened CT, MRI, brain, and spine codes exhibited decreased total RVUs. Policymakers' ongoing search for potentially misvalued medical services has disproportionately impacted noninvasive diagnostic radiology services, risking the introduction of unintended or artificial shifts in physician practice. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
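The headline comparison above, 46.0% (201 of 437) of radiology codes screened versus 22.2% (1,460 of 6,575) of remaining codes, can be checked with a pooled two-proportion z-test (an assumption; the article does not state which test produced P < .001):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic and two-sided normal p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided tail probability
    return z, p

# Screened radiology codes vs screened remaining codes, from the abstract.
z, p = two_proportion_z(201, 437, 1460, 6575)
```

The statistic comes out far beyond the 3.29 threshold for P < .001, consistent with the reported significance.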
NASA Technical Reports Server (NTRS)
Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.
1987-01-01
Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brochard, J.; Charras, T.; Ghoudi, M.
Modifications to a computer code for ductile fracture assessment of piping systems with postulated circumferential through-wall cracks under static or dynamic loading are very briefly described. The modifications extend the capabilities of the CASTEM2000 code to the determination of fracture parameters under creep conditions. The main advantage of the approach is that thermal loads can be evaluated as secondary stresses. The code is applicable to piping systems for which crack propagation predictions differ significantly depending on whether thermal stresses are considered as primary or secondary stresses.
Methodology, status and plans for development and assessment of HEXTRAN, TRAB and APROS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanttola, T.; Rajamaeki, M.; Tiihonen, O.
1997-07-01
A number of transient and accident analysis codes have been developed in Finland during the past twenty years, mainly for the needs of the country's own power plants, but some of the codes have also been utilized elsewhere. The continuous validation, simultaneous development, and experience obtained in commercial applications have considerably improved the performance and range of application of the codes. At present, the methods allow fairly comprehensive accident analyses of the Finnish nuclear power plants.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon
The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.
Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel
2016-11-01
In Canada, recommendations for clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears a valuable option to assess management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes, and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M
2017-05-01
A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code; a bilateral magnitude representation of numbers and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged 18 to 27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations. Copyright © 2017 Elsevier B.V. All rights reserved.
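A common fTCD convention, assumed here since the abstract does not give the formula, expresses a laterality index as the normalized left-minus-right difference in task-related blood-flow response:

```python
def laterality_index(left, right):
    """LI in percent: positive = left-lateralized, negative = right-lateralized."""
    return 100.0 * (left - right) / (left + right)

# Hypothetical task-related velocity increases (%) in the left and right
# middle cerebral arteries during each task block.
li_multiplication = laterality_index(left=4.1, right=2.3)  # clearly left-lateralized
li_subtraction = laterality_index(left=3.0, right=2.9)     # near zero
```

Under this convention, a significantly positive index for multiplication and an index near zero for subtraction would match the pattern the study reports.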
Codes of environmental management practice: Assessing their potential as a tool for change
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, J.; Ehrenfeld, J.
1997-12-31
Codes of environmental management practice emerged as a tool of environmental policy in the late 1980s. Industry and other groups have developed codes for two purposes: to change the environmental behavior of participating firms and to increase public confidence in industry's commitment to environmental protection. This review examines five codes of environmental management practice: Responsible Care, the International Chamber of Commerce's Business Charter for Sustainable Development, ISO 14000, the CERES Principles, and The Natural Step. The first three codes have been drafted and promoted primarily by industry; the others have been developed by non-industry groups. These codes have spurred participating firms to introduce new practices, including the institution of environmental management systems, public environmental reporting, and community advisory panels. The extent to which codes are introducing a process of cultural change is considered in terms of four dimensions: new consciousness, norms, organization, and tools. 94 refs., 3 tabs.
Esophageal function testing: Billing and coding update.
Khan, A; Massey, B; Rao, S; Pandolfino, J
2018-01-01
Esophageal function testing is being increasingly utilized in diagnosis and management of esophageal disorders. There have been several recent technological advances in the field to allow practitioners the ability to more accurately assess and treat such conditions, but there has been a relative lack of education in the literature regarding the associated Common Procedural Terminology (CPT) codes and methods of reimbursement. This review, commissioned and supported by the American Neurogastroenterology and Motility Society Council, aims to summarize each of the CPT codes for esophageal function testing and show the trends of associated reimbursement, as well as recommend coding methods in a practical context. We also aim to encourage many of these codes to be reviewed on a gastrointestinal (GI) societal level, by providing evidence of both discrepancies in coding definitions and inadequate reimbursement in this new era of esophageal function testing. © 2017 John Wiley & Sons Ltd.
Autonomy, responsibility and the Italian Code of Deontology for Nurses.
Barazzetti, Gaia; Radaelli, Stefania; Sala, Roberta
2007-01-01
This article is a first assessment of the Italian Code of deontology for nurses (revised in 1999) on the basis of data collected from focus groups with nurses taking part in the Ethical Codes in Nursing (ECN) project. We illustrate the professional context in which the Code was introduced and explain why the 1999 revision was necessary in the light of changes affecting the Italian nursing profession. The most remarkable findings concern professional autonomy and responsibility, and how the Code is thought of as a set of guidelines for nursing practice. We discuss these issues, underlining that the 1999 Code represents a valuable instrument for ethical reflection and examination, a stimulus for putting the moral sense of the nursing profession into action, and that it represents a new era for professional nursing practice in Italy. The results of the analysis also deserve further qualitative study and future consideration.
Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.; Flores, Jolen
1989-01-01
Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed extend beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling of transition to turbulence needs refinement, though preliminary results are promising.
Assessment of Spacecraft Systems Integration Using the Electric Propulsion Interactions Code (EPIC)
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Kuharski, Robert A.; Mandell, Myron J.; Gardner, Barbara M.; Kauffman, William J. (Technical Monitor)
2002-01-01
SAIC is currently developing the Electric Propulsion Interactions Code 'EPIC', an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of interactions between its subsystems and the plume from an electric thruster. EPIC unites different computer tools to address the complexity associated with the interaction processes. This paper describes the overall architecture and capability of EPIC including the physics and algorithms that comprise its various components. Results from selected modeling efforts of different spacecraft-thruster systems are also presented.
NASA Technical Reports Server (NTRS)
Povinelli, L. A.
1984-01-01
An assessment of several three-dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three-dimensional viscous analysis technique.
1993-10-06
1975) was also used to determine cooked yields from raw ingredients, and appropriate USDA processing codes were selected from the CAN System to estimate... be assumed that there were some losses of the heat-labile vitamins (particularly thiamin and vitamin C). While the USDA processing codes provided for...
Probabilistic Assessment of National Wind Tunnel
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M.; Chamis, C. C.
1996-01-01
A preliminary probabilistic structural assessment of the critical section of the National Wind Tunnel (NWT) is performed using the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) computer code, thereby demonstrating the capability of NESSUS to address reliability issues of the NWT. Uncertainties in the geometry, material properties, loads, and stiffener location on the NWT are considered in the reliability assessment. Probabilistic stress, frequency, buckling, fatigue, and proof-load analyses are performed. These analyses cover the major global and some local design requirements. Based on the assumed uncertainties, the results indicate a minimum reliability of 0.999 for the NWT. Preliminary life prediction results show that the life of the NWT is governed by fatigue of the welds. A reliability-based proof test assessment is also performed.
Phonological, visual, and semantic coding strategies and children's short-term picture memory span.
Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura
2012-01-01
Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.
Current and anticipated uses of thermal hydraulic codes in Korea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyung-Doo; Chang, Won-Pyo
1997-07-01
In Korea, the current uses of thermal hydraulic codes fall into three areas. The first is the design of both nuclear fuel and the NSSS; these codes have usually been introduced through technology transfer programs agreed between KAERI and foreign vendors. The second is utility support of plant operations and licensing. The third is research: in this area, assessments and applications to safety issue resolutions are the major activities, using best-estimate thermal hydraulic codes such as RELAP5/MOD3 and CATHARE2. Recently, KEPCO has planned to couple thermal hydraulic codes with a neutronics code for the design of an evolutionary-type reactor by 2004. KAERI also plans to develop its own best-estimate thermal hydraulic code, although its application range differs from that of the code KEPCO is developing. Considering these activities, it is anticipated that a best-estimate hydraulic analysis code developed in Korea may be used for safety evaluation within 10 years.
Information theoretical assessment of digital imaging systems
NASA Technical Reports Server (NTRS)
John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.
1990-01-01
The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration properly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations.
FLUKA simulation of TEPC response to cosmic radiation.
Beck, P; Ferrari, A; Pelliccioni, M; Rollet, S; Villari, R
2005-01-01
The aircrew exposure to cosmic radiation can be assessed by calculation with codes validated by measurements. However, the relationship between doses in the free atmosphere, as calculated by the codes and from results of measurements performed within the aircraft, is still unclear. The response of a tissue-equivalent proportional counter (TEPC) has already been simulated successfully by the Monte Carlo transport code FLUKA. Absorbed dose rate and ambient dose equivalent rate distributions as functions of lineal energy have been simulated for several reference sources and mixed radiation fields. The agreement between simulation and measurements has been well demonstrated. In order to evaluate the influence of aircraft structures on aircrew exposure assessment, the response of TEPC in the free atmosphere and on-board is now simulated. The calculated results are discussed and compared with other calculations and measurements.
Activation Assessment of the Soil Around the ESS Accelerator Tunnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.
Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed composition of the soil, which comprises about 30 different chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of MARS15. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
Avidan, Alexander; Weissman, Charles; Levin, Phillip D
2015-04-01
Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and user-friendliness of this system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction, only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Seligman, Sarah C; Giovannetti, Tania; Sestito, John; Libon, David J
2014-01-01
Mild functional difficulties have been associated with early cognitive decline in older adults and increased risk for conversion to dementia in mild cognitive impairment, but our understanding of this decline has been limited by a dearth of objective methods. This study evaluated the reliability and validity of a new system to code subtle errors on an established performance-based measure of everyday action and described preliminary findings within the context of a theoretical model of action disruption. Here 45 older adults completed the Naturalistic Action Test (NAT) and neuropsychological measures. NAT performance was coded for overt errors, and subtle action difficulties were scored using a novel coding system. An inter-rater reliability coefficient was calculated. Validity of the coding system was assessed using a repeated-measures ANOVA with NAT task (simple versus complex) and error type (overt versus subtle) as within-group factors. Correlation/regression analyses were conducted among overt NAT errors, subtle NAT errors, and neuropsychological variables. The coding of subtle action errors was reliable and valid, and episodic memory breakdown predicted subtle action disruption. Results suggest that the NAT can be useful in objectively assessing subtle functional decline. Treatments targeting episodic memory may be most effective in addressing early functional impairment in older age.
Bhattacharya, Moumita; Jurkovitz, Claudine; Shatkay, Hagit
2018-04-12
Patients associated with multiple co-occurring health conditions often face aggravated complications and less favorable outcomes. Co-occurring conditions are especially prevalent among individuals suffering from kidney disease, an increasingly widespread condition affecting 13% of the general population in the US. This study aims to identify and characterize patterns of co-occurring medical conditions in patients employing a probabilistic framework. Specifically, we apply topic modeling in a non-traditional way to find associations across SNOMED-CT codes assigned and recorded in the EHRs of >13,000 patients diagnosed with kidney disease. Unlike most prior work on topic modeling, we apply the method to codes rather than to natural language. Moreover, we quantitatively evaluate the topics, assessing their tightness and distinctiveness, and also assess the medical validity of our results. Our experiments show that each topic is succinctly characterized by a few highly probable and unique disease codes, indicating that the topics are tight. Furthermore, inter-topic distance between each pair of topics is typically high, illustrating distinctiveness. Last, most coded conditions grouped together within a topic, are indeed reported to co-occur in the medical literature. Notably, our results uncover a few indirect associations among conditions that have hitherto not been reported as correlated in the medical literature. Copyright © 2018. Published by Elsevier Inc.
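As an illustrative sketch of the approach this abstract describes (topic modeling applied to sets of diagnosis codes rather than to natural language), the following minimal collapsed-Gibbs LDA groups invented placeholder codes into co-occurrence topics. The code names and the toy corpus are hypothetical, not SNOMED-CT data from the study, and this generic sampler is not the authors' actual pipeline:

```python
import numpy as np

def lda_gibbs(docs, vocab, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed-Gibbs LDA over lists of code tokens."""
    rng = np.random.default_rng(seed)
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    n_dk = np.zeros((len(docs), K))   # topic counts per "document" (patient)
    n_kw = np.zeros((K, V))           # code counts per topic
    n_k = np.zeros(K)                 # total tokens per topic
    z = []                            # topic assignment per token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            k = int(rng.integers(K))
            zs.append(k)
            n_dk[d, k] += 1; n_kw[k, widx[w]] += 1; n_k[k] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, v = z[d][i], widx[w]
                n_dk[d, k] -= 1; n_kw[k, v] -= 1; n_k[k] -= 1
                # Collapsed-Gibbs conditional for the token's topic
                p = (n_dk[d] + alpha) * (n_kw[:, v] + beta) / (n_k + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, v] += 1; n_k[k] += 1
    # Smoothed topic-code distributions (each row sums to 1)
    return (n_kw + beta) / (n_kw.sum(1, keepdims=True) + V * beta)

# Synthetic patient records with two co-occurrence clusters of made-up codes
vocab = ["ckd3", "htn", "anemia", "t2dm", "neuropathy", "retinopathy"]
docs = [["ckd3", "htn", "anemia"]] * 8 + [["t2dm", "neuropathy", "retinopathy"]] * 8
tw = lda_gibbs(docs, vocab, K=2)
print(tw.shape)  # (2, 6)
```

Each resulting topic row concentrates probability on a few codes, which is the "tightness" property the abstract evaluates.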
Palmer, Cameron S; Lang, Jacelle; Russell, Glen; Dallow, Natalie; Harvey, Kathy; Gabbe, Belinda; Cameron, Peter
2013-11-01
Many trauma registries have used the 1990 revision of the Abbreviated Injury Scale (AIS; AIS90) to code injuries sustained by trauma patients. Due to changes made to the AIS codeset since its release, AIS90-coded data lacks currency in the assessment of injury severity. The ability to map between the 1998 revision of AIS (AIS98) and the current (2008) AIS version (AIS08) already exists. The development of a map for transforming AIS90-coded data into AIS98 would therefore enable contemporary injury severity estimates to be derived from AIS90-coded data. Differences between the AIS90 and AIS98 codesets were identified, and AIS98 maps were generated for AIS90 codes which changed or were not present in AIS98. The effectiveness of this map in describing the severity of trauma using AIS90 and AIS98 was evaluated using a large state registry dataset, which coded injury data using AIS90 over several years. Changes in Injury Severity Scores (ISS) calculated using AIS90 and mapped AIS98 codesets were assessed using three distinct methods. Forty-nine codes (out of 1312) from the AIS90 codeset changed or were not present in AIS98. Twenty-four codes required the assignment of maps to AIS98 equivalents. AIS90-coded data from 78,075 trauma cases were used to evaluate the map. Agreement in calculated ISS between coded AIS90 data and mapped AIS98 data was very high (kappa=0.971). The ISS changed in 1902 cases (2.4%), and the mean difference in ISS across all cases was 0.006 points. The number of cases classified as major trauma using AIS98 decreased by 0.8% compared with AIS90. A total of 3102 cases (4.0%) sustained at least one AIS90 injury which required mapping to AIS98. This study identified the differences between the AIS90 and AIS98 codesets, and generated maps for the conversion process. In practice, the differences between AIS90- and AIS98-coded data were very small. 
As a result, AIS90-coded data can be mapped to the current AIS version (AIS08) via AIS98, with little apparent impact on the functional accuracy of the mapped dataset produced. Copyright © 2012 Elsevier Ltd. All rights reserved.
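A minimal sketch of the map-then-rescore workflow the abstract describes: translate each AIS90 code through a lookup table, then recompute the Injury Severity Score. The map entries below are invented placeholders (real maps come from the AAAM codesets); the ISS function implements the standard definition, i.e., the sum of squares of the highest severities in the three most severely injured distinct body regions, with 75 assigned when any injury has severity 6:

```python
# Hypothetical AIS90 -> AIS98 code map; real entries come from the AAAM codesets
AIS90_TO_AIS98 = {"450212.2": "450213.2", "816011.3": "816012.3"}

def map_code(code90):
    """Return the AIS98 equivalent, falling back to the unchanged code
    (most codes did not change between revisions)."""
    return AIS90_TO_AIS98.get(code90, code90)

def iss(injuries):
    """Injury Severity Score. `injuries` is a list of (body_region, severity 1-6).
    ISS = sum of squares of the highest severities in the three most severely
    injured distinct body regions; ISS = 75 if any severity is 6."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    worst = {}
    for region, sev in injuries:
        worst[region] = max(worst.get(region, 0), sev)
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

print(iss([("head", 3), ("chest", 4), ("chest", 2), ("abdomen", 2)]))  # 29
```

Comparing ISS computed from the original and the mapped codesets, case by case, is what yields the agreement statistic (kappa) reported in the abstract.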
Beuhler, Michael C; Wittler, Mary A; Ford, Marsha; Dulaney, Anna R
2011-08-01
Many public health entities employ computer-based syndromic surveillance to monitor for aberrations including possible exposures to weapons of mass destruction (WMD). Often, this is done by screening signs and symptoms reported for cases against syndromic definitions. Poison centers (PCs) may offer significant contributions to public health surveillance because of their detailed clinical effect data field coding and real-time data entry. Because improper clinical effect coding may impede syndromic surveillance, it is important to assess this accuracy for PCs. An AAPCC-certified regional PC assessed the accuracy of clinical effect coding by specialists in poison information (SPIs) listening to audio recordings of standard cases. Eighteen different standardized cases were used, consisting of six cyanide, six botulism, and six control cases. Cases were scripted to simulate clinically relevant telephone conversations and converted to audio recordings. Ten SPIs were randomly selected from the center's staff to listen to and code case information from the recorded cases. Kappa scores and the percentage of correctly coding a present clinical effect were calculated for individual clinical effects summed over all test cases along with corresponding 95% confidence intervals. The rate of the case coding by the SPIs triggering the PC's automated botulism and cyanide alerts was also determined. The kappa scores and the percentage of correctly coding a present clinical effect varied depending on the specific clinical effect, with greater accuracy observed for the clinical effects of vomiting and agitation/irritability, and poor accuracy observed for the clinical effects of visual defect and anion gap increase. Lack of correct coding resulted in only 60 and 86% of the cases that met the botulism and cyanide surveillance definitions, respectively, triggering the corresponding alert. 
There was no difference observed in the percentage of coding a present clinical effect between certified (9.0 years experience) and non-certified (2.4 years experience) specialists. There were no cases of coding errors that resulted in the triggering of a false positive alert. The success of syndromic surveillance depends on accurate coding of signs and symptoms. Although PCs generally contribute high-quality data to public health surveillance, it is important to recognize this potential weak link in surveillance methods.
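The coding-accuracy comparison described above rests on Cohen's kappa, chance-corrected agreement between a rater and a reference standard. A minimal implementation follows, with invented present/absent codings standing in for the study's clinical-effect data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' codings of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n      # observed agreement
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)    # chance agreement
             for l in labels)
    return (po - pe) / (1 - pe)

# Reference-standard coding vs a specialist's coding of 10 clinical effects
# (1 = effect coded as present, 0 = absent; values are illustrative)
ref  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
spec = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(ref, spec), 2))  # 0.6
```

Here 8 of 10 items agree (po = 0.8) against 0.5 expected by chance, giving kappa = 0.6, i.e., moderate agreement despite high raw accuracy.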
A Struggle for Dominance: Relational Communication Messages in Television Programming.
ERIC Educational Resources Information Center
Barbatsis, Gretchen S.; And Others
Television's messages about sex role behavior were analyzed by collecting and coding spot samples of the ten top ranked programs in prime viewing time and proportionate numbers of daytime soap operas and Saturday morning children's programs. The content analysis was based on a relational coding system developed to assess interpersonal…
Using Inspections to Improve the Quality of Product Documentation and Code.
ERIC Educational Resources Information Center
Zuchero, John
1995-01-01
Describes how, by adapting software inspections to assess documentation and code, technical writers can collaborate with development personnel, editors, and customers to dramatically improve both the quality of documentation and the very process of inspecting that documentation. Notes that the five steps involved in the inspection process are:…
Quantitative Analysis of Standardized Dress Code and Minority Academic Achievement
ERIC Educational Resources Information Center
Proctor, J. R.
2013-01-01
This study was designed to investigate if a statistically significant variance exists in African American and Hispanic students' attendance and Texas Assessment of Knowledge and Skills test scores in mathematics before and after the implementation of a standardized dress code. For almost two decades supporters and opponents of public school…
Teaching Reading to the Disadvantaged Adult.
ERIC Educational Resources Information Center
Dinnan, James A.; Ulmer, Curtis, Ed.
This manual is designed to assess the background of the individual and to bring him to the stage of unlocking the symbolic codes called Reading and Mathematics. The manual begins with Introduction to a Symbolic Code (The Thinking Process and The Key to Learning Basis), and continues with Basic Reading Skills (Readiness, Visual Discrimination,…
A Coding Scheme for Analysing Problem-Solving Processes of First-Year Engineering Students
ERIC Educational Resources Information Center
Grigg, Sarah J.; Benson, Lisa C.
2014-01-01
This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a…
78 FR 47677 - DOE Activities and Methodology for Assessing Compliance With Building Energy Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... construction. Post- construction evaluations were implemented in one study in an effort to reduce these costs... these pilot studies have led to a number of recommendations and potential changes to the DOE methodology... fundamental assumptions and approaches to measuring compliance with building energy codes. This notice...
Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.
2006-07-01
The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., the tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and of the relevant underlying thermal-hydraulic and heat transfer models. (authors)
NASA Astrophysics Data System (ADS)
Shekhar, Himanshu; Doyley, Marvin M.
2013-03-01
Nonlinear (subharmonic/harmonic) imaging with ultrasound contrast agents (UCA) could characterize the vasa vasorum, which could help assess the risk associated with atherosclerosis. However, the sensitivity and specificity of high-frequency nonlinear imaging must be improved to enable its clinical translation. The current excitation scheme employs sine bursts, a strategy that requires high peak pressures to produce a strong nonlinear response from UCA. In this paper, chirp-coded excitation was evaluated to assess its ability to enhance the subharmonic and harmonic response of UCA. Acoustic measurements were conducted with a pair of single-element transducers at a 10-MHz transmit frequency to evaluate the subharmonic and harmonic response of Targestar-P® (Targeson Inc., San Diego, CA, USA), a commercially available phospholipid-encapsulated contrast agent. The results of this study demonstrated a 2-3-fold reduction in the subharmonic threshold, and a 4-14 dB increase in nonlinear signal-to-noise ratio, with chirp-coded excitation. Therefore, chirp-coded excitation could be well suited for improving the imaging performance of high-frequency harmonic and subharmonic imaging.
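The two excitation waveforms being compared can be sketched as follows; the sample rate and sweep band below are assumed for illustration around the stated 10-MHz transmit frequency, not the study's exact settings:

```python
import numpy as np

fs = 100e6                 # sample rate, Hz (assumed)
n = 1000                   # samples in the burst (10 microseconds)
t = np.arange(n) / fs
T = n / fs                 # burst duration, s
f0, f1 = 8e6, 12e6         # assumed sweep band around the 10-MHz center

tone = np.sin(2 * np.pi * 10e6 * t)                      # conventional sine burst
k = (f1 - f0) / T                                        # linear sweep rate, Hz/s
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))  # linear chirp

# The chirp's instantaneous frequency sweeps f0 -> f1; its energy is spread
# over a band and can be compressed on receive with a matched filter, which
# is why lower peak pressures can still yield a strong agent response.
print(tone.shape, chirp.shape)
```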
Allen, Kevin; Fuchs, Elke C.; Jaschonek, Hannah; Bannerman, David M.; Monyer, Hannah
2011-01-01
Gap junctions containing connexin-36 (Cx36) electrically couple interneurons in many brain regions and synchronize their activity. We used Cx36 knockout mice (Cx36−/−) to study the importance of electrical coupling between interneurons for spatial coding in the hippocampus and for different forms of hippocampus-dependent spatial memory. Recordings in behaving mice revealed that the spatial selectivity of hippocampal pyramidal neurons was reduced and less stable in Cx36−/− mice. Altered network activity was reflected in slower theta oscillations in the mutants. Temporal coding, assessed by determining the presence and characteristics of theta phase precession, had different dynamics in Cx36−/− mice compared to controls. At the behavioral level, Cx36−/− mice displayed impaired short-term spatial memory but normal spatial reference memory. These results highlight the functional role of electrically coupled interneurons for spatial coding and cognition. Moreover, they suggest that the precise spatial selectivity of place cells is not essential for normal performance on spatial tasks assessing associative long-term memory. PMID:21525295
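Spatial selectivity of place cells is commonly quantified with the Skaggs spatial-information score; the abstract does not state which measure was used, so the following is a generic sketch with a toy one-dimensional rate map:

```python
import numpy as np

def spatial_information(rate, occupancy):
    """Skaggs spatial information (bits/spike) of a place cell's rate map:
    sum over bins of p_i * (r_i / r_mean) * log2(r_i / r_mean)."""
    p = occupancy / occupancy.sum()     # occupancy probability per spatial bin
    mean_rate = (p * rate).sum()
    nz = rate > 0                       # skip empty bins (log of 0)
    return (p[nz] * rate[nz] / mean_rate * np.log2(rate[nz] / mean_rate)).sum()

# A sharply tuned (place-selective) cell vs a spatially flat cell,
# over 10 equally occupied bins (all values illustrative)
occ = np.ones(10)
peaked = np.array([0, 0, 0, 1, 8, 1, 0, 0, 0, 0], float)
flat = np.ones(10)
print(spatial_information(peaked, occ) > spatial_information(flat, occ))  # True
```

A reduction in such a score across recordings is one way the reduced, less stable spatial selectivity in the mutants could be expressed quantitatively.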
NASA Technical Reports Server (NTRS)
Finley, Dennis B.; Karman, Steve L., Jr.
1996-01-01
The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.
McEvoy, Matthew D.; Smalley, Jeremy C.; Nietert, Paul J.; Field, Larry C.; Furse, Cory M.; Blenko, John W.; Cobb, Benjamin G.; Walters, Jenna L.; Pendarvis, Allen; Dalal, Nishita S.; Schaefer, John J.
2012-01-01
Introduction Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielding from a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes. Methods The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists. Results Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97). 
Discussion We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA. PMID:22863996
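Lin's concordance correlation coefficient, the agreement measure reported above (0.96), can be computed directly from paired scores; the checklist totals below are invented for illustration:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical MegaCode checklist totals: expert reference vs a trained rater
expert = [38, 41, 35, 44, 40, 37]
rater  = [37, 42, 35, 43, 41, 36]
print(round(lin_ccc(expert, rater), 3))
```

Unlike Pearson correlation, the CCC penalizes systematic offsets between raters, which is why it is preferred for agreement (rather than mere association) between a rater and a reference standard.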
Frequency- and Time-Domain Methods in Soil-Structure Interaction Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolisetti, Chandrakanth; Whittaker, Andrew S.; Coleman, Justin L.
2015-06-01
Soil-structure interaction (SSI) analysis in the nuclear industry is currently performed using linear codes that function in the frequency domain. There is a consensus that these frequency-domain codes give reasonably accurate results for low-intensity ground motions that result in almost linear response. For higher-intensity ground motions, which may result in nonlinear response in the soil, the structure, or the vicinity of the foundation, the adequacy of frequency-domain codes is unproven. Nonlinear analysis, which is only possible in the time domain, is theoretically more appropriate in such cases. These methods are available but are rarely used due to the large computational requirements and a lack of experience among analysts and regulators. This paper presents an assessment of the linear frequency-domain code SASSI, which is widely used in the nuclear industry, and the time-domain commercial finite-element code LS-DYNA, for SSI analysis. The assessment involves benchmarking the SSI analysis procedure in LS-DYNA against SASSI for linearly elastic models. After affirming that SASSI and LS-DYNA result in almost identical responses for these models, they are used to perform nonlinear SSI analyses of two structures founded on soft soil. An examination of the results shows that, in spite of using identical material properties, the predictions of frequency- and time-domain codes are significantly different in the presence of nonlinear behavior such as gapping and sliding of the foundation.
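The linear-model equivalence that the benchmarking step relies on can be illustrated on a single-degree-of-freedom oscillator: for a linear system, the response to an input motion is identical whether computed by time-domain convolution with the impulse response or by multiplying spectra in the frequency domain (the oscillator parameters and random input below are assumed for illustration):

```python
import numpy as np

# Impulse response of a damped SDOF oscillator (illustrative parameters)
fs, n = 200.0, 1024
t = np.arange(n) / fs
wn, zeta = 2 * np.pi * 5.0, 0.05          # 5 Hz natural frequency, 5% damping
wd = wn * np.sqrt(1 - zeta ** 2)          # damped natural frequency
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd / fs   # discrete impulse response

rng = np.random.default_rng(1)
ground = rng.standard_normal(n)           # synthetic input motion

# Time domain: direct convolution. Frequency domain: multiply spectra
# (zero-padded to 2n so the circular convolution equals the linear one).
resp_time = np.convolve(ground, h)[:n]
resp_freq = np.fft.irfft(np.fft.rfft(ground, 2 * n) * np.fft.rfft(h, 2 * n))[:n]

print(np.allclose(resp_time, resp_freq))  # True
```

Once gapping or sliding makes the system nonlinear, no transfer function exists, the spectral product is no longer valid, and only the time-domain route remains, which is the crux of the comparison in this paper.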
Quinot, Catherine; Amsellem-Dubourget, Sylvie; Temam, Sofia; Sevin, Etienne; Barreto, Christine; Tackin, Arzu; Félicité, Jérémy; Lyon-Caen, Sarah; Siroux, Valérie; Girard, Raphaële; Descatha, Alexis; Le Moual, Nicole; Dumas, Orianne
2018-05-14
Healthcare workers are highly exposed to various types of disinfectants and cleaning products. Assessment of exposure to these products remains a challenge. We aimed to investigate the feasibility of a method, based on a smartphone application and bar codes, to improve occupational exposure assessment among hospital/cleaning workers in epidemiological studies. A database of disinfectants and cleaning products used in French hospitals, including their names, bar codes and composition, was developed using several sources: ProdHyBase (a database of disinfectants managed by hospital hygiene experts), and specific regulatory agencies and industrial websites. A smartphone application was created to scan the bar codes of products and fill in a short questionnaire. The ease of use and the ability to record information through this new approach were assessed. The method was tested in a French hospital (7 units, 14 participants). Through the application, 126 records (one record referred to one product entered by one participant/unit) were registered, the majority of which were liquids (55.5%) or sprays (23.8%); 20.6% were used to clean surfaces and 15.9% to clean toilets. Workers used mostly products containing alcohol and quaternary ammonium compounds (>90% with weekly use), followed by hypochlorite bleach and hydrogen peroxide (28.6%). For most records, information was available on the name (93.7%) and bar code (77.0%). Information on product compounds was available for all products and recorded in the database. This innovative and easy-to-use method could help to improve the assessment of occupational exposure to disinfectants/cleaning products in epidemiological studies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
Byrd, Gary D; Devine, Patricia J; Corcoran, Kate E
2014-10-01
The Medical Library Association (MLA) Board of Directors and president charged an Ethical Awareness Task Force and recommended a survey to determine MLA members' awareness of and opinions about the current Code of Ethics for Health Sciences Librarianship. The task force and MLA staff crafted a survey to determine: (1) awareness of the MLA code and its provisions, (2) use of the MLA code to resolve professional ethical issues, (3) consultation of other ethical codes or guides, (4) views regarding the relative importance of the eleven MLA code statements, (5) challenges experienced in following any MLA code provisions, and (6) ethical problems not clearly addressed by the code. Over 500 members responded (similar to previous MLA surveys), and while most were aware of the code, over 30% could not remember when they had last read or thought about it, and nearly half had also referred to other codes or guidelines. The large majority thought that: (1) all code statements were equally important, (2) none were particularly difficult or challenging to follow, and (3) the code covered every ethical challenge encountered in their professional work. Comments provided by respondents who disagreed with the majority views suggest that the MLA code could usefully include a supplementary guide with practical advice on how to reason through a number of ethically challenging situations that are typically encountered by health sciences librarians.
Emergency readmissions to paediatric surgery and urology: The impact of inappropriate coding.
Peeraully, R; Henderson, K; Davies, B
2016-04-01
Introduction In England, emergency readmissions within 30 days of hospital discharge after an elective admission are not reimbursed if they do not meet Payment by Results (PbR) exclusion criteria. However, coding errors could inappropriately penalise hospitals. We aimed to assess the accuracy of coding for emergency readmissions. Methods Emergency readmissions attributed to paediatric surgery and urology between September 2012 and August 2014 to our tertiary referral centre were retrospectively reviewed. PbR coding data were obtained from the hospital's Family Health Directorate. Clinical details were obtained from contemporaneous records. All readmissions were categorised as appropriately coded (postoperative or nonoperative) or inappropriately coded (planned surgical readmission, unrelated surgical admission, unrelated medical admission or coding error). Results Over the 24-month period, 241 patients were coded as 30-day readmissions, with 143 (59%) meeting the PbR exclusion criteria. Of the remaining 98 (41%) patients, 24 (25%) were inappropriately coded as emergency readmissions. These readmissions resulted in 352 extra bed days, of which 117 (33%) were attributable to inappropriately coded cases. Conclusions One-quarter of non-excluded emergency readmissions were inappropriately coded, accounting for one-third of additional bed days. As a stay on a paediatric ward costs up to £500 a day, the potential cost to our institution due to inappropriate readmission coding was over £50,000. Diagnoses and the reason for admission for each care episode should be accurately documented and coded, and readmission data should be reviewed at a senior clinician level.
Bearing performance degradation assessment based on time-frequency code features and SOM network
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei
2017-04-01
Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decision and guaranteeing the system’s reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing real-time behavior and the SOM model that is previously trained with only the TFC vectors under the normal condition. Vibration signals collected from the bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and achieving accurate prediction.
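The pipeline described in this abstract (STFT magnitudes, NMF encoding vectors as "time-frequency codes", quantization error against a codebook trained on healthy data only) can be sketched on synthetic signals. In the sketch below, assuming SciPy and scikit-learn, KMeans stands in for the SOM codebook purely for brevity; the signals, fault model, and all parameters are invented for illustration, not the paper's.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

# Synthetic data: a tonal "healthy" vibration signal and a "degraded" copy
# with periodic impulses mimicking a localized bearing defect.
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 8, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy.copy()
faulty[::100] += 8.0                       # broadband impulses every 0.1 s

def tfc_features(x):
    """Non-negative time-frequency features: magnitude STFT frames."""
    _, _, Z = stft(x, fs=fs, nperseg=256)
    return np.abs(Z).T                     # (n_frames, n_freq_bins)

# NMF basis learned on healthy frames only; codes = encoding vectors
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
codes_h = nmf.fit_transform(tfc_features(healthy))
codes_f = nmf.transform(tfc_features(faulty))

# Codebook trained on healthy codes; quantization error = distance to the
# nearest codebook vector (the TFCQE idea, with KMeans in place of a SOM).
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(codes_h)
qe_h = codebook.transform(codes_h).min(axis=1)
qe_f = codebook.transform(codes_f).min(axis=1)
print(f"mean QE healthy={qe_h.mean():.3f}  degraded={qe_f.mean():.3f}")
```

The degraded signal's frames fall far from the healthy codebook, so the quantization error rises, which is the behavior the paper exploits as a health indicator.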
Likis, Frances E; Sathe, Nila A; Carnahan, Ryan; McPheeters, Melissa L
2013-12-30
To identify and assess diagnosis, procedure and pharmacy dispensing codes used to identify stillbirths and spontaneous abortion in administrative and claims databases from the United States or Canada. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to stillbirth or spontaneous abortion. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics and assessed each study's methodological rigor using a pre-defined approach. Ten publications addressing stillbirth and four addressing spontaneous abortion met our inclusion criteria. The International Classification of Diseases, Ninth Revision (ICD-9) codes most commonly used in algorithms for stillbirth were those for intrauterine death (656.4) and stillborn outcomes of delivery (V27.1, V27.3-V27.4, and V27.6-V27.7). Papers identifying spontaneous abortion used codes for missed abortion and spontaneous abortion: 632, 634.x, as well as V27.0-V27.7. Only two studies identifying stillbirth reported validation of algorithms. The overall positive predictive value of the algorithms was high (99%-100%), and one study reported an algorithm with 86% sensitivity. However, the predictive value of individual codes was not assessed and study populations were limited to specific geographic areas. Additional validation studies with a nationally representative sample are needed to confirm the optimal algorithm to identify stillbirths or spontaneous abortion in administrative and claims databases. Copyright © 2013 Elsevier Ltd. All rights reserved.
Micrometeoroid and Orbital Debris Threat Assessment: Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Hyde, James L.; Bjorkman, Michael D.; Hoffman, Kevin D.; Lear, Dana M.; Prior, Thomas G.
2011-01-01
This report provides results of a Micrometeoroid and Orbital Debris (MMOD) risk assessment of the Mars Sample Return Earth Entry Vehicle (MSR EEV). The assessment was performed using standard risk assessment methodology illustrated in Figure 1-1. Central to the process is the Bumper risk assessment code (Figure 1-2), which calculates the critical penetration risk based on geometry, shielding configurations and flight parameters. The assessment process begins by building a finite element model (FEM) of the spacecraft, which defines the size and shape of the spacecraft as well as the locations of the various shielding configurations. This model is built using the NX I-deas software package from Siemens PLM Software. The FEM is constructed using triangular and quadrilateral elements that define the outer shell of the spacecraft. Bumper-II uses the model file to determine the geometry of the spacecraft for the analysis. The next step of the process is to identify the ballistic limit characteristics for the various shield types. These ballistic limits define the critical size particle that will penetrate a shield at a given impact angle and impact velocity. When the finite element model is built, each individual element is assigned a property identifier (PID) to act as an index for its shielding properties. Using the ballistic limit equations (BLEs) built into the Bumper-II code, the shield characteristics are defined for each and every PID in the model. The final stage of the analysis is to determine the probability of no penetration (PNP) on the spacecraft. This is done using the micrometeoroid and orbital debris environment definitions that are built into the Bumper-II code. These engineering models take into account orbit inclination, altitude, attitude and analysis date in order to predict an impacting particle flux on the spacecraft. 
Using the geometry and shielding characteristics previously defined for the spacecraft and combining that information with the environment model calculations, the Bumper-II code calculates a probability of no penetration for the spacecraft.
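The final step described above, combining per-element fluxes with exposed areas to obtain a probability of no penetration, follows the standard Poisson assumption: each element contributes an expected number of penetrations N_i = F_i * A_i * T, and PNP = exp(-ΣN_i). The sketch below illustrates that arithmetic only; the element fluxes, areas, and mission time are invented and are not Bumper-II outputs.

```python
import math

# Poisson sketch of the final risk-assessment step: each finite element
# contributes an expected number of penetrations
#   N_i = F_i * A_i * T
# where F_i is the flux of particles exceeding that element's ballistic
# limit [1/m^2/yr], A_i its exposed area [m^2], and T the mission time [yr].
# All numbers below are illustrative only.
elements = [
    (2.0e-5, 0.8),   # thinly shielded panel: higher critical flux
    (5.0e-6, 1.5),   # heavier shield -> larger critical particle -> lower flux
    (1.0e-6, 2.0),
]
mission_years = 1.5
n_pen = sum(flux * area * mission_years for flux, area in elements)
pnp = math.exp(-n_pen)
print(f"expected penetrations = {n_pen:.2e}, PNP = {pnp:.6f}")
```

In the real analysis the fluxes come from the micrometeoroid and orbital debris environment models evaluated against each element's ballistic limit equation, not from fixed constants.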
Reformation of Regulatory Technical Standards for Nuclear Power Generation Equipments in Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikio Kurihara; Masahiro Aoki; Yu Maruyama
2006-07-01
Comprehensive reformation of the regulatory system has been introduced in Japan in order to apply recent technical progress in a timely manner. 'The Technical Standards for Nuclear Power Generation Equipments', known as Ordinance No.62 of the Ministry of International Trade and Industry, which is used for the detailed design, construction and operating stages of Nuclear Power Plants, was modified to performance specifications, with consensus codes and standards used as prescriptive specifications, in order to facilitate prompt review of the Ordinance in response to technological innovation. The modification activities were performed by the Nuclear and Industrial Safety Agency (NISA), the regulatory body in Japan, with the support of the Japan Nuclear Energy Safety Organization (JNES), a technical support organization. The revised Ordinance No.62 was issued on July 1, 2005 and has been enforced since January 1, 2006. During the period from issuance to enforcement, JNES prepared an enforceable regulatory guide that complies with each provision of the Ordinance No.62, and also made technical assessments to endorse the applicability of consensus codes and standards, in response to NISA's request. Some consensus codes and standards were re-assessed since they had already been used in regulatory review of construction plans submitted by licensees. Other consensus codes and standards were newly assessed for endorsement. Where proper consensus codes or standards were not available, details of the regulatory requirements were described in the regulatory guide as immediate measures. At the same time, appropriate standards-developing bodies were requested to prepare those consensus codes or standards. A supplementary note providing background information on the modification, applicable examples, etc. was prepared for the convenience of users of the Ordinance No.62.
This paper describes the modification activities and their results, following NISA's presentation at ICONE-13, which introduced the framework of the performance specifications and the modification process of the Ordinance No.62. (authors)
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
Barani, T.; Bruschi, E.; Pizzocri, D.; ...
2017-01-03
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
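The distinction drawn above between slow diffusional release and burst release can be illustrated with a toy calculation: a short-time Booth approximation for diffusional fractional release from an equivalent sphere, plus a step release of part of the retained gas when a transient triggers micro-cracking. This is not the paper's model; the reduced diffusion coefficient, burst fraction, and timing below are invented for illustration.

```python
import numpy as np

# Toy contrast between diffusion-only FGR and diffusion + burst release.
# Booth short-time approximation for fractional release from an equivalent
# sphere: f(t) ~ 4*sqrt(D't/pi) - 1.5*D't, with D' = D/a^2.  At a transient,
# a fraction of the retained gas is released in one step (micro-cracking).
Dp = 1.0e-10          # illustrative reduced diffusion coefficient D' [1/s]
burst_frac = 0.30     # illustrative fraction of retained gas freed at burst
t = np.linspace(0.0, 2.0e8, 2001)        # ~6 years of irradiation [s]
f_diff = np.clip(4 * np.sqrt(Dp * t / np.pi) - 1.5 * Dp * t, 0.0, 1.0)

f = f_diff.copy()
i_burst = np.searchsorted(t, 1.0e8)      # transient at t = 1e8 s
f[i_burst:] += burst_frac * (1.0 - f[i_burst:])   # step release of retained gas
f = np.clip(f, 0.0, 1.0)
print(f"release before burst {f[i_burst - 1]:.3f}, after {f[i_burst]:.3f}")
```

The diffusion term alone changes smoothly on this time scale; the step term is what a purely diffusion-based model cannot reproduce, which is the motivation for the extension assessed in the paper.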
Smart Growth Self-Assessment for Rural Communities
A tool to help small towns and rural communities assess their existing policies, plans, codes, and zoning regulations to determine how well they work to create healthy, environmentally resilient, and economically robust places.
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Judge, R. W.
1981-01-01
A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing up the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. Four factors were identified which account for 39% of the observed productivity variation.
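The estimation rule described above (code size times a per-activity rate, summed over the relevant activities) can be sketched directly. The activity names and rates below are invented placeholders, not the model's calibrated values.

```python
# Manpower estimate = sum over activities of (size in KLOC) x (rate in
# man-months per KLOC).  Rates and activities are illustrative only.
rates_mm_per_kloc = {
    "program design": 1.2,
    "module development (coding)": 2.0,
    "integration test": 1.5,
}
kloc = 40                      # estimated product size, thousands of lines
effort = {a: kloc * r for a, r in rates_mm_per_kloc.items()}
total_mm = sum(effort.values())
for activity, mm in effort.items():
    print(f"{activity:32s} {mm:6.1f} man-months")
print(f"{'total':32s} {total_mm:6.1f} man-months")
```

The paper's point is that the rates themselves vary with objectively determinable factors (organization, product type, computer type, code type), so a single flat rate table would miss much of the observed productivity variation.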
Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, Eugene H.; Bihl, Donald E.
2008-01-07
The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.
Radiology and Ethics Education.
Camargo, Aline; Liu, Li; Yousem, David M
2017-09-01
The purpose of this study is to assess medical ethics knowledge among trainees and practicing radiologists through an online survey that included questions about the American College of Radiology Code of Ethics and the American Medical Association Code of Medical Ethics. Most survey respondents reported that they had never read the American Medical Association Code of Medical Ethics or the American College of Radiology Code of Ethics (77.2% and 67.4% of respondents, respectively). With regard to ethics education during medical school and residency, 57.3% and 70.0% of respondents, respectively, found such education to be insufficient. Medical ethics training should be highlighted during residency, at specialty society meetings, and in journals and online resources for radiologists.
Development of Safety Assessment Code for Decommissioning of Nuclear Facilities
NASA Astrophysics Data System (ADS)
Shimada, Taro; Ohshima, Soichiro; Sukegawa, Takenori
A safety assessment code, DecDose, for decommissioning of nuclear facilities has been developed, based on the experiences of the decommissioning project of the Japan Power Demonstration Reactor (JPDR) at the Japan Atomic Energy Research Institute (currently JAEA). DecDose evaluates the annual exposure dose of the public and workers according to the progress of decommissioning, and also evaluates the public dose in accidental situations including fire and explosion. For the public, both the internal and the external doses are calculated by considering inhalation, ingestion, direct radiation from radioactive aerosols and radioactive depositions, and skyshine radiation from waste containers. The external dose for workers is calculated from the dose rate from contaminated components and structures to be dismantled. The internal dose for workers is calculated by considering dismantling conditions, e.g. cutting speed, cutting length of the components and exhaust velocity. Estimation models for dose rate and staying time were verified by comparison with the actual external doses of workers acquired during the JPDR decommissioning project. The DecDose code is expected to contribute to the safety assessment for decommissioning of nuclear facilities.
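A decommissioning dose code chains many pathway calculations of the kind sketched below; here only the simplest, inhalation, where dose = air concentration x breathing rate x exposure time x dose coefficient. All numbers are illustrative (the dose coefficient is an ICRP-like value for tritiated water), and DecDose's actual models are far more detailed (resuspension, skyshine, ingestion, accident scenarios).

```python
# Minimal single-pathway sketch: annual inhalation dose to a receptor.
# All inputs are illustrative, not DecDose parameters.
chi = 2.0e2        # air concentration at receptor [Bq/m^3]
br = 1.2           # adult breathing rate [m^3/h]
hours = 2000       # annual exposure time [h]
dcf = 1.8e-11      # inhalation dose coefficient [Sv/Bq], ICRP-like HTO value

dose_sv = chi * br * hours * dcf
print(f"annual inhalation dose = {dose_sv * 1e6:.1f} uSv")
```

A full assessment sums such terms over nuclides and pathways, for both the public and workers, as the decommissioning inventory changes with project progress.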
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Angelow, Aniela; Reber, Katrin Christiane; Schmidt, Carsten Oliver; Baumeister, Sebastian Edgar; Chenot, Jean-Francois
2018-06-04
The study assesses the validity of ICD-10 coded cardiovascular risk factors in claims data using gold-standard measurements from a population-based study for arterial hypertension, diabetes, dyslipidemia, smoking and obesity as a reference. Data of 1941 participants (46 % male, mean age 58±13 years) of the Study of Health in Pomerania (SHIP) were linked to electronic medical records from the regional association of statutory health insurance physicians from 2008 to 2012 used for billing purposes. Clinical data from SHIP was used as a gold standard to assess the agreement with claims data for ICD-10 codes I10.- (arterial hypertension), E10.- to E14.- (diabetes mellitus), E78.- (dyslipidemia), F17.- (smoking) and E65.- to E68.- (obesity). A higher agreement between ICD-coded and clinical diagnosis was found for diabetes (sensitivity (sens) 84%, specificity (spec) 95%, positive predictive value (ppv) 80%) and hypertension (sens 72%, spec 93%, ppv 97%) and a low level of agreement for smoking (sens 18%, spec 99%, ppv 89%), obesity (sens 22%, spec 99%, ppv 99%) and dyslipidemia (sens 40%, spec 60%, ppv 70%). Depending on the investigated cardiovascular risk factor, medication, documented additional cardiovascular co-morbidities, age, sex and clinical severity were associated with the ICD-coded cardiovascular risk factor. The quality of ICD-coding in ambulatory care is highly variable for different cardiovascular risk factors and outcomes. Diagnoses were generally undercoded, but those relevant for billing were coded more frequently. Our results can be used to quantify errors in population-based estimates of prevalence based on claims data for the investigated cardiovascular risk factors. © Georg Thieme Verlag KG Stuttgart · New York.
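The sensitivity, specificity, and positive predictive value figures quoted above come from cross-tabulating ICD-coded status against the gold-standard measurement. The sketch below shows that arithmetic on a 2x2 table; the counts are invented (chosen only so the derived rates land near the diabetes row), since the paper reports rates, not raw counts.

```python
# Agreement measures from a 2x2 table of ICD-coded diagnosis (claims data)
# vs. gold-standard measurement.  Counts are hypothetical.
tp, fp, fn, tn = 160, 40, 30, 570

sensitivity = tp / (tp + fn)   # coded among the truly affected
specificity = tn / (tn + fp)   # uncoded among the truly unaffected
ppv = tp / (tp + fp)           # truly affected among the coded
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f}")
```

Undercoding shows up as a low sensitivity with preserved specificity, which is the pattern the study reports for smoking, obesity, and dyslipidemia.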
Woods, Carl T; Keller, Brad S; McKeown, Ian; Robertson, Sam
2016-09-01
Woods, CT, Keller, BS, McKeown, I, and Robertson, S. A comparison of athletic movement among talent-identified juniors from different football codes in Australia: implications for talent development. J Strength Cond Res 30(9): 2440-2445, 2016-This study aimed to compare the athletic movement skill of talent-identified (TID) junior Australian Rules football (ARF) and soccer players. The athletic movement skill of 17 TID junior ARF players (17.5-18.3 years) was compared against 17 TID junior soccer players (17.9-18.7 years). Players in both groups were members of an elite junior talent development program within their respective football codes. All players performed an athletic movement assessment that included an overhead squat, double lunge, single-leg Romanian deadlift (both movements performed on right and left legs), a push-up, and a chin-up. Each movement was scored across 3 essential assessment criteria using a 3-point scale. The total score for each movement (maximum of 9) and the overall total score (maximum of 63) were used as the criterion variables for analysis. A multivariate analysis of variance tested the main effect of football code (2 levels) on the criterion variables, whereas a 1-way analysis of variance identified where differences occurred. A significant effect was noted, with the TID junior ARF players outscoring their soccer counterparts when performing the overhead squat and push-up. No other criteria differed significantly according to the main effect. Practitioners should be aware that specific sporting requirements may incur slight differences in athletic movement skill among TID juniors from different football codes. However, given the low athletic movement skill noted in both football codes, developmental coaches should address the underlying movement skill capabilities of juniors when prescribing physical training in both codes.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties on space radiation risk estimates in humans are associated with the poor knowledge of the biological effects of low and high LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation-induced failures in advanced microelectronics, such as single-event effects, etc., and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper different multipurpose particle and heavy ion transport codes will be presented, different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that (1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, (2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and (3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite-element-based, fully implicit, fully coupled hydrothermal code, FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a development strategy that aims to provide exceptional ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
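The abstract's dependence of thermal output on mass flow rate reduces, at its simplest, to the heat-balance relation P_th = ṁ·c_p·(T_prod − T_inj). A minimal sketch of that relation follows; the flow rate, temperatures, and conversion efficiency are illustrative placeholders, not values from the FALCON study.

```python
# Thermal power drawn from an EGS well doublet: P_th = m_dot * c_p * dT.
# All numbers are hypothetical, chosen only to illustrate the relation.
def thermal_power_w(m_dot_kg_s, t_prod_c, t_inj_c, cp_j_kg_k=4180.0):
    """Thermal power extracted by the circulating water, in watts."""
    return m_dot_kg_s * cp_j_kg_k * (t_prod_c - t_inj_c)

p_th = thermal_power_w(40.0, 180.0, 70.0)  # 40 kg/s, produced at 180 C, injected at 70 C
p_el = 0.10 * p_th                         # assume ~10% thermal-to-electric conversion
print(p_th, p_el)                          # roughly 18.4 MW thermal, 1.84 MW electric
```

Doubling the flow rate doubles the instantaneous thermal power in this simple balance, but in a real reservoir it also accelerates thermal drawdown, which is why the study treats flow rate as a lifespan trade-off.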
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
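The verification strategy of comparing a finite-difference solution against a known analytical solution can be shown in miniature. The sketch below solves steady 1-D heat conduction between two fixed-temperature walls by Jacobi iteration and converges to the exact linear profile; HYDRA-II itself is 3-D with porosities and radiation, so this illustrates only the core discretization-versus-analytic idea.

```python
# Steady 1-D conduction, T'' = 0, with fixed-temperature boundaries.
# Interior update is the standard second-order stencil T_i = (T_{i-1}+T_{i+1})/2.
n = 11                       # grid points
T = [0.0] * n
T[0], T[-1] = 400.0, 300.0   # wall temperatures, K
for _ in range(5000):        # Jacobi iteration to steady state
    T = [T[0]] + [(T[i - 1] + T[i + 1]) / 2 for i in range(1, n - 1)] + [T[-1]]
# analytical solution is linear: T_i = 400 - 10*i, so T[5] -> 350 K
print(T[5])
```

The verification test is then a direct residual check: the converged numerical profile should match the analytic line to round-off, exactly the kind of known-solution comparison Volume III documents at larger scale.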
Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J
2017-08-01
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.
Obsuth, Ingrid; Hennighausen, Katherine; Brumariu, Laura E.; Lyons-Ruth, Karlen
2013-01-01
Disoriented, punitive, and caregiving/role-reversed attachment behaviors are associated with psychopathology in childhood but have not been assessed in adolescence. One hundred twenty low-income late adolescents (aged 18 – 23) and parents were assessed in a conflict-resolution paradigm. Their interactions were coded with the Goal-Corrected Partnership in Adolescence Coding Scales. Confirmatory factor analysis demonstrated that the three disorganized constructs (punitive, care-giving, and disoriented interaction) were best represented as distinct factors and were separable from a fourth factor for collaboration. The four factors were then assessed in relation to measures of attachment disorganization, partner abuse, and psychopathology. Results indicate that forms of disorganized behavior first described in early childhood can also be reliably assessed in adolescence and are associated with maladaptive outcomes across multiple domains. PMID:23621826
NASA Astrophysics Data System (ADS)
Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.
2003-08-01
The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks.
There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with full 3D benchmark values. Given the fairly limited scope of these tests and the inability of any one code to perform extremely well for all cases, it may be that a paradigm shift is due for the modeling of 1D solar fluxes for cloudy atmospheres.
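The role the Monte Carlo codes play here, supplying a benchmark that simpler codes are judged against, can be illustrated with a deliberately trivial case. For a homogeneous, purely absorbing layer the direct-beam transmittance has the closed form exp(−τ), so an MC photon-transport estimate can be checked against it; the intercomparison's 3D cloud cases are of course far richer, and the optical depth below is an arbitrary example value.

```python
import math
import random

random.seed(1)
tau = 1.5            # optical depth of the layer (illustrative)
n_photons = 200_000

# Each photon travels an exponentially distributed optical path before being
# absorbed; it is transmitted if that path exceeds the layer's optical depth.
transmitted = sum(1 for _ in range(n_photons)
                  if random.expovariate(1.0) > tau)
mc_estimate = transmitted / n_photons
exact = math.exp(-tau)   # Beer-Lambert direct transmittance, ~0.223
print(mc_estimate, exact)
```

With 200,000 photons the statistical error is below 0.1% absolute, which is how MC benchmarks can claim the ±2% agreement with line-by-line results quoted above.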
Dong, Skye; Butow, Phyllis N; Costa, Daniel S J; Dhillon, Haryana M; Shields, Cleveland G
2014-06-01
To adapt an observational tool for assessing patient-centeredness of radiotherapy consultations and to assess whether scores for this tool and an existing tool assessing patient-perceived patient-centeredness predict patient outcomes. The Measure of Patient-Centered Communication (MPCC), an observational coding system that assesses depth of discussion during a consultation, was adapted to the radiotherapy context. Fifty-six radiotherapy patients (from 10 radiation therapists) had their psycho-education sessions recorded and coded using the MPCC. Patients also completed instruments assessing their perception of patient-centeredness, trust in the radiation therapist, satisfaction with the consultation, authentic self-representation (ASR) and state anxiety. The MPCC correlated weakly with patient-perceived patient-centeredness. The Feelings subcomponent of the MPCC predicted one aspect of ASR and trust, and interacted with level of therapist experience to predict trust. Patient-perceived patient-centeredness, which exhibited a ceiling effect, predicted satisfaction. Patient-centered communication is an important predictor of patient outcomes in radiotherapy and moderates some negative effects of radiation therapists' experience on patient trust. As in other studies, there is a weak association between self-reported and observational coding of PCC. Radiation therapists have both technical and supportive roles to play in patient care, and may benefit from training in their supportive role. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Simulated Assessment of Interference Effects in Direct Sequence Spread Spectrum (DSSS) QPSK Receiver
2014-03-27
[Abstract garbled in extraction. Recoverable fragments: an acronym list (BER: bit error rate; BPSK: binary phase shift keying; CDMA: code division multiple access; CSI: comb spectrum interference; CW: continuous wave; DPSK: differential phase shift keying); a statement that the spreading code used in CDMA and GPS systems is a Gold code, generated by a modulo-2 operation between two different preferred m-sequences; and a figure caption comparing the simulated input SNR with the output SNR of a band-pass RF filter.]
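The one clearly recoverable technical statement in this record, that a Gold code is formed by a modulo-2 (XOR) combination of two preferred m-sequences, can be sketched directly. The degree-5 tap sets below do generate valid length-31 m-sequences, but whether they constitute a "preferred pair" in the strict cross-correlation sense, or resemble the codes used by any particular CDMA/GPS system, is an assumption of this sketch.

```python
def mseq(taps, n):
    """Fibonacci LFSR m-sequence of period 2**n - 1 (all-ones seed).
    taps are 1-indexed feedback positions and must correspond to a
    primitive polynomial for the sequence to be maximal-length."""
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]      # modulo-2 feedback sum
        out.append(state[n - 1])    # oldest bit is the output
        state = [fb] + state[:-1]   # shift the register
    return out

m1 = mseq([5, 2], 5)          # one degree-5 m-sequence, length 31
m2 = mseq([5, 4, 3, 2], 5)    # a second, different m-sequence
gold = [a ^ b for a, b in zip(m1, m2)]   # modulo-2 combination -> a Gold-type code
```

Each m-sequence is balanced (16 ones, 15 zeros over a period); shifting one sequence relative to the other before the XOR yields the whole family of codes with bounded cross-correlation that makes them useful for multiple-access systems.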
Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan
This software provides a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
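The record does not specify which machine-learning technique or features the tool uses, so the sketch below is purely illustrative: a nearest-centroid classifier over two hypothetical parcel features (building footprint and detected panel area), standing in for whatever feature set and model the real code employs.

```python
import math

# Hypothetical training parcels: (building footprint m^2, panel area m^2).
# Features, values, and classifier are assumptions, not from the actual tool.
train = {
    "residential": [(150, 20), (200, 25), (120, 15)],
    "commercial":  [(2000, 400), (3500, 600), (1500, 300)],
}

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    return tuple(sum(c) / len(points) for c in zip(*points))

centroids = {label: centroid(pts) for label, pts in train.items()}

def classify(parcel):
    """Assign the label whose class centroid is nearest in feature space."""
    return min(centroids, key=lambda lbl: math.dist(parcel, centroids[lbl]))

print(classify((180, 22)))    # residential
print(classify((2500, 450)))  # commercial
```

A production classifier would add more features (parcel zoning, roof geometry, panel count) and a validated model, but the pipeline shape, features in, label out per parcel, is the same.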
Modeling of the EAST ICRF antenna with ICANT Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin Chengming; Zhao Yanping; Colas, L.
2007-09-28
A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
Modeling of the EAST ICRF antenna with ICANT Code
NASA Astrophysics Data System (ADS)
Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.
2007-09-01
A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
2013-01-01
Background The harmonization of European health systems brings with it a need for tools to allow the standardized collection of information about medical care. A common coding system and standards for the description of services are needed to allow local data to be incorporated into evidence-informed policy, and to permit equity and mobility to be assessed. The aim of this project has been to design such a classification and a related tool for the coding of services for Long Term Care (DESDE-LTC), based on the European Service Mapping Schedule (ESMS). Methods The development of DESDE-LTC followed an iterative process using nominal groups in 6 European countries. 54 researchers and stakeholders in health and social services contributed to this process. In order to classify services, we use the minimal organization unit or “Basic Stable Input of Care” (BSIC), coded by its principal function or “Main Type of Care” (MTC). The evaluation of the tool included an analysis of feasibility, consistency, ontology, inter-rater reliability, Boolean Factor Analysis, and a preliminary impact analysis (screening, scoping and appraisal). Results DESDE-LTC includes an alpha-numerical coding system, a glossary and an assessment instrument for mapping and counting LTC. It shows high feasibility, consistency, inter-rater reliability and face, content and construct validity. DESDE-LTC is ontologically consistent. It is regarded by experts as useful and relevant for evidence-informed decision making. Conclusion DESDE-LTC contributes to establishing a common terminology, taxonomy and coding of LTC services in a European context, and a standard procedure for data collection and international comparison. PMID:23768163
Non-US data compression and coding research. FASAC Technical Assessment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R.M.; Cohn, M.; Craver, L.W.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years, outside or inside the United States, there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
Baiao, R; Baptista, J; Carneiro, A; Pinto, R; Toscano, C; Fearon, P; Soares, I; Mesquita, A R
2018-07-01
The preschool years are a period of great developmental achievements, which impact critically on a child's interactive skills. Having valid and reliable measures to assess interactive behaviour at this stage is therefore crucial. The aim of this study was to describe the adaptation and validation of the child coding of the Coding System for Mother-Child Interactions and discuss its applications and implications in future research and practice. Two hundred twenty Portuguese preschoolers and their mothers were videotaped during a structured task. Child and mother interactive behaviours were coded based on the task. Maternal reports on the child's temperament and emotional and behaviour problems were also collected, along with family psychosocial information. Interrater agreement was confirmed. The use of child Cooperation, Enthusiasm, and Negativity as subscales was supported by their correlations across tasks. Moreover, these subscales were correlated with each other, which supports the use of a global child interactive behaviour score. Convergent validity with a measure of emotional and behavioural problems (Child Behaviour Checklist 1 ½-5) was established, as well as divergent validity with a measure of temperament (Children's Behaviour Questionnaire-Short Form). Regarding associations with family variables, child interactive behaviour was only associated with maternal behaviour. Findings suggest that this coding system is a valid and reliable measure for assessing child interactive behaviour in preschool age children. It therefore represents an important alternative to this area of research and practice, with reduced costs and with more flexible training requirements. Attention should be given in future research to expanding this work to clinical populations and different age groups. © 2018 John Wiley & Sons Ltd.
Annual Stock Assessment - CWT [Coded Wire Tag program] (USFWS), Annual Report 2007.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastor, Stephen M.
2009-07-21
In 1989 the Bonneville Power Administration (BPA) began funding the evaluation of production groups of juvenile anadromous fish not being coded-wire tagged for other programs. These groups were the 'Missing Production Groups'. Production fish released by the U.S. Fish and Wildlife Service (FWS) without representative coded-wire tags during the 1980s are indicated as blank spaces on the survival graphs in this report. This program is now referred to as 'Annual Stock Assessment - CWT'. The objectives of the 'Annual Stock Assessment' program are to: (1) estimate the total survival of each production group, (2) estimate the contribution of each production group to fisheries, and (3) prepare an annual report for USFWS hatcheries in the Columbia River basin. Coded-wire tag recovery information will be used to evaluate the relative success of individual brood stocks. This information can also be used by salmon harvest managers to develop plans to allow the harvest of excess hatchery fish while protecting threatened, endangered, or other stocks of concern. All fish release information, including marked/unmarked ratios, is reported to the Pacific States Marine Fisheries Commission (PSMFC). Fish recovered in the various fisheries or at the hatcheries are sampled to recover coded-wire tags. This recovery information is also reported to PSMFC. This report has been prepared annually starting with the report labeled 'Annual Report 1994'. Although the current report has the title 'Annual Report 2007', it was written in fall of 2008 using data available from RMIS that same year, and submitted as final in January 2009. The main objective of the report is to evaluate survival of groups which have been tagged under this ongoing project.
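The "total survival" bookkeeping behind a coded-wire-tag program is, at its core, an expansion of observed tag recoveries by the sampling fraction in each fishery or hatchery stratum, divided by the number of tagged fish released. A minimal sketch follows; the release and recovery numbers are hypothetical, not values from this report.

```python
# Expanded-recovery survival estimate for one production group.
# All numbers are hypothetical illustrations of the bookkeeping only.
released = 50_000   # coded-wire-tagged juveniles released

# (tags recovered, fraction of the catch/returns actually sampled) per stratum,
# e.g. an ocean fishery, an in-river fishery, and the hatchery rack.
strata = [(120, 0.20), (45, 0.10), (30, 0.50)]

# Each observed recovery represents 1/sampling_fraction fish in that stratum.
expanded = sum(recovered / sampled for recovered, sampled in strata)
survival = expanded / released
print(f"{survival:.4%}")   # about 2.22% for these illustrative numbers
```

The same expanded counts, broken out by fishery, give the "contribution to fisheries" estimate named in objective (2), which is why recovery sampling rates must be reported to PSMFC alongside the raw tag counts.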
Assessing Assessment: In Pursuit of Meaningful Learning
ERIC Educational Resources Information Center
Rootman-le Grange, Ilse; Blackie, Margaret A. L.
2018-01-01
The challenge of supporting the development of meaningful learning is prevalent in chemistry education research. Assessment is one of the core activities in the learning process. The aim of this paper is to illustrate how the semantics dimension of Legitimation Code Theory can be a helpful tool to critique the quality of assessments and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
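The "most fundamental level" spreadsheet comparison mentioned above reduces each pathway to a product of a few factors: for ingestion, roughly concentration × annual intake × dose conversion factor. The sketch below shows that shape for a unit water concentration; the intake and dose conversion factor are illustrative placeholders, not values from GENII, PATHRAE-EPA, or the report's tables.

```python
# Ingestion-pathway dose at the spreadsheet level:
#   dose (Sv/yr) = concentration (Bq/L) * intake (L/yr) * DCF (Sv/Bq).
# The DCF below is a placeholder, not a value for any specific radionuclide.
def ingestion_dose_sv(conc_bq_per_l, intake_l_per_yr, dcf_sv_per_bq):
    return conc_bq_per_l * intake_l_per_yr * dcf_sv_per_bq

# Unit concentration (1 Bq/L), 730 L/yr drinking-water intake (2 L/day),
# and a hypothetical dose conversion factor of 2.8e-10 Sv/Bq.
dose = ingestion_dose_sv(1.0, 730.0, 2.8e-10)
print(dose)   # ~2.0e-7 Sv/yr for these placeholder inputs
```

Comparing codes at this level isolates differences in the chosen intakes and dose factors from differences in transport modeling, which is exactly what unit-concentration comparisons are for.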
Visual communication - Information and fidelity. [of images
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.
1993-01-01
This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.
Application of Microgravity to the Assessment of Existing Structures and Structural Foundations.
1988-04-29
[Abstract garbled in extraction; the recovered text is report-form residue. Recoverable fragments identify the performing organization as Compagnie de Prospection Geophysique Francaise (Rueil-Malmaison) and the subject as the assessment of existing structures and structural foundations.]
Square One TV: Coding of Segments.
ERIC Educational Resources Information Center
McNeal, Betsy; Singer, Karen
This report describes the system used to code each segment of Square One TV for content analysis of all four seasons of production. The analysis is intended to aid in the assessment of how well Square One is meeting its three goals: (1) to promote positive attitudes toward, and enthusiasm for, mathematics; (2) to encourage the use and application…
AN EXACT SOLUTION FOR THE ASSESSMENT OF NONEQUILIBRIUM SORPTION OF RADIONUCLIDES IN THE VADOSE ZONE
In a report on model evaluation, the authors ran the HYDRUS Code, among other transport codes, to evaluate the impacts of nonequilibrium sorption sites on the time-evolution of 99Tc and 90Sr through the vadose zone. Since our evaluation was based on a rather low, annual recharge...
Code Pulse: Software Assurance (SWA) Visual Analytics for Dynamic Analysis of Code
2014-09-01
[Abstract garbled in extraction; the recovered text is front-matter and table-of-contents residue. Recoverable fragments: a competitive market analysis was conducted to assess the tool's transition potential; final transition targets were selected and expressed along with the research on the topic; and the testing methodology is detailed in the Software Test Plan deliverable, CP-STP-0001.]
ERIC Educational Resources Information Center
Subba Rao, G. M.; Vijayapushapm, T.; Venkaiah, K.; Pavarala, V.
2012-01-01
Objective: To assess quantity and quality of nutrition and food safety information in science textbooks prescribed by the Central Board of Secondary Education (CBSE), India for grades I through X. Design: Content analysis. Methods: A coding scheme was developed for quantitative and qualitative analyses. Two investigators independently coded the…
Plaie, Thierry; Thomas, Delphine
2008-06-01
Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks that varied the imagery value of the stimuli to be learned. Visual mental imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater decrease with age of the concreteness effect. The major contribution of our study is the finding that the age-related decline of dual coding of verbal information in memory results primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.
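The cycle counts per growth increment that such a benchmark tracks typically come from integrating a Paris-type growth law, da/dN = C·(G_max)^m, over small delamination-length increments. The sketch below shows that bookkeeping; the constants C and m and the G(a) dependence are hypothetical, not DCB benchmark data from this report.

```python
# Cycle counting for stable delamination growth via a Paris-type law,
#   da/dN = C * (G_max)^m.
# C, m, and g_max(a) are illustrative assumptions, not benchmark values.
C, m = 1e-3, 3.0              # growth-law constants (mm/cycle, G in kJ/m^2)

def g_max(a_mm):
    """Hypothetical maximum energy release rate vs delamination length."""
    return 0.5 + 0.01 * a_mm  # kJ/m^2

a, a_end, da = 30.0, 40.0, 0.5   # grow from 30 mm to 40 mm in 0.5 mm increments
cycles = 0.0
while a < a_end:
    rate = C * g_max(a) ** m     # growth rate (mm/cycle) at the current length
    cycles += da / rate          # cycles consumed by this increment
    a += da
print(round(cycles))             # total cycles for the 10 mm of stable growth
```

A code-versus-benchmark comparison then checks both the onset cycle count and these per-increment cycle counts, which is why the agreement reported above depends on selecting appropriate growth-law input parameters.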
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
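The response-versus-resistance reliability computation described above can be shown in miniature: a component fails when resistance R falls below response (stress) S, and for independent normal R and S the failure probability has a closed form that a brute-force Monte Carlo check should reproduce. The distributions and moments below are illustrative assumptions, not NESSUS resistance or response models.

```python
import math
import random

# Component reliability: fail when resistance R < response S.
# For independent normal R and S, P_f = Phi(-beta) with
#   beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
# All numbers are hypothetical illustrations.
mu_r, sd_r = 100.0, 10.0   # resistance moments
mu_s, sd_s = 70.0, 8.0     # response moments

beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)      # reliability index
p_f_exact = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)

random.seed(7)
n = 200_000
fails = sum(1 for _ in range(n)
            if random.gauss(mu_r, sd_r) < random.gauss(mu_s, sd_s))
p_f_mc = fails / n
print(beta, p_f_exact, p_f_mc)
```

Methods like those in NESSUS exist precisely because real response and resistance models are neither normal nor independent, making the closed form unavailable; this sketch only shows the quantity being computed.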
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, Erik W.
This report documents the fact that the work of creating a strategic plan and beginning customer engagements has been completed. The milestone description is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of (1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and (2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.
Lasaygues, Philippe; Arciniegas, Andres; Espinosa, Luis; Prieto, Flavio; Brancheriau, Loïc
2018-05-26
Ultrasound computed tomography (USCT) using the transmission mode is a way to detect and assess the extent of decay in wood structures. The resolution of the ultrasonic image is closely related to the different anatomical features of wood. The complexity of the wave propagation process generates complex signals consisting of several wave packets with different signatures. Wave paths, depth dependencies, wave velocities or attenuations are often difficult to interpret. For this kind of assessment, the focus is generally on signal pre-processing. Several approaches have been used so far, including filtering, spectrum analysis and a method involving deconvolution using a characteristic transfer function of the experimental device. However, all these approaches may be too sophisticated and/or unstable. The alternative methods proposed in this work are based on coded excitation, which makes it possible to process both local and general information available, such as frequency and time parameters. Coded excitation is based on the filtering of the transmitted signal using a suitable electric input signal. The aim of the present study was to compare two coded-excitation methods, a chirp- and a wavelet-coded excitation method, to determine the time of flight of the ultrasonic wave, and to investigate the feasibility, the robustness and the precision of the measurement of geometrical and acoustical properties in laboratory conditions. To obtain control experimental data, the two methods were compared with the conventional ultrasonic pulse method. Experiments were conducted on a polyurethane resin sample and two samples of different wood species using two 500-kHz transducers. The relative errors in the measurement of thickness compared with the results of caliper measurements ranged from 0.13% minimum for the wavelet-coded excitation method to 2.3% maximum for the chirp-coded excitation method.
For the relative errors in the measurement of ultrasonic wave velocity, the coded excitation methods showed differences ranging from 0.24% minimum for the wavelet-coded excitation method to 2.62% maximum for the chirp-coded excitation method. Methods based on coded excitation algorithms thus enable accurate measurements of thickness and ultrasonic wave velocity in samples of wood species. Copyright © 2018 Elsevier B.V. All rights reserved.
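The time-of-flight measurement underlying both coded-excitation methods rests on matched filtering: cross-correlate the received trace with the transmitted code and take the lag of the correlation peak. A minimal pure-Python sketch under assumed parameters (the sampling rate, chirp band and delay below are illustrative, not the study's values):

```python
import math

def chirp(f0, f1, duration, fs):
    """Linear chirp sweeping f0 -> f1 Hz over `duration` s, sampled at fs."""
    k = (f1 - f0) / duration  # frequency sweep rate, Hz/s
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(int(duration * fs)))]

def peak_lag(received, template):
    """Lag (in samples) maximizing the cross-correlation: the matched-
    filter estimate of the arrival time of the coded excitation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(received) - len(template) + 1):
        seg = received[lag:lag + len(template)]
        val = sum(r * t for r, t in zip(seg, template))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

fs = 50_000                          # 50 kHz sampling rate (assumed)
tx = chirp(1_000, 5_000, 0.005, fs)  # 5 ms chirp, 1-5 kHz (assumed)
true_delay = 120                     # propagation delay, in samples
rx = [0.0] * true_delay + tx + [0.0] * 50   # noiseless received trace
lag = peak_lag(rx, tx)               # matched-filter peak -> 120 samples
tof_s = lag / fs                     # estimated time of flight, seconds
```

The chirp's sharp autocorrelation is what makes the peak unambiguous; in a real measurement the received trace is noisy and attenuated, but the correlation peak remains far more robust than a threshold on a single pulse.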
Lewis, Joy H; Whelihan, Kate; Navarro, Isaac; Boyle, Kimberly R
2016-08-27
The social determinants of health (SDH) are conditions that shape the overall health of an individual on a continuous basis. As momentum for addressing social factors in primary care settings grows, provider ability to identify, treat and assess these factors remains unknown. Community health centers care for more than 20 million of America's highest-risk patients. This study at three centers evaluates provider ability to identify, treat and code for the SDH. Investigators utilized a pre-study survey and a card study design to obtain evidence from the point of care. The survey assessed providers' perceptions of the SDH and their ability to address them. Providers then filled out one anonymous card per patient on four assigned days over a 4-week period, documenting social factors observed during encounters. The cards allowed providers to indicate whether they were able to: provide counseling or other interventions, enter a diagnosis code and enter a billing code for identified factors. The survey results indicate providers were familiar with the SDH and were comfortable identifying social factors at the point of care. A total of 747 cards were completed. 1584 factors were identified, and 31% were reported as having a service provided. However, only 1.2% of factors were associated with a billing code and 6.8% received a diagnosis code. An obvious discrepancy exists between the number of identifiable social factors, provider ability to address them and documentation with billing and diagnosis codes. This disparity could be related to provider inability to code for social factors and bill for related time and services. Health care organizations should seek to implement procedures to document and monitor social factors and actions taken to address them. Results of this study suggest simple methods of identification may be sufficient. The addition of searchable codes and reimbursements may improve the way social factors are addressed for individuals and populations.
Development of the 3DHZETRN code for space radiation protection
NASA Astrophysics Data System (ADS)
Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert
Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to within twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism, capable of evaluation in general geometry, is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes, such as spheres within spherical shells and boxes, will help quantify uncertainty. Connection of the 3DHZETRN to general geometry will be discussed.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximum information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimal information scenario.
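The "random chance" scenario described above is straightforward to simulate: draw sources in random order and count how many are needed before every code in the population has appeared at least once. A minimal sketch with a hypothetical population (the code-holding probability and population size are illustrative assumptions, not the paper's settings):

```python
import random

def sample_until_saturation(sources, all_codes, rng):
    """Sample sources in random order (the "random chance" scenario)
    until every code in the population has been observed at least once;
    return the sample size at which theoretical saturation is reached."""
    remaining = set(all_codes)
    order = list(range(len(sources)))
    rng.shuffle(order)
    for n, idx in enumerate(order, start=1):
        remaining -= sources[idx]
        if not remaining:
            return n
    return None  # saturation never reached in this population

rng = random.Random(42)
codes = range(10)
# Hypothetical population: each source holds each code with p = 0.3.
sources = [{c for c in codes if rng.random() < 0.3} for _ in range(200)]
n_saturation = sample_until_saturation(sources, codes, rng)
```

Repeating the draw many times and taking a high percentile of `n_saturation` gives the kind of minimum-sample-size estimate the simulations in the paper report.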
Improving the sensitivity and specificity of the abbreviated injury scale coding system.
Kramer, C F; Barancik, J I; Thode, H C
1990-01-01
The Abbreviated Injury Scale with Epidemiologic Modifications (AIS 85-EM) was developed to make it possible to code information about anatomic injury types and locations that, although generally available from medical records, is not codable under the standard Abbreviated Injury Scale, published by the American Association for Automotive Medicine in 1985 (AIS 85). In a population-based sample of 3,223 motor vehicle trauma cases, 68 percent of the patients had one or more injuries that were coded to the AIS 85 body region nonspecific category external. When the same patients' injuries were coded using the AIS 85-EM coding procedure, only 15 percent of the patients had injuries that could not be coded to a specific body region. With AIS 85-EM, the proportion of codable head injury cases increased from 16 percent to 37 percent, thereby improving the potential for identifying cases with head and threshold brain injury. The data suggest that body region coding of all injuries is necessary to draw valid and reliable conclusions about changes in injury patterns and their sequelae. The increased specificity of body region coding improves assessments of the efficacy of injury intervention strategies and countermeasure programs using epidemiologic methodology. PMID:2116633
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for Hanford Site operations. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
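A typical building block of such QA codes is a range check that flags sensor values outside physically plausible limits. A minimal sketch; the limits and field names below are illustrative assumptions, not the HMS codes' actual values:

```python
# Plausible physical limits; the actual HMS QA limits are not given in
# the abstract, so these values are illustrative assumptions.
LIMITS = {"temp_c": (-40.0, 50.0), "wind_mps": (0.0, 60.0)}

def qa_flags(record):
    """Names of fields whose values fall outside the allowed range
    (fields absent from the record are not flagged)."""
    return [name for name, (lo, hi) in LIMITS.items()
            if name in record and not lo <= record[name] <= hi]

bad = qa_flags({"temp_c": 72.0, "wind_mps": 12.3})  # temperature out of range
```

In practice such checks are combined with rate-of-change and inter-sensor consistency tests, and the flagged records are written to the detailed output files the report describes.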
Three decades of the WHO code and marketing of infant formulas.
Forsyth, Stewart
2012-05-01
The International Code of Marketing of Breast Milk Substitutes states that governments, non-governmental organizations, experts, consumers and industry need to cooperate in activities aimed at improving infant nutrition. However, the evidence from the last three decades is that of a series of disputes, legal proceedings and boycotts. The purpose of this review is to assess the overall progress in the implementation of the Code and to examine the problematic areas of monitoring, compliance and governance. There are continuing issues of implementation, monitoring and compliance which predominantly reflect weak governance. Many Member States have yet to fully implement the Code recommendations and most States do not have adequate monitoring and reporting mechanisms. Application of the Code in developed countries may be undermined by a lack of consensus on the WHO recommendation of 6 months exclusive breastfeeding. There is evidence of continuing conflict and acrimony, especially between non-government organizations and industry. Measures need to be taken to encourage the Member States to implement the Code and to establish the governance systems that will not only ensure effective implementation and monitoring of the Code, but also deliver the Code within a spirit of participation, collaboration and trust.
Psychometric challenges and proposed solutions when scoring facial emotion expression codes.
Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver
2014-12-01
Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are recommended methods of data treatment. When scoring facial emotion expression ability, maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
Emergency readmissions to paediatric surgery and urology: The impact of inappropriate coding
Peeraully, R; Henderson, K; Davies, B
2016-01-01
Introduction: In England, emergency readmissions within 30 days of hospital discharge after an elective admission are not reimbursed if they do not meet Payment by Results (PbR) exclusion criteria. However, coding errors could inappropriately penalise hospitals. We aimed to assess the accuracy of coding for emergency readmissions. Methods: Emergency readmissions attributed to paediatric surgery and urology between September 2012 and August 2014 to our tertiary referral centre were retrospectively reviewed. PbR coding data were obtained from the hospital’s Family Health Directorate. Clinical details were obtained from contemporaneous records. All readmissions were categorised as appropriately coded (postoperative or nonoperative) or inappropriately coded (planned surgical readmission, unrelated surgical admission, unrelated medical admission or coding error). Results: Over the 24-month period, 241 patients were coded as 30-day readmissions, with 143 (59%) meeting the PbR exclusion criteria. Of the remaining 98 (41%) patients, 24 (25%) were inappropriately coded as emergency readmissions. These readmissions resulted in 352 extra bed days, of which 117 (33%) were attributable to inappropriately coded cases. Conclusions: One-quarter of non-excluded emergency readmissions were inappropriately coded, accounting for one-third of additional bed days. As a stay on a paediatric ward costs up to £500 a day, the potential cost to our institution due to inappropriate readmission coding was over £50,000. Diagnoses and the reason for admission for each care episode should be accurately documented and coded, and readmission data should be reviewed at a senior clinician level. PMID:26924486
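The cost estimate in the conclusions follows directly from the reported figures, as this short check shows:

```python
# Figures from the abstract: 117 of 352 extra bed days were attributable
# to inappropriately coded readmissions, at up to 500 GBP per day.
inappropriate_bed_days = 117
cost_per_day_gbp = 500
potential_cost_gbp = inappropriate_bed_days * cost_per_day_gbp  # 58,500 GBP
share_of_bed_days = inappropriate_bed_days / 352                # ~one-third
```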
Using Quick Response Codes in the Classroom: Quality Outcomes.
Zurmehly, Joyce; Adams, Kellie
2017-10-01
With smart device technology emerging, educators are challenged with redesigning teaching strategies using technology to allow students to participate dynamically and provide immediate answers. To facilitate integration of technology and to actively engage students, quick response codes were included in a medical surgical lecture. Quick response codes are two-dimensional square patterns that enable the coding or storage of more than 7000 characters that can be accessed via a quick response code scanning application. The aim of this quasi-experimental study was to explore quick response code use in a lecture and measure students' satisfaction (met expectations, increased interest, helped understand, and provided practice and prompt feedback) and engagement (liked most, liked least, wanted changed, and kept involved), assessed using an investigator-developed instrument. Although there was no statistically significant correlation of quick response use to examination scores, satisfaction scores were high, and there was a small yet positive association between how students perceived their learning with quick response codes and overall examination scores. Furthermore, on open-ended survey questions, students responded that they were satisfied with the use of quick response codes, appreciated the immediate feedback, and planned to use them in the clinical setting. Quick response codes offer a way to integrate technology into the classroom to provide students with instant positive feedback.
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
Nouraei, S A R; Hudovsky, A; Virk, J S; Saleh, H A
2017-04-01
This study aimed to develop a multidisciplinary coded dataset standard for nasal surgery and to assess its impact on data accuracy. An audit of 528 patients undergoing septal and/or inferior turbinate surgery, rhinoplasty and/or septorhinoplasty, and nasal fracture surgery was undertaken. A total of 200 septoplasties, 109 septorhinoplasties, 57 complex septorhinoplasties and 116 nasal fractures were analysed. There were 76 (14.4 per cent) changes to the primary diagnosis. Septorhinoplasties were the most commonly amended procedures. The overall audit-related income change for nasal surgery was £8.78 per patient. Use of a multidisciplinary coded dataset standard revealed that nasal diagnoses were under-coded; a significant proportion of patients received more precise diagnoses following the audit. There was also significant under-coding of both morbidities and revision surgery. The multidisciplinary coded dataset standard approach can improve the accuracy of both data capture and information flow, and, thus, ultimately create a more reliable dataset for use in outcomes assessment and health planning.
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling
Lareo, Angel; Forlim, Caroline G.; Pinto, Reynaldo D.; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals by representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real-time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox. PMID:27766078
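The core of the protocol, binarizing a signal into time bins and triggering stimulation when a predefined code appears, can be sketched in a few lines. The bin width, event times and target code below are illustrative assumptions, not the toolbox's actual parameters:

```python
def binarize(event_times_ms, window_ms, duration_ms):
    """Turn event times into a binary sequence: 1 if at least one
    event falls inside a time bin, 0 otherwise."""
    n_bins = duration_ms // window_ms
    seq = [0] * n_bins
    for t in event_times_ms:
        i = t // window_ms
        if 0 <= i < n_bins:
            seq[i] = 1
    return seq

def code_positions(seq, code):
    """Indices where the target code occurs, i.e. the instants at which
    a closed-loop system would deliver its predefined stimulus."""
    k = len(code)
    return [i for i in range(len(seq) - k + 1) if seq[i:i + k] == code]

# Hypothetical event times (ms) standing in for electric organ discharges.
events = [12, 30, 55, 61, 90]
seq = binarize(events, window_ms=10, duration_ms=100)
hits = code_positions(seq, [1, 0, 1])   # stimulate at these bin indices
```

In the real-time toolbox this matching runs online against the incoming signal; here it is shown offline on a recorded sequence for clarity.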
The evolution of the genetic code: Impasses and challenges.
Kun, Ádám; Radványi, Ádám
2018-02-01
The origin of the genetic code and translation is a "notoriously difficult problem". In this survey we present a list of questions that a full theory of the genetic code needs to answer. We assess the leading hypotheses according to these criteria. The stereochemical, the coding coenzyme handle, the coevolution, the four-column theory, the error minimization and the frozen accident hypotheses are discussed. The integration of these hypotheses can account for the origin of the genetic code. But experiments are badly needed. Thus we suggest a host of experiments that could (in)validate some of the models. We focus especially on the coding coenzyme handle hypothesis (CCH). The CCH suggests that amino acids attached to RNA handles enhanced catalytic activities of ribozymes. Alternatively, amino acids without handles or with a handle consisting of a single adenine, like in contemporary coenzymes could have been employed. All three scenarios can be tested in in vitro compartmentalized systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. It includes both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
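The concurrent-validity claim rests on the Pearson correlation between the two pain measures (r = .72 here). As a reminder of what that statistic computes, a minimal pure-Python implementation on made-up scores (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired ratings (e.g. CFCS vs. Numerical Rating Scale).
r = pearson_r([1, 2, 3, 4, 5], [1.2, 1.9, 3.4, 3.8, 5.1])
```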
Challenges in using medicaid claims to ascertain child maltreatment.
Raghavan, Ramesh; Brown, Derek S; Allaire, Benjamin T; Garfield, Lauren D; Ross, Raven E; Hedeker, Donald
2015-05-01
Medicaid data contain International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes indicating maltreatment, yet there is little information on how valid these codes are for the purposes of identifying maltreatment from health, as opposed to child welfare, data. This study assessed the validity of Medicaid codes in identifying maltreatment. Participants (n = 2,136) in the first National Survey of Child and Adolescent Well-Being were linked to their Medicaid claims obtained from 36 states. Caseworker determinations of maltreatment were compared with eight sets of ICD-9-CM codes. Of the 1,921 children identified by caseworkers as being maltreated, 15.2% had any relevant ICD-9-CM code in any of their Medicaid files across 4 years of observation. Maltreated boys and those of African American race had lower odds of displaying a maltreatment code. Using only Medicaid claims to identify maltreated children creates validity problems. Medicaid data linkage with other types of administrative data is required to better identify maltreated children. © The Author(s) 2014.
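Operationally, screening claims against a code set reduces to prefix matching over each claim's diagnosis codes. A simplified sketch; the prefixes below are an illustrative example, not the eight code sets the study actually compared:

```python
# Illustrative only: a simplified maltreatment-related prefix set,
# not the study's eight ICD-9-CM code sets.
MALTREATMENT_PREFIXES = ("995.5", "E967", "V61.21")

def claim_flags_maltreatment(diagnosis_codes):
    """True if any diagnosis code on the claim matches a prefix."""
    return any(code.startswith(MALTREATMENT_PREFIXES)
               for code in diagnosis_codes)

claims = [["995.54", "382.9"], ["465.9"], ["V61.21"]]
flagged = [claim_flags_maltreatment(c) for c in claims]
```

The study's finding, that only 15.2% of caseworker-confirmed cases carry any such code, is a property of coding practice, not of the matching logic: the screen can only find what clinicians recorded.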
78 FR 29628 - Community Health Needs Assessments for Charitable Hospitals; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-21
...-BL30 Community Health Needs Assessments for Charitable Hospitals; Correction AGENCY: Internal Revenue... the community health needs assessment requirements, and related excise tax and reporting obligations... 501(r), 4959, 6012, and 6033 of the Internal Revenue Code. Need for Correction As published April 5...
[Learning virtual routes: what does verbal coding do in working memory?].
Gyselinck, Valérie; Grison, Élise; Gras, Doriane
2015-03-01
Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks differentially impaired landmark knowledge and route knowledge. Results can be interpreted as showing that the coding of route knowledge could be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Dentists' perspectives on caries-related treatment decisions.
Gomez, J; Ellwood, R P; Martignon, S; Pretty, I A
2014-06-01
To assess the impact of patient risk status on Colombian dentists' caries-related treatment decisions for early to intermediate caries lesions (ICDAS code 2 to 4). A web-based questionnaire assessed dentists' views on the management of early/intermediate lesions. The questionnaire included questions on demographic characteristics, five clinical scenarios with randomised levels of caries risk, and two questions on different clinical and radiographic sets of images with different thresholds of caries. Questionnaires were completed by 439 dentists. For the two scenarios describing occlusal lesions ICDAS code 2, dentists chose to provide a preventive option in 63% and 60% of the cases. For the approximal lesion ICDAS code 2, 81% of the dentists chose to restore. The main findings of the binary logistic regression analysis for the clinical scenarios suggest that for the ICDAS code 2 occlusal lesions, the odds of a high caries risk patient receiving a restoration are higher than for a low caries risk patient. For the questions describing different clinical thresholds of caries, most dentists would restore at ICDAS code 2 (55%), and for the question showing different radiographic threshold images, 65% of dentists would intervene operatively at the inner half of enamel. No significant differences with respect to risk were found for these questions with the logistic regression. The results of this study indicate that Colombian dentists have not yet fully adopted non-invasive treatment for early caries lesions.
TRIAD II: do living wills have an impact on pre-hospital lifesaving care?
Mirarchi, Ferdinando L; Kalantzis, Stella; Hunter, Daniel; McCracken, Emily; Kisiel, Theresa
2009-02-01
Living wills accompany patients who present for emergent care. To the best of our knowledge, no studies assess pre-hospital provider interpretations of these instructions. Determine how a living will is interpreted and assess how interpretation impacts lifesaving care. Three-part survey administered at a regional emergency medical system educational symposium to 150 emergency medical technicians (EMTs) and paramedics. Part I assessed understanding of the living will and do-not-resuscitate (DNR) orders. Part II assessed the living will's impact in clinical situations of patients requiring lifesaving interventions. Part III was similar to part II except a code status designation (full code) was incorporated into the living will. There were 127 surveys completed, yielding an 87% response rate. The majority were male (55%) and EMTs (74%). The average age was 44 years and the average duration of employment was 15 years. Ninety percent (95% confidence interval [CI] 84.6-95.4%) of respondents determined that, after review of the living will, the patient's code status was DNR, and 92% (95% CI 86.5-96.6%) defined their understanding of DNR as comfort care/end-of-life care. When the living will was applied to clinical situations, it resulted in a higher proportion of patients being classified as DNR as opposed to full code (Case A: 78% [95% CI 71.2-85.6%] vs. 22% [95% CI 14.4-28.8%]; Case B: 67% [95% CI 58.4-74.9%] vs. 33% [95% CI 25.1-41.6%]; Case C: 63% [95% CI 55.1-71.9%] vs. 37% [95% CI 28.1-44.9%]). With the scenarios presented, this DNR classification resulted in a lack of or a delay in lifesaving interventions. Incorporating a code status into the living will produced statistically significant increases in the provision of lifesaving care. In Case A, intubation increased from 15% to 56% (p < 0.0001); Case B, defibrillation increased from 40% to 59% (p < 0.0001); and Case C, defibrillation increased from 36% to 65% (p < 0.0001).
Significant confusion and concern for patient safety exists in the pre-hospital setting due to the understanding and implementation of living wills and DNR orders. This confusion can be corrected by implementing clearly defined code status into the living will.
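The bracketed intervals above are confidence intervals for sample proportions. A minimal normal-approximation sketch (the interval method the study actually used is not stated, so these values approximate rather than reproduce the reported 84.6-95.4%):

```python
import math

def prop_ci_95(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 90% of the 127 respondents classified the patient as DNR.
lo, hi = prop_ci_95(0.90, 127)   # roughly 84.8% to 95.2%
```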
Creep and Creep-Fatigue Crack Growth at Structural Discontinuities and Welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. F. W. Brust; Dr. G. M. Wilkowski; Dr. P. Krishnaswamy
2010-01-27
The subsection ASME NH high temperature design procedure does not admit crack-like defects into the structural components. The US NRC identified the lack of treatment of crack growth within NH as a limitation of the code and thus this effort was undertaken. This effort is broken into two parts. Part 1, summarized here, involved examining all high temperature creep-fatigue crack growth codes being used today and from these, the task objective was to choose a methodology that is appropriate for possible implementation within NH. The second part of this task, which has just started, is to develop design rules for possible implementation within NH. This second part is a challenge since all codes require step-by-step analysis procedures to be undertaken in order to assess the crack growth and life of the component. Simple rules for design do not exist in any code at present. The codes examined in this effort included R5, RCC-MR (A16), BS 7910, API 579, and ATK (and some lesser known codes). There are several reasons that the capability for assessing cracks in high temperature nuclear components is desirable. These include: (1) Some components that are part of GEN IV reactors may have geometries that have sharp corners - which are essentially cracks. Design of these components within the traditional ASME NH procedure is quite challenging. It is natural to ensure adequate life design by modeling these features as cracks within a creep-fatigue crack growth procedure. (2) Workmanship flaws in welds sometimes occur and are accepted in some ASME code sections. It can be convenient to consider these as flaws when making a design life assessment. (3) Non-destructive Evaluation (NDE) and inspection methods after fabrication are limited in the size of the crack or flaw that can be detected. It is often convenient to perform a life assessment using a flaw of a size that represents the maximum size that can elude detection.
(4) Flaws observed using in-service detection methods often need to be addressed as plants age, and shutdown inspection intervals can only be designed using creep and creep-fatigue crack growth techniques. (5) Crack growth procedures can aid in examining the seriousness of creep damage in structural components; how cracks grow can be used to assess margins on components and support further safe operation. After examining the pros and cons of all these methods, the R5 code was chosen as the most up-to-date and validated high temperature creep and creep-fatigue code in use worldwide. R5 is considered the leader because the code: (1) has well established and validated rules, (2) has a team of experts continually improving and updating it, (3) has software that can be used by designers, (4) has been extensively validated against data from British Energy resources as well as Imperial College's database, and (5) was specifically developed for use in nuclear plants. R5 was developed for the gas cooled nuclear reactors that operate in the UK, and much of the experience is based on the materials and temperatures experienced in these reactors. If the next generation advanced reactors to be built in the US use these same materials within the same temperature ranges, then R5 may be appropriate for direct implementation within ASME code NH or Section XI. However, until more verification and validation of these creep-fatigue crack growth rules for the specific materials and temperatures to be used in the GEN IV reactors is complete, ASME should consider delaying this implementation. With this in mind, it is this author's opinion that R5 methods are the best available for code use today.
The focus of this work was to examine the literature for creep and creep-fatigue crack growth procedures that are well established in codes in other countries and to choose a procedure for possible implementation into ASME NH. It is important to recognize that all creep and creep-fatigue crack growth procedures that are part of high temperature design codes are related and very similar. This effort made no attempt to develop a new creep-fatigue crack growth predictive methodology; examination of current procedures was the only goal. The uncertainties in the R5 crack growth methods and recommendations for further work are also summarized here.
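The step-by-step nature of the assessment procedures these codes require can be illustrated with a toy calculation. The sketch below integrates a creep crack growth law of the general form da/dt = A(C*)^q, the kind of relation used in codes such as R5; the constants A and q, the fixed C* value, and the initial crack size are invented placeholders, not values taken from R5 or ASME NH.

```python
# Hypothetical illustration of step-by-step creep crack growth assessment:
# integrate da/dt = A * (C*)**q over time with explicit Euler steps.
# A, q, cstar, and a0 below are placeholders, not code-specified values.

def grow_crack(a0, hours, A=1e-4, q=0.8, cstar=5.0, dt=1.0):
    """Euler integration of the creep crack growth law da/dt = A*(C*)^q.

    a0     : initial crack depth (mm)
    hours  : total time at temperature (h)
    A, q   : material constants (placeholders)
    cstar  : steady-state C* parameter, held constant here for simplicity
    dt     : time step (h)
    """
    a = a0
    t = 0.0
    while t < hours:
        a += A * cstar**q * dt  # crack extension over this time step
        t += dt
    return a

a_final = grow_crack(a0=1.0, hours=1000)
```

In a real assessment C* depends on the current crack size, load, and material creep law, so it would be re-evaluated each step; holding it constant keeps the sketch transparent.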
Benchmarking of neutron production of heavy-ion transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, I.; Ronningen, R. M.; Heilbronn, L.
Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)
Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew
2014-11-28
Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. 
From 3688 papers identified by the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme, which is the methodology recommended by the NHS Classification Service, and 4) conducting manual clinical review of diagnostic and procedure codes. The four distinct methods for identifying complications from codified data offer great potential for generating new evidence on the quality and safety of new procedures using routine data. However, the most robust method, the methodology recommended by the NHS Classification Service, was the least frequently used, highlighting that much valuable observational data is being ignored.
Project FEVER - Fostering Electric Vehicle Expansion in the Rockies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swalnick, Natalia
2013-06-30
Project FEVER (Fostering Electric Vehicle Expansion in the Rockies) is part of the Clean Cities Community Readiness and Planning for Plug-in Electric Vehicles and Charging Infrastructure Funding Opportunity funded by the U.S. Department of Energy (DOE) for the state of Colorado. Tasks undertaken in this project include: Electric Vehicle Grid Impact Assessment; Assessment of Electrical Permitting and Inspection for EV/EVSE (electric vehicle/electric vehicle supply equipment); Assessment of Local Ordinances Pertaining to Installation of Publicly Available EVSE; Assessment of Building Codes for EVSE; EV Demand and Energy/Air Quality Impacts Assessment; State and Local Policy Assessment; EV Grid Impact Minimization Efforts; Unification and Streamlining of Electrical Permitting and Inspection for EV/EVSE; Development of BMP for Local EVSE Ordinances; Development of BMP for Building Codes Pertaining to EVSE; Development of Colorado-Specific Assessment for EV/EVSE Energy/Air Quality Impacts; Development of State and Local Policy Best Practices; Create Final EV/EVSE Readiness Plan; Develop Project Marketing and Communications Elements; Plan and Schedule In-person Education and Outreach Opportunities.
SWIFT Code Assessment for Two Similar Transonic Compressors
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
2009-01-01
One goal of the NASA Fundamental Aeronautics Program is the assessment of computational fluid dynamics (CFD) codes used for the design and analysis of many aerospace systems. This paper describes the assessment of the SWIFT turbomachinery analysis code for two similar transonic compressors, NASA rotor 37 and stage 35. The two rotors have identical blade profiles on the front, transonic half of the blade, but rotor 37 has more camber aft of the shock. Thus the two rotors have the same shock structure and choking flow, but rotor 37 produces a higher pressure ratio. The two compressors and the experimental data are described briefly here. Rotor 37 was also used for test cases organized by ASME, IGTI, and AGARD in 1994-1998. Most of the participating codes over-predicted pressure and temperature ratios and failed to predict certain features of the downstream flowfield. Since then the AUSM+ upwind scheme and the k-ω turbulence model have been added to SWIFT. In this work the new capabilities were assessed for the two compressors. Comparisons were made with overall performance maps and spanwise profiles of several aerodynamic parameters. The results for rotor 37 were in much better agreement with the experimental data than the original blind test case results, although some discrepancies remained. The results for stage 35 were in very good agreement with the data. The results for rotor 37 were very sensitive to turbulence model parameters, but the results for stage 35 were not. Comparison of the rotor solutions showed that the main difference between the two rotors was not blade camber, as expected, but shock/boundary-layer interaction on the casing.
NASA Astrophysics Data System (ADS)
Zeitler, T.; Kirchner, T. B.; Hammond, G. E.; Park, H.
2014-12-01
The Waste Isolation Pilot Plant (WIPP) has been developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. Containment of TRU waste at the WIPP is regulated by the U.S. Environmental Protection Agency (EPA). The DOE demonstrates compliance with the containment requirements by means of performance assessment (PA) calculations. WIPP PA calculations estimate the probability and consequence of potential radionuclide releases from the repository to the accessible environment for a regulatory period of 10,000 years after facility closure. The long-term performance of the repository is assessed using a suite of sophisticated computational codes. In a broad modernization effort, the DOE has overseen the transfer of these codes to modern hardware and software platforms. Additionally, there is a current effort to establish new performance assessment capabilities through the further development of the PFLOTRAN software, a state-of-the-art massively parallel subsurface flow and reactive transport code. Improvements to the current computational environment will result in greater detail in the final models due to the parallelization afforded by the modern code. Parallelization will allow for relatively faster calculations, as well as a move from a two-dimensional calculation grid to a three-dimensional grid. The result of the modernization effort will be a state-of-the-art subsurface flow and transport capability that will serve WIPP PA into the future. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.
2012-01-01
Background No validated model exists to explain the learning effects of assessment, a problem when designing and researching assessment for learning. We recently developed a model explaining the pre-assessment learning effects of summative assessment in a theory teaching context. The challenge now is to validate this model. The purpose of this study was to explore whether the model was operational in a clinical context as a first step in this process. Methods Given the complexity of the model, we adopted a qualitative approach. Data from in-depth interviews with eighteen medical students were subject to content analysis. We utilised a code book developed previously using grounded theory. During analysis, we remained alert to data that might not conform to the coding framework and open to the possibility of deploying inductive coding. Ethical clearance and informed consent were obtained. Results The three components of the model i.e., assessment factors, mechanism factors and learning effects were all evident in the clinical context. Associations between these components could all be explained by the model. Interaction with preceptors was identified as a new subcomponent of assessment factors. The model could explain the interrelationships of the three facets of this subcomponent i.e., regular accountability, personal consequences and emotional valence of the learning environment, with previously described components of the model. Conclusions The model could be utilized to analyse and explain observations in an assessment context different to that from which it was derived. In the clinical setting, the (negative) influence of preceptors on student learning was particularly prominent. In this setting, learning effects resulted not only from the high-stakes nature of summative assessment but also from personal stakes, e.g. for esteem and agency. 
The results suggest that, to influence student learning, assessment should carry consequences that are immediate, concrete and substantial. The model could have utility as a planning or diagnostic tool in practice and research settings. PMID:22420839
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess how these fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which are mostly distributed spatially via the given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated in twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which has proved oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff.
Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is therefore recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
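For readers unfamiliar with the method, a Sobol' first-order index measures the fraction of output variance attributable to one parameter alone. The sketch below estimates first-order indices with a pick-freeze Monte Carlo estimator on an invented two-parameter additive model, not Noah-MP itself; the model is chosen so the analytic answer (S1 = 0.8, S2 = 0.2) is easy to verify.

```python
import random

# Pick-freeze estimator of first-order Sobol' indices on a toy model.
# The model and sample size are illustrative only.

random.seed(42)

def model(x1, x2):
    # Additive toy model with uniform inputs: S1 = 0.8, S2 = 0.2 analytically
    return 4.0 * x1 + 2.0 * x2

n = 100_000
A = [(random.random(), random.random()) for _ in range(n)]
B = [(random.random(), random.random()) for _ in range(n)]
yA = [model(*a) for a in A]
yB = [model(*b) for b in B]
mean_yA = sum(yA) / n
var_yA = sum((y - mean_yA) ** 2 for y in yA) / n

S = []
for i in range(2):
    # "Pick-freeze": matrix A with column i replaced by column i of B
    yABi = [model(b[0], a[1]) if i == 0 else model(a[0], b[1])
            for a, b in zip(A, B)]
    # First-order index estimate: Var(E[Y|X_i]) / Var(Y)
    S.append(sum(yb * (yab - ya)
                 for yb, yab, ya in zip(yB, yABi, yA)) / n / var_yA)
```

The same estimator scales to the 100+ active Noah-MP parameters, at the cost of one extra model-evaluation matrix per parameter.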
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.
The US Department of Energy’s most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail; percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code, and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study, using a single building type and climate zone, to answer those questions.
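One way to frame the contrast is to weight each checklist item by the energy cost at stake rather than grading it pass/fail. The sketch below compares the two metrics on an invented four-item checklist; the requirement names and dollar figures are hypothetical, not from the pilot study.

```python
# Binary percent compliance vs. a cost-weighted compliance score.
# Requirement names and annual-cost figures are invented placeholders.

checklist = [
    # (requirement, passed?, annual energy cost at stake if failed, $)
    ("lighting power density",  True,  1200.0),
    ("economizer control",      False, 3500.0),
    ("wall insulation R-value", True,   800.0),
    ("duct sealing",            False,  150.0),
]

# Old metric: every requirement counts equally
percent_compliance = 100.0 * sum(p for _, p, _ in checklist) / len(checklist)

# Cost-weighted metric: requirements count in proportion to savings at stake
total_cost = sum(c for _, _, c in checklist)
cost_achieved = sum(c for _, p, c in checklist if p)
weighted_compliance = 100.0 * cost_achieved / total_cost
```

Here the building scores 50% on the binary metric but only about 35% cost-weighted, because the single failed economizer requirement dominates the energy cost at stake.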
The feasibility of adapting a population-based asthma-specific job exposure matrix (JEM) to NHANES.
McHugh, Michelle K; Symanski, Elaine; Pompeii, Lisa A; Delclos, George L
2010-12-01
To determine the feasibility of applying a job exposure matrix (JEM) for classifying exposures to 18 asthmagens in the National Health and Nutrition Examination Survey (NHANES), 1999-2004, we cross-referenced 490 National Center for Health Statistics job codes used to develop the 40 NHANES occupation groups with 506 JEM job titles and assessed homogeneity in asthmagen exposure across job codes within each occupation group. In total, 399 job codes corresponded to one JEM job title, 32 to more than one job title, and 59 were not in the JEM. Three occupation groups had the same asthmagen exposure across job codes, 11 had no asthmagen exposure, and 26 groups had heterogeneous exposures across job codes. The NHANES classification of occupations limits the use of the JEM to evaluate the association between workplace exposures and asthma; more refined occupational data are needed to enhance work-related injury/illness surveillance efforts.
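The cross-referencing logic can be sketched as a pair of lookup tables plus a homogeneity check per occupation group. All job codes, titles, and exposures below are invented for illustration; the real study used 490 NCHS codes, 506 JEM titles, and 18 asthmagens.

```python
# Toy sketch of the JEM cross-referencing: map job codes to JEM titles,
# then test whether exposure is homogeneous within each occupation group.

jem_exposure = {            # JEM job title -> set of asthmagen exposures
    "baker": {"flour dust"},
    "spray painter": {"isocyanates"},
    "office clerk": set(),
    "accountant": set(),
}

job_code_to_title = {       # NCHS job code -> JEM job title
    "123": "baker",
    "124": "spray painter",
    "200": "office clerk",
    "201": "accountant",
}

occupation_groups = {       # NHANES occupation group -> NCHS job codes
    "food preparation": ["123"],
    "mechanics and repairers": ["123", "124"],   # mixed exposures
    "administrative support": ["200", "201"],    # uniformly unexposed
}

def classify(group_codes):
    exposures = [frozenset(jem_exposure[job_code_to_title[c]])
                 for c in group_codes]
    if all(e == exposures[0] for e in exposures):
        return "no exposure" if not exposures[0] else "same exposure"
    return "heterogeneous"

classification = {g: classify(codes)
                  for g, codes in occupation_groups.items()}
```

Groups classified "heterogeneous" are the ones where the coarse NHANES occupation grouping prevents a clean exposure assignment, which is the study's central limitation.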
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Robert P.; Miller, Paul; Howley, Kirsten
The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.
Assessment Intelligence in Small Group Learning
ERIC Educational Resources Information Center
Xing, Wanli; Wu, Yonghe
2014-01-01
Assessment of groups in a CSCL context is a challenging task fraught with many confounding factors to be collected and measured. Previously documented studies are by and large summative in nature, and some process-oriented methods require time-intensive coding of qualitative data. This study attempts to resolve these problems for teachers to assess groups…
ERIC Educational Resources Information Center
Research Triangle Inst., Durham, NC.
This manual for Exercise Administrators of the National Assessment of Educational Progress, Second Literature/Third Reading Assessment, consists of administrative instructions for use immediately preceding, during, and after assessment sessions in schools. Definitions of racial/ethnic categories, associated codes, and guidelines for soliciting…
Assessing the Quality of Teachers' Teaching Practices
ERIC Educational Resources Information Center
Chen, Weiyun; Mason, Stephen; Staniszewski, Christina; Upton, Ashley; Valley, Megan
2012-01-01
This study assessed the extent to which nine elementary physical education teachers implemented the quality of teaching practices. Thirty physical education lessons taught by the nine teachers to their students in grades K-5 were videotaped. Four investigators coded the taped lessons using the Assessing Quality Teaching Rubric (AQTR) designed and…
Personalized Assessment as a Means to Mitigate Plagiarism
ERIC Educational Resources Information Center
Manoharan, Sathiamoorthy
2017-01-01
Although every educational institution has a code of academic honesty, they still encounter incidents of plagiarism. These are difficult and time-consuming to detect and deal with. This paper explores the use of personalized assessments with the goal of reducing incidents of plagiarism, proposing a personalized assessment software framework…
An Analysis of State Alternate Assessment Participation Guidelines
ERIC Educational Resources Information Center
Musson, Jane E.; Thomas, Megan K.; Towles-Reeves, Elizabeth; Kearns, Jacqueline F.
2010-01-01
The purpose of this study was to examine all states' participation guidelines for alternate assessments based on alternate achievement standards (AA-AAS) and to analyze these guidelines for common and contrasting themes. State alternate assessment participation guidelines were found for all 50 states. Participation guidelines were coded, and 12…
Assessing Quality of Critical Thought in Online Discussion
ERIC Educational Resources Information Center
Weltzer-Ward, Lisa; Baltes, Beate; Lynn, Laura Knight
2009-01-01
Purpose: The purpose of this paper is to describe a theoretically based coding framework for an integrated analysis and assessment of critical thinking in online discussion. Design/methodology/approach: The critical thinking assessment framework (TAF) is developed through review of theory and previous research, verified by comparing results to…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-15
... Assessment (C-CASA) categories, along with definitions and explanations; (3) revises the advice on which... C-CASA categories eliminates the need for any additional coding; (7) provides multiple additional...
Deaf Children's Use of Phonological Coding: Evidence from Reading, Spelling, and Working Memory
ERIC Educational Resources Information Center
Harris, Margaret; Moreno, Constanza
2004-01-01
Two groups of deaf children, aged 8 and 14 years, were presented with a number of tasks designed to assess their reliance on phonological coding. Their performance was compared with that of hearing children of the same chronological age (CA) and reading age (RA). Performance on the first task, short-term recall of pictures, showed that the deaf…
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used to estimate certain rocket exhaust effluent concentrations and dosages, and their distributions near the Earth's surface, following a rocket launch event. Output from REEDS is used to produce near-real-time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
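A generic Gaussian plume estimate illustrates the kind of ground-level concentration calculation a diffusion code of this type performs; this is a textbook formulation with ground reflection, not REEDS's actual single-layer model, and all input values are placeholders.

```python
import math

# Textbook Gaussian plume: ground-level (z = 0) concentration downwind of an
# elevated continuous source, with ground reflection. Illustrative only --
# not the REEDS formulation; all parameter values are placeholders.

def ground_conc(Q, u, sigma_y, sigma_z, y, H):
    """Ground-level concentration (g/m^3) from a continuous point source.

    Q       : emission rate (g/s)
    u       : mean wind speed (m/s)
    sigma_y : horizontal dispersion parameter at this downwind distance (m)
    sigma_z : vertical dispersion parameter at this downwind distance (m)
    y       : crosswind distance from the plume centerline (m)
    H       : effective release height (m)
    """
    # The factor 1/pi (rather than 1/2pi) absorbs the doubled ground-
    # reflection term evaluated at z = 0.
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

c_centerline = ground_conc(Q=100.0, u=5.0, sigma_y=50.0, sigma_z=25.0,
                           y=0.0, H=100.0)
```

As expected, concentration is highest on the plume centerline and falls off with crosswind distance.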
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... Appendix G to the Code for calculating K IM factors, and instead applies FEM [finite element modeling..., Units 1 and 2 are calculated using the CE NSSS finite element modeling methods. The Need for the... Society of Mechanical Engineers (ASME) Code, Section XI, Appendix G) or determined by applying finite...
An Empirical Test of the Modified C Index and SII, O*NET, and DHOC Occupational Code Classifications
ERIC Educational Resources Information Center
Dik, Bryan J.; Hu, Ryan S. C.; Hansen, Jo-Ida C.
2007-01-01
The present study investigated new approaches for assessing Holland's congruence hypothesis by (a) developing and applying four sets of decision rules for assigning Holland codes of varying lengths for purposes of computing Eggerth and Andrew's modified C index; (b) testing the modified C index computed using these four approaches against Brown…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
....gov Web site is an ``anonymous access'' system, which means the EPA will not know your identity or... request exemptions in accordance with Ala. Admin. Code r. 335-3-14- 01(1) and (5). Respondent operated... Title 129 of Neb. Admin. Code 17-001.01. Respondent operated an emergency generator at its facility...
Team interaction during surgery: a systematic review of communication coding schemes.
Tiferes, Judith; Bisantz, Ann M; Guru, Khurshid A
2015-05-15
Communication problems have been systematically linked to human errors in surgery, and a deep understanding of the underlying processes is essential. Although a number of tools exist to assess nontechnical skills, methods to study communication and other team-related processes are far from standardized, making comparisons challenging. We conducted a systematic review to analyze methods used to study events in the operating room (OR) and to develop a synthesized coding scheme for OR team communication. Six electronic databases were searched for articles that collected individual events during surgery and included detailed coding schemes; additional articles were added based on cross-referencing. That collection was then classified based on type of events collected, environment type (real or simulated), number of procedures, type of surgical task, team characteristics, method of data collection, and coding scheme characteristics. All dimensions within each coding scheme were grouped based on emergent content similarity. Categories drawn from articles that focused on communication events were further analyzed and synthesized into one common coding scheme. A total of 34 of 949 articles met the inclusion criteria. The methodological characteristics and coding dimensions of the articles were summarized. A priori coding was used in nine studies. The synthesized coding scheme for OR communication included six dimensions: information flow, period, statement type, topic, communication breakdown, and effects of communication breakdown. The coding scheme provides a standardized coding method for OR communication, which can be used to develop a priori codes for future studies, especially in comparative effectiveness research.
Impact of Different Correlations on TRACEv4.160 Predicted Critical Heat Flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasiulevicius, A.; Macian-Juan, R.
2006-07-01
This paper presents an independent assessment of the Critical Heat Flux (CHF) models implemented in TRACEv4.160 with data from experiments carried out at the Royal Institute of Technology (RIT) in Stockholm, Sweden, with single vertical uniformly heated 7.0 m long tubes. In previous CHF assessment studies with TRACE, it was noted that, although the overall code predictions in long single tubes with inner diameters of 1.0 to 2.49 cm agreed rather well with the experiments (r.m.s. error of 25.6%), several regions of pressure and coolant mass flux could be identified in which the code strongly under-predicts or over-predicts the CHF. In order to evaluate the possibility of improving the code performance, some of the most widely used and assessed CHF correlations were additionally implemented in TRACEv4.160, namely Bowring, Levitan-Lantsman, and Tong-W3. The results obtained for the CHF predictions in single tubes with uniform axial heat flux using these correlations were compared to the results produced with the standard TRACE correlations (Biasi and CISE-GE) and with the experimental data from RIT, which covered a broad range of pressures (3-20 MPa) and coolant mass fluxes (500-3000 kg/m²·s). Several hundred experimental points were calculated to cover the parameter range mentioned above for the evaluation of the newly implemented correlations in the TRACEv4.160 code. (author)
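The r.m.s. figure of merit quoted above (25.6% in the earlier TRACE studies) is the root mean square of the relative prediction errors over all experimental points. A minimal sketch, with invented data points rather than the RIT measurements:

```python
import math

# R.m.s. relative error between predicted and measured CHF.
# The four data points below are invented for illustration.

measured  = [2.10, 1.85, 3.40, 2.75]   # measured CHF, MW/m^2
predicted = [2.30, 1.60, 3.55, 2.50]   # code-predicted CHF, MW/m^2

rel_err = [(p - m) / m for p, m in zip(predicted, measured)]
rms_percent = 100.0 * math.sqrt(sum(e * e for e in rel_err) / len(rel_err))
```

Computing the same statistic separately per pressure and mass-flux region is what exposes the local under- and over-prediction the overall figure hides.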
Bai, Jinbing; Swanson, Kristen M; Santacroce, Sheila J
2018-01-01
Parent interactions with their child can influence the child's pain and distress during painful procedures. Reliable and valid interaction analysis systems (IASs) are valuable tools for capturing these interactions, yet the extent to which IASs are used in observational research of parent-child interactions in pediatric populations is unknown. The aim was to identify and evaluate studies that assess the psychometric properties of initial iterations/publications of observational coding systems of parent-child interactions during painful procedures. Computerized databases searched included PubMed, CINAHL, PsycINFO, Health and Psychosocial Instruments, and Scopus, covering from inception of each database to January 2017. Studies were included if they reported use or psychometrics of parent-child IASs. The first assessment was whether the parent-child IASs were theory-based; next, using the Society of Pediatric Psychology Assessment Task Force criteria, IASs were assigned to one of three categories: well-established, approaching well-established, or promising. A total of 795 studies were identified through computerized searches. Eighteen studies were ultimately determined to be eligible for inclusion in the review, and 17 parent-child IASs were identified from these 18 studies. Among the 17 coding systems, 14 were suitable for use in children age 3 years or more; two were theory-based; and 11 included verbal and nonverbal parent behaviors that promoted either child coping or child distress. Four IASs were assessed as well-established; seven approached well-established; and six were promising. Findings indicate a need for the development of theory-based parent-child IASs that consider both verbal and nonverbal parent behaviors during painful procedures.
Findings also suggest a need for further testing of those parent-child IASs deemed "approaching well-established" or "promising".
Seelandt, Julia C; Tschan, Franziska; Keller, Sandra; Beldi, Guido; Jenni, Nadja; Kurmann, Anita; Candinas, Daniel; Semmer, Norbert K
2014-11-01
To develop a behavioural observation method to simultaneously assess distractors and communication/teamwork during surgical procedures through direct, on-site observations; to establish the reliability of the method for long (>3 h) procedures. Observational categories for an event-based coding system were developed based on expert interviews, observations and a literature review. Using Cohen's κ and the intraclass correlation coefficient, interobserver agreement was assessed for 29 procedures. Agreement was calculated for the entire surgery, and for the 1st hour. In addition, interobserver agreement was assessed between two tired observers and between a tired and a non-tired observer after 3 h of surgery. The observational system has five codes for distractors (door openings, noise distractors, technical distractors, side conversations and interruptions), eight codes for communication/teamwork (case-relevant communication, teaching, leadership, problem solving, case-irrelevant communication, laughter, tension and communication with external visitors) and five contextual codes (incision, last stitch, personnel changes in the sterile team, location changes around the table and incidents). Based on 5-min intervals, Cohen's κ was good to excellent for distractors (0.74-0.98) and for communication/teamwork (0.70-1). Based on frequency counts, intraclass correlation coefficient was excellent for distractors (0.86-0.99) and good to excellent for communication/teamwork (0.45-0.99). After 3 h of surgery, Cohen's κ was 0.78-0.93 for distractors, and 0.79-1 for communication/teamwork. The observational method developed allows a single observer to simultaneously assess distractors and communication/teamwork. Even for long procedures, high interobserver agreement can be achieved. Data collected with this method allow for investigating separate or combined effects of distractions and communication/teamwork on surgical performance and patient outcomes. 
Published by the BMJ Publishing Group Limited.
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field has experienced a surge of popularity, mainly due to the recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner and simultaneously track users' behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows an interesting correlation between subjective scores and average interaction time.
TDRSS telecommunications system, PN code analysis
NASA Technical Reports Server (NTRS)
Dixon, R.; Gold, R.; Kaiser, F.
1976-01-01
The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.
NASA Technical Reports Server (NTRS)
Teske, M. E.
1984-01-01
This is a user manual for the computer code "AGDISP" (AGricultural DISPersal) which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.
Rapid Assessment of Agility for Conceptual Design Synthesis
NASA Technical Reports Server (NTRS)
Biezad, Daniel J.
1996-01-01
This project consists of designing and implementing a real-time graphical interface for a workstation-based flight simulator. It is capable of creating a three-dimensional out-the-window scene of the aircraft's flying environment, with extensive information about the aircraft's state displayed in the form of a heads-up-display (HUD) overlay. The code, written in the C programming language, makes calls to Silicon Graphics' Graphics Library (GL) to draw the graphics primitives. Included in this report is a detailed description of the capabilities of the code, including graphical examples, as well as a printout of the code itself.
Caskey, Rachel N; Abutahoun, Angelos; Polick, Anne; Barnes, Michelle; Srivastava, Pavan; Boyd, Andrew D
2018-05-04
The US health care system uses diagnostic codes for billing and reimbursement as well as for quality assessment and measuring clinical outcomes. The US transitioned to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) in October 2015. Little is known about the impact of ICD-10-CM on internal medicine and medicine subspecialists. We used a state-wide data set from Illinois Medicaid specified for Internal Medicine providers and subspecialists. A total of 3191 ICD-9-CM codes were used for 51,078 patient encounters, for a total cost of US $26,022,022 for all internal medicine. We categorized all of the ICD-9-CM codes based on the complexity of mapping to ICD-10-CM, since codes with complex mapping could result in billing or administrative errors during the transition. Codes found to have complex mapping and frequently used codes (n = 295) were analyzed for clinical accuracy of mapping to ICD-10-CM. Each subspecialty was analyzed for the complexity of codes used and the proportion of reimbursement associated with complex codes. Twenty-five percent of internal medicine codes have convoluted mapping to ICD-10-CM; these represent 22% of Illinois Medicaid patients and 30% of reimbursements. Rheumatology and Endocrinology had the greatest proportion of visits and reimbursement associated with complex codes. We found that 14.5% of ICD-9-CM codes used by internists, when mapped to ICD-10-CM, resulted in potential clinical inaccuracies. We identified that 43% of the diagnostic codes evaluated and used by internists, accounting for 14% of internal medicine reimbursements, are associated with mappings that could result in administrative errors.
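The simple-versus-convoluted distinction described above can be sketched as a lookup over an equivalence map. The fragment below uses a tiny, hand-picked, hypothetical map for illustration; a real analysis would load the official CMS General Equivalence Mappings rather than this dictionary.

```python
# Hypothetical fragment of an ICD-9-CM to ICD-10-CM equivalence map.
# A real analysis would load the official General Equivalence Mappings.
gem = {
    "401.9": ["I10"],                # essential hypertension: one-to-one
    "250.00": ["E11.9"],             # type 2 diabetes: one-to-one
    "714.0": ["M05.79", "M06.09"],   # rheumatoid arthritis: one-to-many
}

def mapping_complexity(icd9_code, mapping):
    """Classify an ICD-9-CM code by how it maps to ICD-10-CM:
    a single target is 'simple', multiple targets are 'complex'."""
    targets = mapping.get(icd9_code)
    if not targets:
        return "unmapped"
    return "simple" if len(targets) == 1 else "complex"

for code in gem:
    print(code, mapping_complexity(code, gem))
```

Tallying encounter counts and reimbursements per complexity class would then reproduce the kind of proportions reported in the abstract.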
Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken
2017-10-16
Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.
Amoroso, P J; Smith, G S; Bell, N S
2000-04-01
Accurate injury cause data are essential for injury prevention research. U.S. military hospitals, unlike civilian hospitals, use the NATO STANAG system for cause-of-injury coding. Reported deficiencies in civilian injury cause data suggested a need to specifically evaluate the STANAG. The Total Army Injury and Health Outcomes Database (TAIHOD) was used to evaluate worldwide Army injury hospitalizations, especially STANAG Trauma, Injury, and Place of Occurrence coding. We conducted a review of hospital procedures at Tripler Army Medical Center (TAMC) including injury cause and intent coding, potential crossover between acute injuries and musculoskeletal conditions, and data for certain hospital patients who are not true admissions. We also evaluated the use of free-text injury comment fields in three hospitals. Army-wide review of injury records coding revealed full compliance with cause coding, although nonspecific codes appeared to be overused. A small but intensive single hospital records review revealed relatively poor intent coding but good activity and cause coding. Data on specific injury history were present on most acute injury records and 75% of musculoskeletal conditions. Place of Occurrence coding, although inherently nonspecific, was over 80% accurate. Review of text fields produced additional details of the injuries in over 80% of cases. STANAG intent coding specificity was poor, while coding of cause of injury was at least comparable to civilian systems. The strengths of military hospital data systems are an exceptionally high compliance with injury cause coding, the availability of free text, and capture of all population hospital records without regard to work-relatedness. Simple changes in procedures could greatly improve data quality.
Ramírez de Arellano, A; Coca, A; de la Figuera, M; Rubio-Terrés, C; Rubio-Rodríguez, D; Gracia, A; Boldeanu, A; Puig-Gilberte, J; Salas, E
2013-10-01
A clinical–genetic function (Cardio inCode®) was generated using genetic variants associated with coronary heart disease (CHD), but not with classical CHD risk factors, to achieve a more precise estimation of the CHD risk of individuals by incorporating genetics into risk equations [Framingham and REGICOR (Registre Gironí del Cor)]. The objective of this study was to conduct an economic analysis of the CHD risk assessment with Cardio inCode®, which incorporates the patient’s genetic risk into the functions of REGICOR and Framingham, compared with the standard method (using only the functions). A Markov model was developed with seven states of health (low CHD risk, moderate CHD risk, high CHD risk, CHD event, recurrent CHD, chronic CHD, and death). The reclassification of CHD risk derived from genetic information and transition probabilities between states was obtained from a validation study conducted in cohorts of REGICOR (Spain) and Framingham (USA). It was assumed that patients classified as at moderate risk by the standard method were the best candidates to test the risk reclassification with Cardio inCode®. The utilities and costs (€; year 2011 values) of Markov states were obtained from the literature and Spanish sources. The analysis was performed from the perspective of the Spanish National Health System, for a life expectancy of 82 years in Spain. An annual discount rate of 3.5 % for costs and benefits was applied. For a Cardio inCode® price of €400, the cost per QALY gained compared with the standard method [incremental cost-effectiveness ratio (ICER)] would be €12,969 and €21,385 in REGICOR and Framingham cohorts, respectively. The threshold price of Cardio inCode® to reach the ICER threshold generally accepted in Spain (€30,000/QALY) would range between €668 and €836. 
The greatest benefit occurred in the subgroup of patients with moderate–high risk, with a high-risk reclassification of 22.8 % and 12 % of patients and an ICER of €1,652/QALY and €5,884/QALY in the REGICOR and Framingham cohorts, respectively. Sensitivity analyses confirmed the stability of the study results. Cardio inCode® is a cost-effective risk score option in CHD risk assessment compared with the standard method.
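The comparisons above rest on the standard incremental cost-effectiveness ratio. A minimal sketch follows; the numbers in the example are purely illustrative and do not come from the study's seven-state Markov model.

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost of the new
    strategy per additional QALY gained over the standard one."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Illustrative values only; the study derived lifetime costs and QALYs
# from a Markov model over seven health states, not reproduced here.
extra_cost_per_qaly = icer(cost_new=2_000.0, cost_std=1_600.0,
                           qaly_new=12.03, qaly_std=12.00)
print(extra_cost_per_qaly)
accepted_threshold = 30_000  # €/QALY threshold cited for Spain
print(extra_cost_per_qaly < accepted_threshold)
```

Solving the same equation for the test price at a fixed ICER of €30,000/QALY is how a threshold price range like €668-€836 can be derived.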
Emerging technology for transonic wind-tunnel-wall interference assessment and corrections
NASA Technical Reports Server (NTRS)
Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.
1988-01-01
Several nonlinear transonic codes and a panel method code for wind tunnel/wall interference assessment and correction (WIAC) studies are reviewed. Contrasts between two- and three-dimensional transonic testing factors which affect WIAC procedures are illustrated with airfoil data from the NASA/Langley 0.3-meter transonic cryogenic tunnel and Pathfinder I data. Also, three-dimensional transonic WIAC results for Mach number and angle-of-attack corrections to data from a relatively large 20 deg swept semispan wing in the solid wall NASA/Ames high Reynolds number Channel I are verified by three-dimensional thin-layer Navier-Stokes free-air solutions.
Heyman, Richard E.
2006-01-01
The purpose of this review is to provide a balanced examination of the published research involving the observation of couples, with special attention toward the use of observation for clinical assessment. All published articles that (a) used an observational coding system and (b) relate to the validity of the coding system are summarized in a table. The psychometric properties of observational systems and the use of observation in clinical practice are discussed. Although advances have been made in understanding couple conflict through the use of observation, the review concludes with an appeal to the field to develop constructs in a psychometrically and theoretically sound manner. PMID:11281039
Software Process Assessment (SPA)
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Sheppard, Sylvia B.; Butler, Scott A.
1994-01-01
NASA's environment mirrors the changes taking place in the nation at large, i.e. workers are being asked to do more work with fewer resources. For software developers at NASA's Goddard Space Flight Center (GSFC), the effects of this change are that we must continue to produce quality code that is maintainable and reusable, but we must learn to produce it more efficiently and less expensively. To accomplish this goal, the Data Systems Technology Division (DSTD) at GSFC is trying a variety of both proven and state-of-the-art techniques for software development (e.g., object-oriented design, prototyping, designing for reuse, etc.). In order to evaluate the effectiveness of these techniques, the Software Process Assessment (SPA) program was initiated. SPA was begun under the assumption that the effects of different software development processes, techniques, and tools, on the resulting product must be evaluated in an objective manner in order to assess any benefits that may have accrued. SPA involves the collection and analysis of software product and process data. These data include metrics such as effort, code changes, size, complexity, and code readability. This paper describes the SPA data collection and analysis methodology and presents examples of benefits realized thus far by DSTD's software developers and managers.
Noel, Jonathan K; Xuan, Ziming; Babor, Thomas F
2017-07-03
Beer marketing in the United States is controlled through self-regulation, whereby the beer industry has created a marketing code and enforces its use. We performed a thematic content analysis on beer ads broadcast during a U.S. college athletic event and determined which themes are associated with violations of a self-regulated alcohol marketing code. A total of 289 beer ads broadcast during the U.S. NCAA Men's and Women's 1999-2008 basketball tournaments were assessed for the presence of 23 thematic content areas. Associations between themes and violations of the U.S. Beer Institute's Marketing and Advertising Code were determined using generalized linear models. Humor (61.3%), taste (61.0%), masculinity (49.2%), and enjoyment (36.5%) were the most prevalent content areas. Nine content areas (i.e., conformity, ethnicity, sensation seeking, sociability, romance, special occasions, text responsibility messages, tradition, and individuality) were positively associated with code violations (p < 0.001-0.042). There were significantly more content areas positively associated with code violations than content areas negatively associated with code violations (p < 0.001). Several thematic content areas were positively associated with code violations. The results can inform existing efforts to revise self-regulated alcohol marketing codes to ensure better protection of vulnerable populations. The use of several themes is concerning in relation to adolescent alcohol use and health disparities.
The effect of multiple internal representations on context-rich instruction
NASA Astrophysics Data System (ADS)
Lasry, Nathaniel; Aulls, Mark W.
2007-11-01
We discuss n-coding, a theoretical model of multiple internal mental representations. The n-coding construct is developed from a review of cognitive and imaging data that demonstrates the independence of information processed along different modalities such as verbal, visual, kinesthetic, logico-mathematic, and social modalities. A study testing the effectiveness of the n-coding construct in classrooms is presented. Four sections differing in the level of n-coding opportunities were compared. Besides a traditional-instruction section used as a control group, each of the remaining three sections was given context-rich problems, which differed by the level of n-coding opportunities designed into their laboratory environment. To measure the effectiveness of the construct, problem-solving skills were assessed as conceptual learning using the force concept inventory. We also developed several new measures that take students' confidence in concepts into account. Our results show that the n-coding construct is useful in designing context-rich environments and can be used to increase learning gains in problem solving, conceptual knowledge, and concept confidence. Specifically, when using props in designing context-rich problems, we find n-coding to be a useful construct in guiding which additional dimensions need to be attended to.
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
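The "random chance" scenario described above is straightforward to simulate: each sampling step independently observes each code with some probability, and sampling stops once every code has been seen. The sketch below assumes a hypothetical population of 10 equally probable codes; the probabilities and population are invented for illustration.

```python
import random

def sample_until_saturation(code_probs, rng, max_steps=10_000):
    """Random-chance scenario: draw information sources until every
    code in the population has been observed at least once; return
    the number of sources (the sample size at saturation)."""
    seen = set()
    for step in range(1, max_steps + 1):
        # Each source reveals each code independently with its probability.
        drawn = {c for c, p in code_probs.items() if rng.random() < p}
        seen |= drawn
        if len(seen) == len(code_probs):
            return step
    return max_steps

rng = random.Random(42)
probs = {f"code{i}": 0.3 for i in range(10)}  # hypothetical population
sizes = [sample_until_saturation(probs, rng) for _ in range(200)]
print(sum(sizes) / len(sizes))  # mean sample size at saturation
```

Lowering the per-code observation probabilities inflates the required sample size far faster than adding codes does, which matches the abstract's central finding.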
DRG coding practice: a nationwide hospital survey in Thailand.
Pongpirul, Krit; Walker, Damian G; Rahman, Hafizur; Robinson, Courtland
2011-10-31
Diagnosis Related Group (DRG) payment is favored by healthcare reforms in various countries, but its implementation in resource-limited countries has not been fully explored. This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice. A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the survey also included 390 non-SCAD hospitals. The questionnaire asked about general demographics of the hospitals, hospital coding structure and process, and also included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis. SCAD and non-SCAD hospitals were different in many aspects, especially the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention. Hospital providers should not be assumed capable of producing high quality DRG codes, especially in resource-limited settings.
Gilmore-Bykovskyi, Andrea L.
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. In order to enable identification of potential antecedents to mealtime behavioral symptoms, a computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the acceptability and feasibility of procedures for video-capturing naturally-occurring mealtime interactions between caregivers and residents with dementia, to assess the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were feasible and acceptable to caregivers, residents and their legally authorized representatives. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. PMID:25784080
An integrity measure to benchmark quantum error correcting memories
NASA Astrophysics Data System (ADS)
Xu, Xiaosi; de Beaudrap, Niel; O'Gorman, Joe; Benjamin, Simon C.
2018-02-01
Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully ‘guess’ the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the ‘best’ code upon which to base a memory does vary according to the nature and severity of the noise, nevertheless certain trends emerge.
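As a toy illustration of the pseudo-threshold idea mentioned above (an encoded memory should outperform an unencoded qubit only when the physical error rate is low enough), the sketch below Monte Carlo-simulates a classical 3-bit repetition code under independent bit flips. This is an invented classical analogue, not the integrity metric or the five-, seven-, or nine-qubit codes studied in the paper.

```python
import random

def repetition_success(p_flip, n_trials, rng):
    """Fraction of trials in which majority-vote decoding of a 3-bit
    repetition code recovers the stored logical bit, given independent
    bit-flip noise with probability p_flip per bit."""
    ok = 0
    for _ in range(n_trials):
        flips = sum(rng.random() < p_flip for _ in range(3))
        if flips <= 1:  # majority vote corrects zero or one flip
            ok += 1
    return ok / n_trials

rng = random.Random(1)
p = 0.1
encoded = repetition_success(p, 50_000, rng)
# An unencoded bit survives with probability 1 - p; below the
# pseudo-threshold the encoded memory should beat that baseline.
print(encoded, 1 - p)
```

Analytically the encoded success probability is 1 - 3p²(1-p) - p³ ≈ 0.972 at p = 0.1, above the unencoded 0.9; raising p past the pseudo-threshold reverses the ordering.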
ESCAPE: Eco-Behavioral System for Complex Assessments of Preschool Environments. Research Draft.
ERIC Educational Resources Information Center
Carta, Judith J.; And Others
The manual details an observational code designed to track a child during an entire day in a preschool setting. The Eco-Behavioral System for Complex Assessments of Preschool Environments (ESCAPE) encompasses assessment of the following three major categories of variables with their respective subcategories: (1) ecological variables (designated…
ERIC Educational Resources Information Center
Scott, Debbie; Tonmyr, Lil; Fraser, Jenny; Walker, Sue; McKenzie, Kirsten
2009-01-01
Objective: The objectives of this article are to explore the extent to which the International Statistical Classification of Diseases and Related Health Problems (ICD) has been used in child abuse research, to describe how the ICD system has been applied, and to assess factors affecting the reliability of ICD coded data in child abuse research.…
A site-specific approach for assessing the fire risk to structures at the wildland/urban interface
Jack Cohen
1991-01-01
The essence of the wildland/urban interface fire problem is the loss of homes. The problem is not new, but is becoming increasingly important as more homes with inadequate adherence to safety codes are built at the wildland/urban interface. Current regulatory codes are inflexible. Specifications for building and site characteristics cannot be adjusted to accommodate...
Castrignanò, Tiziana; Canali, Alessandro; Grillo, Giorgio; Liuni, Sabino; Mignone, Flavio; Pesole, Graziano
2004-01-01
The identification and characterization of genome tracts that are highly conserved across species during evolution may contribute significantly to the functional annotation of whole-genome sequences. Indeed, such sequences are likely to correspond to known or unknown coding exons or regulatory motifs. Here, we present a web server implementing a previously developed algorithm that, by comparing user-submitted genome sequences, is able to identify statistically significant conserved blocks and assess their coding or noncoding nature through the measure of a coding potential score. The web tool, available at http://www.caspur.it/CSTminer/, is dynamically interconnected with the Ensembl genome resources and produces a graphical output showing a map of detected conserved sequences and annotated gene features. PMID:15215464
Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors
Epiney, A.; Canepa, S.; Zerkak, O.; ...
2016-11-02
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represent the evolution of specific analysis aspects, including e.g. code version, transient specific simulation methodology and model "nodalisation". If properly set up, such environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperatures distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
Investigating the use of quick response codes in the gross anatomy laboratory.
Traser, Courtney J; Hoffman, Leslie A; Seifert, Mark F; Wilson, Adam B
2015-01-01
The use of quick response (QR) codes within undergraduate university courses is on the rise, yet literature concerning their use in medical education is scant. This study examined student perceptions on the usefulness of QR codes as learning aids in a medical gross anatomy course, statistically analyzed whether this learning aid impacted student performance, and evaluated whether performance could be explained by the frequency of QR code usage. Question prompts and QR codes tagged on cadaveric specimens and models were available for four weeks as learning aids to medical (n = 155) and doctor of physical therapy (n = 39) students. Each QR code provided answers to posed questions in the form of embedded text or hyperlinked web pages. Students' perceptions were gathered using a formative questionnaire and practical examination scores were used to assess potential gains in student achievement. Overall, students responded positively to the use of QR codes in the gross anatomy laboratory as 89% (57/64) agreed the codes augmented their learning of anatomy. The users' most noticeable objection to using QR codes was the reluctance to bring their smartphones into the gross anatomy laboratory. A comparison between the performance of QR code users and non-users was found to be nonsignificant (P = 0.113), and no significant gains in performance (P = 0.302) were observed after the intervention. Learners welcomed the implementation of QR code technology in the gross anatomy laboratory, yet this intervention had no apparent effect on practical examination performance. © 2014 American Association of Anatomists.
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single-group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t24=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
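The paired t test reported above compares each provider's accuracy before and after the interventions, one pair per provider. A minimal pure-Python version of the statistic follows; the accuracy rates in the example are invented, not the study's data.

```python
import math

def paired_t(before, after):
    """Paired t statistic for pre/post accuracy rates (one pair per
    provider): mean difference divided by its standard error."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Invented coding-accuracy rates for three hypothetical providers.
before = [0.25, 0.30, 0.22]
after = [0.27, 0.31, 0.25]
print(round(paired_t(before, after), 3))
```

A t value near zero with a high P value, as in the study (t = -0.127, P = .90), indicates no detectable change in mean accuracy.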
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Bill Walter; Chang, Fu-lin; Mattie, Patrick D.
2006-02-01
Sandia National Laboratories (SNL) and Taiwan's Institute for Nuclear Energy Research (INER) have teamed together to evaluate several candidate sites for Low-Level Radioactive Waste (LLW) disposal in Taiwan. Taiwan currently has three nuclear power plants, with another under construction. Taiwan also has a research reactor, as well as medical and industrial wastes to contend with. Eventually the reactors will be decommissioned. Operational and decommissioning wastes will need to be disposed in a licensed disposal facility starting in 2014. Taiwan has adopted regulations similar to the US Nuclear Regulatory Commission's (NRC's) low-level radioactive waste rules (10 CFR 61) to govern the disposal of LLW. Taiwan has proposed several potential sites for the final disposal of LLW that is now in temporary storage on Lanyu Island and on-site at operating nuclear power plants, and for waste generated in the future through 2045. The planned final disposal facility will have a capacity of approximately 966,000 55-gallon drums. Taiwan is in the process of evaluating the best candidate site to pursue for licensing. Among these proposed sites there are basically two disposal concepts: shallow land burial and cavern disposal. A representative potential site for shallow land burial is located on a small island in the Taiwan Strait with basalt bedrock and interbedded sedimentary rocks. An engineered cover system would be constructed to limit infiltration for shallow land burial. A representative potential site for cavern disposal is located along the southeastern coast of Taiwan in a tunnel system that would be about 500 to 800 m below the surface. Bedrock at this site consists of argillite and meta-sedimentary rocks. Performance assessment analyses will be performed to evaluate future performance of the facility and the potential dose/risk to exposed populations.
Preliminary performance assessment analyses will be used in the site-selection process and to aid in design of the disposal system. Final performance assessment analyses will be used in the regulatory process of licensing a site. The SNL/INER team has developed a performance assessment methodology that is used to simulate processes associated with the potential release of radionuclides to evaluate these sites. The following software codes are utilized in the performance assessment methodology: GoldSim (to implement a probabilistic analysis that will explicitly address uncertainties); the NRC's Breach, Leach, and Transport - Multiple Species (BLT-MS) code (to simulate waste-container degradation, waste-form leaching, and transport through the host rock); the Finite Element Heat and Mass Transfer code (FEHM) (to simulate groundwater flow and estimate flow velocities); the Hydrologic Evaluation of Landfill Performance (HELP) code (to evaluate infiltration through the disposal cover); the AMBER code (to evaluate human health exposures); and the NRC's Disposal Unit Source Term - Multiple Species (DUST-MS) code (to screen applicable radionuclides). Preliminary results of the evaluations of the two disposal concept sites are presented.
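The probabilistic, uncertainty-propagating analysis that GoldSim contributes to this methodology can be sketched as a minimal Monte Carlo loop. The dose model, parameter distributions, and coefficients below are purely illustrative assumptions, not values from the SNL/INER assessment:

```python
import random
import statistics

def annual_dose_sv(infiltration_m_per_yr, leach_fraction_per_m,
                   dose_factor_sv_per_bq, inventory_bq):
    """Toy dose model: annual release scales with infiltration and leaching."""
    release_bq = inventory_bq * leach_fraction_per_m * infiltration_m_per_yr
    return release_bq * dose_factor_sv_per_bq

random.seed(42)
doses = []
for _ in range(10_000):
    # Sample uncertain inputs (illustrative log-uniform / uniform ranges)
    infiltration = 10 ** random.uniform(-4, -2)   # m/yr through the cover
    leach = random.uniform(1e-6, 1e-4)            # fraction leached per m of water
    doses.append(annual_dose_sv(infiltration, leach, 1e-8, 1e12))

doses.sort()
p95 = doses[int(0.95 * len(doses))]
print(f"mean dose: {statistics.mean(doses):.2e} Sv/yr, "
      f"95th percentile: {p95:.2e} Sv/yr")
```

In a real assessment each sampled realization would drive the coupled flow, leaching, and transport codes; the percentile of the resulting dose distribution is then compared against the regulatory limit.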
The SERGISAI procedure for seismic risk assessment
NASA Astrophysics Data System (ADS)
Zonno, G.; Garcia-Fernandez, M.; Jimenez, M.J.; Menoni, S.; Meroni, F.; Petrini, V.
The European project SERGISAI developed a computational tool where a methodology for seismic risk assessment at different geographical scales has been implemented. Experts of various disciplines, including seismologists, engineers, planners, geologists, and computer scientists, co-operated in an actual multidisciplinary process to develop this tool. Standard procedural codes, Geographical Information Systems (GIS), and Artificial Intelligence (AI) techniques compose the whole system, which will enable the end user to carry out a complete seismic risk assessment at three geographical scales: regional, sub-regional and local. At present, single codes or models that have been incorporated are not new in general, but the modularity of the prototype, based on a user-friendly front-end, offers potential users the possibility of updating or replacing any code or model if desired. The proposed procedure is a first attempt to integrate tools, codes and methods for assessing expected earthquake damage, and it was mainly designed to become a useful support for civil defence and land use planning agencies. Risk factors have been treated in the most suitable way for each one, in terms of level of detail, kind of parameters and units of measure. Identifying various geographical scales is not a mere question of dimension, since entities to be studied correspond to areas defined by administrative and geographical borders. The procedure was applied in the following areas: Toscana in Italy, for the regional scale; the Garfagnana area in Toscana, for the sub-regional scale; and a part of Barcelona city, Spain, for the local scale.
Jalilvand, Aryan; Fleming, Margaret; Moreno, Courtney; MacFarlane, Dan; Duszak, Richard
2018-01-01
The 2015 conversion of the International Classification of Diseases (ICD) system from the ninth revision (ICD-9) to the 10th revision (ICD-10) was widely projected to adversely impact physician practices. We aimed to assess code conversion impact factor (CCIF) projections and revenue delay impact to help radiology groups better prepare for the eventual conversion to the ICD 11th revision (ICD-11). Studying 673,600 claims for 179 radiologists for the first year after ICD-10's implementation, we identified primary ICD-10 codes for the top 90th percentile of all examinations for the entire enterprise and each subspecialty division. Using established methodology, we calculated CCIFs (actual ICD-10 codes ÷ prior ICD-9 codes). To assess ICD-10's impact on cash flow, average monthly days in accounts receivable status was compared for the 12 months before and after conversion. Of all 69,823 ICD-10 codes, only 7,075 were used to report primary diagnoses across the entire practice, and just 562 were used to report 90% of all claims, compared with 348 under ICD-9. This translates to an overall CCIF of 1.6 for the department (far less than the literature-predicted 6). By subspecialty division, CCIFs ranged from 0.7 (breast) to 3.5 (musculoskeletal). Monthly average days in accounts receivable for the 12 months before and after ICD-10 conversion did not increase. The operational impact of the ICD-10 transition on radiology practices appears far less than anticipated with respect to both CCIF and delays in cash flow. Predictive models should be refined to help practices better prepare for ICD-11. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
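The department-level CCIF reported in the abstract follows directly from the stated code counts (562 ICD-10 codes vs. 348 ICD-9 codes covering 90% of claims):

```python
def code_conversion_impact_factor(icd10_codes_used: int, icd9_codes_used: int) -> float:
    """CCIF = actual ICD-10 codes used / prior ICD-9 codes used."""
    return icd10_codes_used / icd9_codes_used

# Department-level figures reported in the abstract:
ccif = code_conversion_impact_factor(562, 348)
print(f"overall CCIF: {ccif:.1f}")  # 562/348 ≈ 1.6, far below the predicted 6
```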
Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario
Öttl, Anton; Behne, Dawn M.
2017-01-01
The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender-coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender coding system was not perfectly mastered (as reflected in the number of gender coding errors), participants develop frequency-based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with an induced expectation. At the same time, however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust against manner of acquisition, and contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress. PMID:28936186
Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study
Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.
2012-01-01
Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242
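The study's odds ratios come from logistic regression adjusted for covariates. As a hedged illustration of the unadjusted analogue, an odds ratio and Wald 95% confidence interval can be computed from a 2×2 table; the counts below are hypothetical and chosen only to show the calculation, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases, b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts for illustration only (active vs. inactive person-seasons):
or_, lower, upper = odds_ratio_ci(900, 49100, 1100, 48900)
print(f"OR {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

An OR below 1 with a CI excluding 1, as reported for the moderately active and active groups, indicates a protective association.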
Gildersleeve, Sara; Singer, Jefferson A; Skerrett, Karen; Wein, Shelter
2017-05-01
"We-ness," a couple's mutual investment in their relationship and in each other, has been found to be a potent dimension of couple resilience. This study examined the development of a method to capture We-ness in psychotherapy through the coding of relationship narratives co-constructed by couples ("We-Stories"). It used a coding system to identify the core thematic elements that make up these narratives. Couples that self-identified as "happy" (N = 53) generated We-Stories and completed measures of relationship satisfaction and mutuality. These stories were then coded using the We-Stories coding manual. Findings indicated that security, an element that involves aspects of safety, support, and commitment, was most common, appearing in 58.5% of all narratives. This element was followed by the elements of pleasure (49.1%) and shared meaning/vision (37.7%). The number of "We-ness" elements was also correlated with and predictive of discrepancy scores on measures of relationship mutuality, indicating the validity of the We-Stories coding manual. Limitations and future directions are discussed.
Medicaid provider reimbursement policy for adult immunizations☆
Stewart, Alexandra M.; Lindley, Megan C.; Cox, Marisa A.
2015-01-01
Background State Medicaid programs establish provider reimbursement policy for adult immunizations based on: costs, private insurance payments, and percentage of Medicare payments for equivalent services. Each program determines provider eligibility, payment amount, and permissible settings for administration. Total reimbursement consists of different combinations of Current Procedural Terminology codes: vaccine, vaccine administration, and visit. Objective Determine how Medicaid programs in the 50 states and the District of Columbia approach provider reimbursement for adult immunizations. Design Observational analysis using document review and a survey. Setting and participants Medicaid administrators in 50 states and the District of Columbia. Measurements Whether fee-for-service programs reimburse providers for: vaccines; their administration; and/or office visits when provided to adult enrollees. We assessed whether adult vaccination services are reimbursed when administered by a wide range of providers in a wide range of settings. Results Medicaid programs use one of 4 payment methods for adults: (1) a vaccine and an administration code; (2) a vaccine and visit code; (3) a vaccine code; and (4) a vaccine, visit, and administration code. Limitations Study results do not reflect any changes related to implementation of national health reform. Nine of fifty-one programs did not respond to the survey or declined to participate, limiting the information available to researchers. Conclusions Medicaid reimbursement policy for adult vaccines impacts provider participation and enrollee access and uptake. While programs have generally increased reimbursement levels since 2003, each program could assess whether current policies reflect the most effective approach to encourage providers to increase vaccination services. PMID:26403369
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Newman, Perry A.
1991-01-01
A nonlinear, four wall, post-test wall interference assessment/correction (WIAC) code was developed for transonic airfoil data from solid wall wind tunnels with flexibly adaptable top and bottom walls. The WIAC code was applied over a broad range of test conditions to four sets of NACA 0012 airfoil data, from two different adaptive wall wind tunnels. The data include many test points for fully adapted walls, as well as numerous partially adapted and unadapted test points, which together represent many different model/tunnel configurations and possible wall interference effects. Small corrections to the measured Mach numbers and angles of attack were obtained from the WIAC code even for fully adapted data; these corrections generally improve the correlation among the various sets of airfoil data and simultaneously improve the correlation of the data with calculations for a 2-D, free air, Navier-Stokes code. The WIAC corrections for airfoil data taken in fully adapted wall test sections are shown to be significantly smaller than those for comparable airfoil data from straight, slotted wall test sections. This indicates, as expected, a lesser degree of wall interference in the adapted wall tunnels relative to the slotted wall tunnels. Application of the WIAC code to this data was, however, somewhat more difficult and time consuming than initially expected from similar previous experience with WIAC applications to slotted wall data.
Domier, L L; Latorre, I J; Steinlage, T A; McCoppin, N; Hartman, G L
2003-10-01
The variability of North American and Asian strains and isolates of Soybean mosaic virus was investigated. First, polymerase chain reaction (PCR) products representing the coat protein (CP)-coding regions of 38 SMVs were analyzed for restriction fragment length polymorphisms (RFLP). Second, the nucleotide and predicted amino acid sequence variability of the P1-coding region of 18 SMVs and the helper component/protease (HC/Pro) and CP-coding regions of 25 SMVs were assessed. The CP nucleotide and predicted amino acid sequences were the most similar and predicted phylogenetic relationships similar to those obtained from RFLP analysis. Neither RFLP nor sequence analyses of the CP-coding regions grouped the SMVs by geographical origin. The P1 and HC/Pro sequences were more variable and separated the North American and Asian SMV isolates into two groups similar to previously reported differences in pathogenic diversity of the two sets of SMV isolates. The P1 region was the most informative of the three regions analyzed. To assess the biological relevance of the sequence differences in the HC/Pro and CP coding regions, the transmissibility of 14 SMV isolates by Aphis glycines was tested. All field isolates of SMV were transmitted efficiently by A. glycines, but the laboratory isolates analyzed were transmitted poorly. The amino acid sequences from most, but not all, of the poorly transmitted isolates contained mutations in the aphid transmission-associated DAG and/or KLSC amino acid sequence motifs of CP and HC/Pro, respectively.
Huo, Jinhai; Yang, Ming; Tina Shih, Ya-Chen
2018-03-01
The "meaningful use of certified electronic health record" policy requires eligible professionals to record smoking status for more than 50% of all individuals aged 13 years or older in 2011 to 2012. To explore whether the coding to document smoking behavior has increased over time and to assess the accuracy of smoking-related diagnosis and procedure codes in identifying previous and current smokers. We conducted an observational study with 5,423,880 enrollees from the year 2009 to 2014 in the Truven Health Analytics database. Temporal trends of smoking coding, sensitivity, specificity, positive predictive value, and negative predictive value were measured. The rate of coding of smoking behavior improved significantly by the end of the study period. The proportion of patients in the claims data recorded as current smokers increased 2.3-fold and the proportion of patients recorded as previous smokers increased 4-fold during the 6-year period. The sensitivity of each International Classification of Diseases, Ninth Revision, Clinical Modification code was generally less than 10%. The diagnosis code of tobacco use disorder (305.1X) was the most sensitive code (9.3%) for identifying smokers. The specificities of these codes and the Current Procedural Terminology codes were all more than 98%. A large improvement in the coding of current and previous smoking behavior has occurred since the inception of the meaningful use policy. Nevertheless, the use of diagnosis and procedure codes to identify smoking behavior in administrative data is still unreliable. This suggests that quality improvements toward medical coding on smoking behavior are needed to enhance the capability of claims data for smoking-related outcomes research. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
High Temperature Gas Reactors: Assessment of Applicable Codes and Standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDowell, Bruce K.; Nickolaus, James R.; Mitchell, Mark R.
2011-10-31
Current interest expressed by industry in HTGR plants, particularly modular plants with power up to about 600 MW(e) per unit, has prompted NRC to task PNNL with assessing the currently available literature related to codes and standards applicable to HTGR plants, the operating history of past and present HTGR plants, and with evaluating the proposed designs of RPV and associated piping for future plants. Considering these topics in the order they are arranged in the text, first the operational histories of five shut-down and two currently operating HTGR plants are reviewed, leading the authors to conclude that while small, simple prototype HTGR plants operated reliably, some of the larger plants, particularly Fort St. Vrain, had poor availability. Safety and radiological performance of these plants has been considerably better than that of LWR plants. Petroleum processing plants provide some applicable experience with materials similar to those proposed for HTGR piping and vessels. At least one currently operating plant - HTR-10 - has performed and documented a leak-before-break analysis that appears to be applicable to proposed future US HTGR designs. Current codes and standards cover some HTGR materials, but not all materials are covered to the high temperatures envisioned for HTGR use. Codes and standards, particularly ASME Codes, are under development for proposed future US HTGR designs. A 'roadmap' document has been prepared for ASME Code development; a new subsection to section III of the ASME Code, ASME BPVC III-5, is scheduled to be published in October 2011. The question of terminology for the cross-duct structure between the RPV and power conversion vessel is discussed, considering the differences in regulatory requirements that apply depending on whether this structure is designated as a 'vessel' or as a 'pipe'.
We conclude that designing this component as a 'pipe' is the more appropriate choice, but that the ASME BPVC allows the owner of the facility to select the preferred designation, and that either designation can be acceptable.
SETI in vivo: testing the we-are-them hypothesis
NASA Astrophysics Data System (ADS)
Makukov, Maxim A.; Shcherbak, Vladimir I.
2018-04-01
After it was proposed that life on Earth might descend from seeding by an earlier extraterrestrial civilization motivated to secure and spread life, some authors noted that this alternative offers a testable implication: microbial seeds could be intentionally supplied with a durable signature that might be found in extant organisms. In particular, it was suggested that the optimal location for such an artefact is the genetic code, as the least evolving part of cells. However, as the mainstream view goes, this scenario is too speculative and cannot be meaningfully tested because encoding/decoding a signature within the genetic code is something ill-defined, so any retrieval attempt is doomed to guesswork. Here we refresh the seeded-Earth hypothesis in light of recent observations, and discuss the motivation for inserting a signature. We then show that 'biological SETI' involves even weaker assumptions than traditional SETI and admits a well-defined methodological framework. After assessing the possibility in terms of molecular and evolutionary biology, we formalize the approach and, adopting the standard guideline of SETI that encoding/decoding should follow from first principles and be convention-free, develop a universal retrieval strategy. Applied to the canonical genetic code, it reveals a non-trivial precision structure of interlocked logical and numerical attributes of systematic character (previously we found these heuristically). To assess this result in view of the initial assumption, we perform statistical, comparison, interdependence and semiotic analyses. Statistical analysis reveals no causal connection of the result to evolutionary models of the genetic code, interdependence analysis precludes overinterpretation, and comparison analysis shows that known variations of the code lack any precision-logic structures, in agreement with these variations being post-LUCA (i.e. post-seeding) evolutionary deviations from the canonical code.
Finally, semiotic analysis shows that not only the found attributes are consistent with the initial assumption, but that they make perfect sense from SETI perspective, as they ultimately maintain some of the most universal codes of culture.
Schiff, Elad; Ben-Arye, Eran; Shilo, Margalit; Levy, Moti; Schachter, Leora; Weitchner, Na'ama; Golan, Ofra; Stone, Julie
2011-02-01
Recently, ethical guidelines regarding safe touch in CAM were developed in Israel. Publishing ethical codes does not imply that they will actually help practitioners to meet ethical care standards. The effectiveness of ethical rules depends on familiarity with the code and its content. In addition, critical self-examination of the code by individual members of the profession is required to reflect on the moral commitments encompassed in the code. For the purpose of dynamic self-appraisal, we devised a survey to assess how CAM practitioners view the suggested ethical guidelines for safe touch. We surveyed 781 CAM practitioners regarding their perspectives on the safe-touch code. There was a high level of agreement with general statements regarding ethics pertaining to safe touch with a mean rate of agreement of 4.61 out of a maximum of 5. Practitioners concurred substantially with practice guidelines for appropriate touch with a mean rate of agreement of 4.16 out of a maximum of 5. Attitudes toward the necessity to touch intimate areas for treatment purposes varied with 78.6% of respondents strongly disagreeing with any notion of need to touch intimate areas during treatment. 7.9% neither disagreed nor agreed, 7.9% slightly agreed, and 7.6% strongly agreed with the need for touching intimate areas during treatment. There was a direct correlation between disagreement with touching intimate areas for therapeutic purposes and agreement with general statements regarding ethics of safe touch (Spearman r=0.177, p<0.0001), and practice guidelines for appropriate touch (r=0.092, p=0.012). A substantial number of practitioners agreed with the code, although some findings regarding the need to touch intimate area during treatments were disturbing. Our findings can serve as a basis for ethical code development and implementation, as well as for educating CAM practitioners on the ethics of touch. Copyright © 2010 Elsevier Ltd. All rights reserved.
Jayasinghe, Sanjay; Macartney, Kristine
2013-01-30
Hospital discharge records and laboratory data have shown a substantial early impact from the rotavirus vaccination program that commenced in 2007 in Australia. However, these assessments are affected by the validity and reliability of hospital discharge coding and stool testing to measure the true incidence of hospitalised disease. The aim of this study was to assess the validity of these data sources for disease estimation, both before and after vaccine introduction. All hospitalisations at a major paediatric centre in children aged <5 years from 2000 to 2009 containing acute gastroenteritis (AGE) ICD-10-AM diagnosis codes were linked to hospital laboratory stool testing data. The validity of the rotavirus-specific diagnosis code (A08.0) and the incidence of hospitalisations attributable to rotavirus by both direct estimation and with adjustments for non-testing and miscoding were calculated for pre- and post-vaccination periods. A laboratory record of stool testing was available for 36% of all AGE hospitalisations (n=4948); the rotavirus code had high specificity (98.4%; 95% CI, 97.5-99.1%) and positive predictive value (96.8%; 94.8-98.3%), and modest sensitivity (61.6%; 58-65.1%). Of all rotavirus test-positive hospitalisations, only a third had a rotavirus code. The estimated annual average number of rotavirus hospitalisations, following adjustment for non-testing and miscoding, was 5- and 6-fold higher than identified, respectively, from testing and coding alone. Direct and adjusted estimates yielded similar percentage reductions in annual average rotavirus hospitalisations of over 65%. Due to the limited use of stool testing and the poor sensitivity of the rotavirus-specific diagnosis code, routine hospital discharge and laboratory data substantially underestimate the true incidence of rotavirus hospitalisations and absolute vaccine impact.
However, these data can still be used to monitor vaccine impact, as the effects of miscoding and under-testing appear to be comparable between pre- and post-vaccination periods. Copyright © 2012 Elsevier Ltd. All rights reserved.
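The validity metrics used in this study follow directly from a 2×2 validation table of diagnosis code vs. laboratory result. A minimal sketch, with hypothetical counts chosen so the results roughly mirror the values reported for the rotavirus code (not the study's actual data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value
    from a 2x2 validation table (code vs. laboratory reference)."""
    return {
        "sensitivity": tp / (tp + fn),   # coded among test-positives
        "specificity": tn / (tn + fp),   # uncoded among test-negatives
        "ppv": tp / (tp + fp),           # test-positive among coded
    }

# Hypothetical counts, illustrating a highly specific but only
# modestly sensitive diagnosis code:
metrics = diagnostic_accuracy(tp=480, fp=16, fn=300, tn=985)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

The pattern shown, high specificity and PPV with modest sensitivity, is exactly what makes coding alone undercount true cases.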
Code System for Performance Assessment Ground-water Analysis for Low-level Nuclear Waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozak, Matthew W.
1994-02-09
Version 00 The PAGAN code system is a part of the performance assessment methodology developed for use by the U. S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1 has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which provide semi-analytical solutions of the convective-dispersion equation. The system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
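For reference, the convective-dispersion (advection-dispersion) equation that such semi-analytical solutions address has, in generic one-dimensional form with linear retardation and first-order radioactive decay, the standard statement below; this is a textbook form, and PAGAN's exact formulation may differ:

```latex
R \frac{\partial C}{\partial t}
  = D \frac{\partial^{2} C}{\partial x^{2}}
  - v \frac{\partial C}{\partial x}
  - \lambda R C
```

where C is the radionuclide concentration, v the pore-water velocity, D the dispersion coefficient, R the retardation factor, and λ the radioactive decay constant.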
Seng, Elizabeth K; Lovejoy, Travis I
2013-12-01
This study psychometrically evaluates the Motivational Interviewing Treatment Integrity Code (MITI) to assess fidelity to motivational interviewing to reduce sexual risk behaviors in people living with HIV/AIDS. 74 sessions from a pilot randomized controlled trial of motivational interviewing to reduce sexual risk behaviors in people living with HIV were coded with the MITI. Participants reported sexual behavior at baseline, 3 months, and 6 months. Regarding reliability, excellent inter-rater reliability was achieved for measures of behavior frequency across the 12 sessions coded by both coders; global scales demonstrated poor intraclass correlations, but adequate percent agreement. Regarding validity, principal components analyses indicated that a two-factor model accounted for an adequate amount of variance in the data. These factors were associated with decreases in sexual risk behaviors after treatment. The MITI is a reliable and valid measurement of treatment fidelity for motivational interviewing targeting sexual risk behaviors in people living with HIV/AIDS.
Effect of normal aging and of Alzheimer's disease on episodic memory.
Le Moal, S; Reymann, J M; Thomas, V; Cattenoz, C; Lieury, A; Allain, H
1997-01-01
Performances of 12 patients with Alzheimer's disease (AD), 15 healthy elderly subjects and 20 young healthy volunteers were compared on two episodic memory tests. The first, a learning test of semantically related words, enabled an assessment of the effect of semantic relationships on word learning by controlling the encoding and retrieval processes. The second, a dual coding test, assessed the automatic processes operating during the encoding of drawings. The results demonstrated quantitative and qualitative differences between the populations. Manifestations of episodic memory deficit in AD patients were shown not only by lower performance scores than in elderly controls, but also by the lack of any effect of semantic cues and the production of a large number of extra-list intrusions. Automatic processes underlying dual coding appear to be spared in AD, although more time is needed to process information than in young or elderly subjects. These findings confirm former data and emphasize the preservation of certain memory processes (dual coding) in AD which could be used in future therapeutic approaches.
Module-oriented modeling of reactive transport with HYTEC
NASA Astrophysics Data System (ADS)
van der Lee, Jan; De Windt, Laurent; Lagneau, Vincent; Goblet, Patrick
2003-04-01
The paper introduces HYTEC, a coupled reactive transport code currently used for groundwater pollution studies, safety assessment of nuclear waste disposals, geochemical studies and interpretation of laboratory column experiments. Based on a known permeability field, HYTEC evaluates the groundwater flow paths, and simulates the migration of mobile matter (ions, organics, colloids) subject to geochemical reactions. The code forms part of a module-oriented structure which facilitates maintenance and improves coding flexibility. In particular, using the geochemical module CHESS as a common denominator for several reactive transport models significantly facilitates the development of new geochemical features which become automatically available to all models. A first example shows how the model can be used to assess migration of uranium from a sub-surface source under the effect of an oxidation front. The model also accounts for alteration of hydrodynamic parameters (local porosity, permeability) due to precipitation and dissolution of mineral phases, which potentially modifies the migration properties in general. The second example illustrates this feature.
NASA Technical Reports Server (NTRS)
Treiber, David A.; Muilenburg, Dennis A.
1995-01-01
The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C(sub Lmax). Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
Non-contact assessment of melanin distribution via multispectral temporal illumination coding
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone, and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution from its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of individuals' skin type. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
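For illustration, the Beer-Lambert step can be sketched as a toy two-band melanin index: absorbance A = -ln(R) is fitted against assumed per-band extinction coefficients. The band names, extinction values, and reflectance readings below are hypothetical placeholders, not the paper's calibration.

```python
import math

# Hypothetical relative extinction coefficients for melanin at two
# illumination bands (melanin absorbs more strongly at shorter
# wavelengths); NOT the paper's calibration values.
EPS = {"green_520nm": 1.6, "red_660nm": 0.8}

def melanin_index(reflectance):
    """Relative melanin density from per-band diffuse reflectance via
    Beer-Lambert: absorbance A = -ln(R) ~ extinction * density, solved
    as a one-parameter least-squares fit across the bands."""
    num = sum(EPS[band] * -math.log(r) for band, r in reflectance.items())
    den = sum(EPS[band] ** 2 for band in reflectance)
    return num / den

# Invented reflectance readings: the darker patch absorbs more.
light_patch = {"green_520nm": 0.60, "red_660nm": 0.78}
dark_patch = {"green_520nm": 0.30, "red_660nm": 0.55}
```

Applied per pixel, this kind of fit yields the two-dimensional melanin distribution map the abstract describes.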
Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed
2018-01-25
The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Group (DRG) codes, especially if the hospital is using a Casemix System as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who corrected any erroneous codes originally entered by the hospital coders. The pre- and post-coding results were compared, and where they disagreed, the ISC's codes were considered the accurate ones. The cases were then re-grouped using the MY-DRG® grouper to assess and compare the changes in DRG assignment and hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the highest, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases. Of these, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of MY-DRG® codes was RM654,303.91. The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.
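The headline figure is a straightforward sum over re-grouped cases. The sketch below assumes one plausible reading of the abstract (a coding error counts as lost income when the corrected DRG carries the higher tariff); the tariff values are invented for illustration.

```python
def potential_income_loss(cases):
    """Total under-billed income over (tariff_as_coded,
    tariff_after_correction) pairs: a case contributes only when the
    corrected DRG carries the higher tariff."""
    return sum(corrected - coded for coded, corrected in cases if corrected > coded)

# Invented tariffs (RM) for four re-coded cases; two were under-billed.
cases = [(3900.0, 5200.0), (2100.0, 2100.0), (980.0, 1500.0), (1200.0, 800.0)]
```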
Reiche, Kristin; Kasack, Katharina; Schreiber, Stephan; Lüders, Torben; Due, Eldri U.; Naume, Bjørn; Riis, Margit; Kristensen, Vessela N.; Horn, Friedemann; Børresen-Dale, Anne-Lise; Hackermüller, Jörg; Baumbusch, Lars O.
2014-01-01
Breast cancer, the second leading cause of cancer death in women, is a highly heterogeneous disease, characterized by distinct genomic and transcriptomic profiles. Transcriptome analyses have prevalently assessed protein-coding genes; however, the majority of the mammalian genome is expressed in numerous non-coding transcripts. Emerging evidence supports that many of these non-coding RNAs are specifically expressed during development, tumorigenesis, and metastasis. The focus of this study was to investigate the expression features and molecular characteristics of long non-coding RNAs (lncRNAs) in breast cancer. We investigated 26 breast tumor and 5 normal tissue samples utilizing a custom expression microarray containing probes for mRNAs as well as novel and previously identified lncRNAs. We identified more than 19,000 unique regions significantly differentially expressed between normal and breast tumor tissue; half of these regions were non-coding, without any evidence for functional open reading frames or sequence similarity to known proteins. The identified non-coding regions were primarily located in introns (53%) or in the intergenic space (33%), frequently orientated in antisense direction of protein-coding genes (14%), and commonly distributed at promoter, transcription factor binding, or enhancer sites. Analyzing the most diverse mRNA breast cancer subtypes, Basal-like versus Luminal A and B, resulted in 3,025 significantly differentially expressed unique loci, including 682 (23%) for non-coding transcripts. A notable number of differentially expressed protein-coding genes displayed non-synonymous expression changes compared to their nearest differentially expressed lncRNA, including an antisense lncRNA strongly anticorrelated to the mRNA coding for histone deacetylase 3 (HDAC3), which was investigated in more detail.
Previously identified chromatin-associated lncRNAs (CARs) were predominantly downregulated in breast tumor samples, including CARs located in the protein-coding genes for CALD1, FTX, and HNRNPH1. In conclusion, a number of differentially expressed lncRNAs have been identified with relation to cancer-related protein-coding genes. PMID:25264628
Early Childhood Diarrhea Predicts Cognitive Delays in Later Childhood Independently of Malnutrition
Pinkerton, Relana; Oriá, Reinaldo B.; Lima, Aldo A. M.; Rogawski, Elizabeth T.; Oriá, Mônica O. B.; Patrick, Peter D.; Moore, Sean R.; Wiseman, Benjamin L.; Niehaus, Mark D.; Guerrant, Richard L.
2016-01-01
Understanding the complex relationship between early childhood infectious diseases, nutritional status, poverty, and cognitive development is significantly hindered by the lack of studies that adequately address confounding between these variables. This study assesses the independent contributions of early childhood diarrhea (ECD) and malnutrition to cognitive impairment in later childhood. A cohort of 131 children from a shantytown community in northeast Brazil was monitored from birth to 24 months for diarrhea and anthropometric status. Cognitive assessments including the Test of Nonverbal Intelligence (TONI), coding tasks (WISC-III), and verbal fluency (NEPSY) were completed when children were an average of 8.4 years of age (range = 5.6–12.7 years). Multivariate analysis of variance models were used to assess the individual as well as combined effects of ECD and stunting on later childhood cognitive performance. ECD, height-for-age (HAZ) at 24 months, and weight-for-age (WAZ) at 24 months were significant univariate predictors of the study's three cognitive outcomes: TONI, coding, and verbal performance (P < 0.05). Multivariate models showed that ECD remained a significant predictor, after adjusting for the effect of 24-month HAZ and WAZ, for both TONI (HAZ, P = 0.029 and WAZ, P = 0.006) and coding (HAZ, P = 0.025 and WAZ, P = 0.036) scores. WAZ and HAZ were also significant predictors after adjusting for ECD. ECD remained a significant predictor of coding (WISC-III) after household income was considered (P = 0.006). This study provides evidence that ECD and stunting may have independent effects on children's intellectual function well into later childhood. PMID:27601523
Measuring compliance with the Baby-Friendly Hospital Initiative.
Haiek, Laura N
2012-05-01
The WHO/UNICEF Baby-Friendly Hospital Initiative (BFHI) is an effective strategy to increase breast-feeding exclusivity and duration but many countries have been slow to implement it. The present paper describes the development of a computer-based instrument that measures policies and practices outlined in the BFHI. The tool uses clinical staff/managers' and pregnant women/mothers' opinions as well as maternity unit observations to assess compliance with the BFHI's Ten Steps to Successful Breastfeeding (Ten Steps) and the International Code of Marketing of Breastmilk Substitutes (Code) by measuring the extent of implementation of two to fourteen indicators for each step and the Code. Composite scores are used to summarize results. Examples of results from a 2007 assessment performed in nine hospitals in the province of Québec are presented to illustrate the type of information returned to individual hospitals and health authorities. Participants included nine to fifteen staff/managers per hospital randomly selected among those present during the interviewer-observer's 12 h hospital visit and nine to forty-five breast-feeding mothers per hospital telephoned at home after being randomly selected from birth certificates. The Ten Steps Global Compliance Score for the nine hospitals varied between 2.87 and 6.51 (range 0-10, mean 5.06) whereas the Code Global Compliance Score varied between 0.58 and 1 (range 0-1, mean 0.83). Instrument development, examples of assessment results and potential applications are discussed. A methodology to measure BFHI compliance may help support the implementation of this effective intervention and contribute to improved maternal and child health.
Effect of Obesity on Complication Rate After Elbow Arthroscopy in a Medicare Population.
Werner, Brian C; Fashandi, Ahmad H; Chhabra, A Bobby; Deal, D Nicole
2016-03-01
To use a national insurance database to explore the association of obesity with the incidence of complications after elbow arthroscopy in a Medicare population. Using Current Procedural Terminology (CPT) and International Classification of Diseases, 9th Revision (ICD-9) procedure codes, we queried the PearlDiver database for patients undergoing elbow arthroscopy. Patients were divided into obese (body mass index [BMI] >30) and nonobese (BMI <30) cohorts using ICD-9 codes for BMI and obesity. Nonobese patients were matched to obese patients based on age, sex, tobacco use, diabetes, and rheumatoid arthritis. Postoperative complications were assessed with ICD-9 and Current Procedural Terminology codes, including infection, nerve injury, stiffness, and medical complications. A total of 2,785 Medicare patients who underwent elbow arthroscopy were identified from 2005 to 2012; 628 patients (22.5%) were coded as obese or morbidly obese, and 628 matched nonobese patients formed the control group. There were no differences between the obese patients and matched control nonobese patients regarding type of elbow arthroscopy, previous elbow fracture or previous elbow arthroscopy. Obese patients had greater rates of all assessed complications, including infection (odds ratio [OR] 2.8, P = .037), nerve injury (OR 5.4, P = .001), stiffness (OR 1.9, P = .016) and medical complications (OR 6.9, P < .0001). Obesity is associated with significantly increased rates of all assessed complications after elbow arthroscopy in a Medicare population, including infection, nerve injury, stiffness, and medical complications. Therapeutic Level III, case-control study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Harford, Thomas C.; Chen, Chiung M.; Saha, Tulshi D.; Smith, Sharon M.; Hasin, Deborah S.; Grant, Bridget F.
2013-01-01
The purpose of this study was to evaluate the psychometric properties of DSM–IV symptom criteria for assessing personality disorders (PDs) in a national population and to compare variations in proposed symptom coding for social and/or occupational dysfunction. Data were obtained from a total sample of 34,653 respondents from Waves 1 and 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). For each personality disorder, confirmatory factor analysis (CFA) established a 1-factor latent factor structure for the respective symptom criteria. A 2-parameter item response theory (IRT) model was applied to the symptom criteria for each PD to assess the probabilities of symptom item endorsements across different values of the underlying trait (latent factor). Findings were compared with a separate IRT model using an alternative coding of symptom criteria that requires distress/impairment to be related to each criterion. The CFAs yielded a good fit for a single underlying latent dimension for each PD. Findings from the IRT indicated that DSM–IV PD symptom criteria are clustered in the moderate to severe range of the underlying latent dimension for each PD and are peaked, indicating high measurement precision only within a narrow range of the underlying trait and lower measurement precision at lower and higher levels of severity. Compared with the NESARC symptom coding, the IRT results for the alternative symptom coding are shifted toward the more severe range of the latent trait but generally have lower measurement precision for each PD. The IRT findings provide support for a reliable assessment of each PD for both NESARC and alternative coding for distress/impairment. The use of symptom dysfunction for each criterion, however, raises a number of issues and implications for the DSM-5 revision currently proposed for Axis II disorders (American Psychiatric Association, 2010). PMID:22449066
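The 2-parameter IRT model applied above has a compact closed form worth making explicit: endorsement probability follows a logistic curve in the latent trait, and item information peaks at the item's severity, which is why criteria clustered in the moderate-to-severe range measure precisely only there. A minimal sketch, with discrimination a and severity b chosen arbitrarily:

```python
from math import exp

def p_endorse(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of endorsing
    a symptom criterion at latent trait level theta, given item
    discrimination a and severity (difficulty) b."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information a^2 * P * (1 - P): peaked at theta = b, so an
    item with high b discriminates precisely only in the severe range
    and poorly at milder trait levels."""
    p = p_endorse(theta, a, b)
    return a * a * p * (1.0 - p)
```

At theta = b the endorsement probability is exactly 0.5 and the information is maximal (a²/4), dropping off symmetrically on either side.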
PFLOTRAN-RepoTREND Source Term Comparison Summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Jennifer M.
Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.
Effective Cyber Situation Awareness (CSA) Assessment and Training
2013-11-01
activity/scenario. y. Save Wireshark captures. z. Save SNORT logs. aa. Save MySQL databases. 4. After the completion of the scenario, the reversion... line or from custom Java code. • Cisco ASA Parser: Builds normalized vendor-neutral firewall rule specifications from Cisco ASA and PIX firewall... The Service tool lets analysts build Cauldron models from either the command line or from custom Java code. Functionally, it corresponds to the
Elastic-plastic analysis of annular plate problems using NASTRAN
NASA Technical Reports Server (NTRS)
Chen, P. C. T.
1983-01-01
The plate elements of the NASTRAN code are used to analyze two annular plate problems loaded beyond the elastic limit. The first problem is an elastic-plastic annular plate loaded externally by two concentrated forces. In the second, the plate is stressed radially by uniform internal pressure, for which an exact analytical solution is available. A comparison of the two approaches together with an assessment of the NASTRAN code is given.
Validation and Intercomparison Studies Within GODAE
2009-09-01
During the Global Ocean Data Assimilation Experiment (GODAE), seven international... global-ocean and basin-scale forecasting systems of different countries in routine interaction and continuous operation, (2) to assess the quality and...
Simultaneous Semi-Distributed Model Calibration Guided by ...
Modelling approaches to transfer hydrologically-relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit, and these units average 60 km² in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?". We should be able to apply the same parameterizations to assessment units with common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la
Assessment of the Effects of Entrainment and Wind Shear on Nuclear Cloud Rise Modeling
NASA Astrophysics Data System (ADS)
Zalewski, Daniel; Jodoin, Vincent
2001-04-01
Accurate modeling of nuclear cloud rise is critical in hazard prediction following a nuclear detonation. This thesis recommends improvements to the model currently used by DOD. It considers a single-term versus a three-term entrainment equation, the values of the entrainment and eddy viscous drag parameters, and the effect of wind shear on the cloud rise following a nuclear detonation. It examines departures of the current code used in the Hazard Prediction and Assessment Capability (HPAC) version 3.2 from the 1979 version of the Department of Defense Land Fallout Interpretive Code (DELFIC). The recommendation for a single-term entrainment equation, with constant parameter values, without wind shear corrections, and without cloud oscillations is based on both a statistical analysis using 67 U.S. nuclear atmospheric test shots and the physical representation of the modeling. The statistical analysis optimized the parameter values of interest for four cases: the three-term entrainment equation with and without wind shear, and the single-term entrainment equation with and without wind shear. The thesis then examines the effect of cloud oscillations as a significant departure in the code. Modifications to user input atmospheric tables are identified as a potential problem in the calculation of stabilized cloud dimensions in HPAC.
A method for assessing fidelity of delivery of telephone behavioral support for smoking cessation.
Lorencatto, Fabiana; West, Robert; Bruguera, Carla; Michie, Susan
2014-06-01
Behavioral support for smoking cessation is delivered through different modalities, often guided by treatment manuals. Recently developed methods for assessing fidelity of delivery have shown that face-to-face behavioral support is often not delivered as specified in the service treatment manual. This study aimed to extend this method to evaluate fidelity of telephone-delivered behavioral support. A treatment manual and transcripts of 75 audio-recorded behavioral support sessions were obtained from the United Kingdom's national Quitline service and coded into component behavior change techniques (BCTs) using a taxonomy of 45 smoking cessation BCTs. Interrater reliability was assessed using percentage agreement. Fidelity was assessed by comparing the number of BCTs identified in the manual with those delivered in telephone sessions by 4 counselors. Fidelity was assessed according to session type, duration, counselor, and BCT. Differences between self-reported and actual BCT use were examined. Average coding reliability was high (81%). On average, 41.8% of manual-specified BCTs were delivered per session (SD = 16.2), with fidelity varying by counselor from 32% to 49%. Fidelity was highest in pre-quit sessions (46%) and for BCT "give options for additional support" (95%). Fidelity was lowest for quit-day sessions (35%) and BCT "set graded tasks" (0%). Session duration was positively correlated with fidelity (r = .585; p < .01). Significantly fewer BCTs were used than were reported as being used, t(15) = -5.52, p < .001. The content of telephone-delivered behavioral support can be reliably coded in terms of BCTs. This can be used to assess fidelity to treatment manuals and to in turn identify training needs. The observed low fidelity underlines the need to establish routine procedures for monitoring delivery of behavioral support. PsycINFO Database Record (c) 2014 APA, all rights reserved.
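The per-session fidelity score can be sketched as a simple set comparison, assuming fidelity is the percentage of manual-specified BCTs observed in a session (our reading of the method); the BCT identifiers below are hypothetical.

```python
def session_fidelity(manual_bcts, delivered_bcts):
    """Percentage of manual-specified behavior change techniques (BCTs)
    that were actually delivered in a session; techniques delivered but
    not in the manual do not raise the score."""
    manual = set(manual_bcts)
    return 100.0 * len(manual & set(delivered_bcts)) / len(manual)

# Hypothetical BCT codes: 3 specified in the manual, 1 delivered.
fidelity = session_fidelity(["1.1", "1.2", "4.1"], ["1.1", "9.9"])
```

Averaging such scores across sessions, counselors, or session types yields the kind of breakdown the study reports (e.g., 41.8% per session on average).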
ERIC Educational Resources Information Center
Nicholas, Mark C.
2011-01-01
Empirical research on how faculty across disciplines conceptualize or assess CT is scarce. This investigation focused on a group of 14 faculty drawn from multiple disciplines in the humanities and natural sciences. Using in-depth interviews, focus group discussions, assessment artifacts and qualitative coding strategies, this study examined how…
Wind Resource Assessment | Wind | NREL
A map of the United States is color-coded to indicate the high winds at 80 meters. This map shows the wind resource at 80 meters for both land-based and offshore wind resources in the United States. Correct estimation of the energy available in the wind can
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... for restoring injured natural resources and compensating recreational losses resulting from the Cosco... under OPA, will pay damages to compensate the public for the injuries to natural resources and lost... accordance with the OPA, the Natural Resource Damage Assessment regulations found in the Code of Federal...
Ferlaino, Michael; Rogers, Mark F.; Shihab, Hashem A.; Mort, Matthew; Cooper, David N.; Gaunt, Tom R.; Campbell, Colin
2018-01-01
Background: Small insertions and deletions (indels) have a significant influence in human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. Results: We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models in assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. Conclusions: FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome. PMID:28985712
MPAS-Ocean NESAP Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersen, Mark Roger; Arndt, William; Keen, Noel
NESAP performance improvements on MPAS-Ocean have resulted in a 5% to 7% speed-up on each of the examined systems, including Cori-KNL, Cori-Haswell, and Edison. These tests were configured to emulate a production workload by using 128 nodes and a high-resolution ocean domain. Overall, the gap between standard and many-core architecture performance has been narrowed, but Cori-KNL remains considerably under-performing relative to Edison. NESAP code alterations affected 600 lines of code, and most of these improvements will benefit other MPAS codes (sea ice, land ice) that are also components within ACME. Modifications are fully tested within MPAS. Testing in ACME across many platforms is underway, and must be completed before the code is merged. In addition, a ten-year production ACME global simulation was conducted on Cori-KNL in late 2016 with the pre-NESAP code in order to test readiness and configurations for scientific studies. Next steps include assessing performance across a range of nodes, threads per node, and ocean resolutions on Cori-KNL.
THE CODE OF THE STREET AND INMATE VIOLENCE: INVESTIGATING THE SALIENCE OF IMPORTED BELIEF SYSTEMS*
MEARS, DANIEL P.; STEWART, ERIC A.; SIENNICK, SONJA E.; SIMONS, RONALD L.
2013-01-01
Scholars have long argued that inmate behaviors stem in part from cultural belief systems that they “import” with them into incarcerative settings. Even so, few empirical assessments have tested this argument directly. Drawing on theoretical accounts of one such set of beliefs—the code of the street—and on importation theory, we hypothesize that individuals who adhere more strongly to the street code will be more likely, once incarcerated, to engage in violent behavior and that this effect will be amplified by such incarceration experiences as disciplinary sanctions and gang involvement, as well as the lack of educational programming, religious programming, and family support. We test these hypotheses using unique data that include measures of the street code belief system and incarceration experiences. The results support the argument that the code of the street belief system affects inmate violence and that the effect is more pronounced among inmates who lack family support, experience disciplinary sanctions, and are gang involved. Implications of these findings are discussed. PMID:24068837
Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin
2017-10-06
Small insertions and deletions (indels) have a significant influence in human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state of the art models in assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santucci, P.; Guetat, P.
1993-12-31
This document describes the code CERISE (Code d'Evaluations Radiologiques Individuelles pour des Situations en Enterprise et dans l'Environnement). This code has been developed in the frame of European studies to establish acceptance criteria for very low-level radioactive waste and materials. The code is written in Fortran and runs on PC. It calculates doses received by the different pathways: external exposure, ingestion, inhalation, and skin contamination. Twenty basic scenarios, determined from previous studies, have already been elaborated. Calculations establish the relation between surface, specific and/or total activities, and doses. Results can be expressed as doses for an average activity unit, or as average activity limits for a set of reference doses (defined for each scenario analyzed). In the latter case, the minimal activity values and the corresponding limiting scenarios are selected and summarized in a final table.
Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D
2018-04-30
The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.
Evaluation of Agency Non-Code Layered Pressure Vessels (LPVs)
NASA Technical Reports Server (NTRS)
Prosser, William H.
2014-01-01
In coordination with the Office of Safety and Mission Assurance and the respective Center Pressure System Managers (PSMs), the NASA Engineering and Safety Center (NESC) was requested to formulate a consensus draft proposal for the development of additional testing and analysis methods to establish the technical validity, and any limitation thereof, for the continued safe operation of facility non-code layered pressure vessels. The PSMs from each NASA Center were asked to participate as part of the assessment team by providing, collecting, and reviewing data regarding current operations of these vessels. This report contains the outcome of the assessment and the findings, observations, and NESC recommendations to the Agency and individual NASA Centers.
Evaluation of Agency Non-Code Layered Pressure Vessels (LPVs). Corrected Copy, Aug. 25, 2014
NASA Technical Reports Server (NTRS)
Prosser, William H.
2014-01-01
In coordination with the Office of Safety and Mission Assurance and the respective Center Pressure System Managers (PSMs), the NASA Engineering and Safety Center (NESC) was requested to formulate a consensus draft proposal for the development of additional testing and analysis methods to establish the technical validity, and any limitation thereof, for the continued safe operation of facility non-code layered pressure vessels. The PSMs from each NASA Center were asked to participate as part of the assessment team by providing, collecting, and reviewing data regarding current operations of these vessels. This report contains the outcome of the assessment and the findings, observations, and NESC recommendations to the Agency and individual NASA Centers.
Current and anticipated uses of the thermal hydraulics codes at the NRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The focus of thermal-hydraulic computer code usage in nuclear regulatory organizations has undergone a considerable shift since the codes were originally conceived. Less work is being done in the area of "Design Basis Accidents," and much more emphasis is being placed on analysis of operational events, probabilistic risk/safety assessment, and maintenance practices. All of these areas need support from thermal-hydraulic computer codes to model the behavior of plant fluid systems, and they all need the ability to perform large numbers of analyses quickly. It is therefore important for the T/H codes of the future to be able to support these needs, by providing robust, easy-to-use tools that produce easy-to-understand results for a wider community of nuclear professionals. These tools need to take advantage of the great advances that have occurred recently in computer software, by providing users with graphical user interfaces for both input and output. In addition, reduced costs of computer memory and other hardware have removed the need for excessively complex data structures and numerical schemes, which make the codes more difficult and expensive to modify, maintain, and debug, and which increase problem run-times. Future versions of the T/H codes should also be structured in a modular fashion, to allow for the easy incorporation of new correlations, models, or features, and to simplify maintenance and testing. Finally, it is important that future T/H code developers work closely with the code user community, to ensure that the codes meet the needs of those users.
Mohajjel-Aghdam, Alireza; Hassankhani, Hadi; Zamanzadeh, Vahid; Khameneh, Saied; Moghaddam, Sara
2013-09-01
The nursing profession requires knowledge of ethics to guide performance. The nature of this profession necessitates ethical care more than routine care. Today, professional ethic codes have been defined worldwide based on human and ethical issues in the communication between nurse and patient. To improve all dimensions of nursing, we need to respect ethic codes. The aim of this study was to assess knowledge and performance regarding nursing ethic codes from nurses' and patients' perspectives. A descriptive study was conducted on 345 nurses and 500 inpatients in six teaching hospitals of Tabriz in 2012. To investigate nurses' knowledge and performance, data were collected using structured questionnaires. Statistical analysis was done using descriptive and analytic statistics (independent t-test, ANOVA, and Pearson correlation coefficient) in SPSS 13. Most of the nurses were female, married, and educated to BS degree; 86.4% of them were aware of the ethic codes, and 91.9% of nurses but only 41.8% of patients reported that nurses respect the ethic codes. Nurses' and patients' perspectives on the ethic codes differed significantly. A significant relationship was found between nurses' knowledge of ethic codes and both job satisfaction and complaints about ethical performance. Based on the results, teaching ethic codes in the nursing curriculum and continuous education for staff are proposed; in addition, recognizing failures of the health system, optimizing nursing care, informing patients about nursing ethic codes, promoting patient rights, and achieving patient satisfaction can minimize the differences between the two perspectives.
Mohajjel-Aghdam, Alireza; Hassankhani, Hadi; Zamanzadeh, Vahid; Khameneh, Saied; Moghaddam, Sara
2013-01-01
Introduction: The nursing profession requires knowledge of ethics to guide performance. The nature of this profession necessitates ethical care more than routine care. Today, professional ethic codes have been defined worldwide based on human and ethical issues in the communication between nurse and patient. To improve all dimensions of nursing, we need to respect ethic codes. The aim of this study was to assess knowledge and performance regarding nursing ethic codes from nurses' and patients' perspectives. Methods: A descriptive study was conducted on 345 nurses and 500 inpatients in six teaching hospitals of Tabriz in 2012. To investigate nurses' knowledge and performance, data were collected using structured questionnaires. Statistical analysis was done using descriptive and analytic statistics (independent t-test, ANOVA, and Pearson correlation coefficient) in SPSS 13. Results: Most of the nurses were female, married, and educated to BS degree; 86.4% of them were aware of the ethic codes, and 91.9% of nurses but only 41.8% of patients reported that nurses respect the ethic codes. Nurses' and patients' perspectives on the ethic codes differed significantly. A significant relationship was found between nurses' knowledge of ethic codes and both job satisfaction and complaints about ethical performance. Conclusion: Based on the results, teaching ethic codes in the nursing curriculum and continuous education for staff are proposed; in addition, recognizing failures of the health system, optimizing nursing care, informing patients about nursing ethic codes, promoting patient rights, and achieving patient satisfaction can minimize the differences between the two perspectives. PMID:25276730
The use of the SRIM code for calculation of radiation damage induced by neutrons
NASA Astrophysics Data System (ADS)
Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi
2017-12-01
Materials subjected to neutron irradiation undergo structural changes driven by the displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing this PKA information. The software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) and also the kinematics of interactions. A deterministic method was used to verify the results of MCNPX+AMTRACK. The SRIM (formerly TRIM) code is capable of computing neutron radiation damage. The PKA information extracted by the AMTRACK program can be used as input to the SRIM code for systematic analysis of primary radiation damage. Radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.
Emergency medicine summary code for reporting CT scan results: implementation and survey results.
Lam, Joanne; Coughlin, Ryan; Buhl, Luce; Herbst, Meghan; Herbst, Timothy; Martillotti, Jared; Coughlin, Bret
2018-06-01
The purpose of the study was to assess the emergency department (ED) providers' interest in and satisfaction with ED CT result reporting before and after the implementation of a standardized summary code for all CT scan reporting. A summary code was provided at the end of all CTs ordered through the ED from August to October of 2016. A retrospective review was completed on all studies performed during this period. A pre- and post-survey was given to both ED and radiology providers. A total of 3980 CT scans, excluding CTAs, were ordered, with 2240 CTs dedicated to the head and neck, 1685 CTs dedicated to the torso, and 55 CTs dedicated to the extremities. Approximately 74% of the CT scans were contrast enhanced. Of the 3980 ED CT examinations ordered, 69% had a summary code assigned. Fifteen percent of the coded CTs had a critical or diagnostic positive result. The introduction of an ED CT summary code did not show a definitive improvement in communication. However, the ED providers are in consensus that radiology reports are crucial to their patients' management. Providers with less than 5 years of experience were slightly more satisfied with the ED CT codes than more seasoned providers. The implementation of a user-friendly summary code may allow better analysis of results, practice improvement, and quality measurements in the future.
NASA Technical Reports Server (NTRS)
Hwang, D. P.; Boldman, D. R.; Hughes, C. E.
1994-01-01
An axisymmetric panel code and a three dimensional Navier-Stokes code (used as an inviscid Euler code) were verified for low speed, high angle of attack flow conditions. A three dimensional Navier-Stokes code (used as an inviscid code), and an axisymmetric Navier-Stokes code (used as both viscous and inviscid code) were also assessed for high Mach number cruise conditions. The boundary layer calculations were made by using the results from the panel code or Euler calculation. The panel method can predict the internal surface pressure distributions very well if no shock exists. However, only Euler and Navier-Stokes calculations can provide a good prediction of the surface static pressure distribution including the pressure rise across the shock. Because of the high CPU time required for a three dimensional Navier-Stokes calculation, only the axisymmetric Navier-Stokes calculation was considered at cruise conditions. The use of suction and tangential blowing boundary layer control to eliminate the flow separation on the internal surface was demonstrated for low free stream Mach number and high angle of attack cases. The calculation also shows that transition from laminar flow to turbulent flow on the external cowl surface can be delayed by using suction boundary layer control at cruise flow conditions. The results were compared with experimental data where possible.
The accuracy of burn diagnosis codes in health administrative data: A validation study.
Mason, Stephanie A; Nathens, Avery B; Byrne, James P; Fowler, Rob; Gonzalez, Alejandro; Karanicolas, Paul J; Moineddin, Rahim; Jeschke, Marc G
2017-03-01
Health administrative databases may provide rich sources of data for the study of outcomes following burn. We aimed to determine the accuracy of International Classification of Diseases diagnoses codes for burn in a population-based administrative database. Data from a regional burn center's clinical registry of patients admitted between 2006-2013 were linked to administrative databases. Burn total body surface area (TBSA), depth, mechanism, and inhalation injury were compared between the registry and administrative records. The sensitivity, specificity, and positive and negative predictive values were determined, and coding agreement was assessed with the kappa statistic. 1215 burn center patients were linked to administrative records. TBSA codes were highly sensitive and specific for ≥10 and ≥20% TBSA (89/93% sensitive and 95/97% specific), with excellent agreement (κ, 0.85/κ, 0.88). Codes were weakly sensitive (68%) in identifying ≥10% TBSA full-thickness burn, though highly specific (86%) with moderate agreement (κ, 0.46). Codes for inhalation injury had limited sensitivity (43%) but high specificity (99%) with moderate agreement (κ, 0.54). Burn mechanism had excellent coding agreement (κ, 0.84). Administrative data diagnosis codes accurately identify burn by burn size and mechanism, while identification of inhalation injury or full-thickness burns is less sensitive but highly specific. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
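The agreement between registry and administrative coding above is summarized with the kappa statistic. As a minimal sketch of how Cohen's kappa is computed from a 2x2 agreement table (the counts below are made up for illustration, since the abstract reports only the resulting kappa values, not the underlying tables):

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table laid out as
    [[both_yes, registry_yes_admin_no], [registry_no_admin_yes, both_no]]."""
    n = sum(sum(row) for row in table)
    # observed agreement: both sources say yes, or both say no
    po = (table[0][0] + table[1][1]) / n
    # expected agreement under chance, from the marginal totals
    registry_yes = table[0][0] + table[0][1]
    admin_yes = table[0][0] + table[1][0]
    registry_no = table[1][0] + table[1][1]
    admin_no = table[0][1] + table[1][1]
    pe = (registry_yes * admin_yes + registry_no * admin_no) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical counts only, for illustration:
example = [[90, 10], [5, 1110]]
print(round(cohens_kappa(example), 2))  # 0.92
```

Values above roughly 0.8 are conventionally read as excellent agreement, which is the interpretation the abstract applies to the TBSA and mechanism codes.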
Validation of ICD-9 Codes for Stable Miscarriage in the Emergency Department.
Quinley, Kelly E; Falck, Ailsa; Kallan, Michael J; Datner, Elizabeth M; Carr, Brendan G; Schreiber, Courtney A
2015-07-01
International Classification of Disease, Ninth Revision (ICD-9) diagnosis codes have not been validated for identifying cases of missed abortion where a pregnancy is no longer viable but the cervical os remains closed. Our goal was to assess whether ICD-9 code "632" for missed abortion has high sensitivity and positive predictive value (PPV) in identifying patients in the emergency department (ED) with cases of stable early pregnancy failure (EPF). We studied females ages 13-50 years presenting to the ED of an urban academic medical center. We approached our analysis from two perspectives, evaluating both the sensitivity and PPV of ICD-9 code "632" in identifying patients with stable EPF. All patients with chief complaints "pregnant and bleeding" or "pregnant and cramping" over a 12-month period were identified. We randomly reviewed two months of patient visits and calculated the sensitivity of ICD-9 code "632" for true cases of stable miscarriage. To establish the PPV of ICD-9 code "632" for capturing missed abortions, we identified patients whose visits from the same time period were assigned ICD-9 code "632," and identified those with actual cases of stable EPF. We reviewed 310 patient records (17.6% of 1,762 sampled). Thirteen of 31 patient records assigned ICD-9 code for missed abortion correctly identified cases of stable EPF (sensitivity=41.9%), and 140 of the 142 patients without EPF were not assigned the ICD-9 code "632" (specificity=98.6%). Of the 52 eligible patients identified by ICD-9 code "632," 39 cases met the criteria for stable EPF (PPV=75.0%). ICD-9 code "632" has low sensitivity for identifying stable EPF, but its high specificity and moderately high PPV are valuable for studying cases of stable EPF in epidemiologic studies using administrative data.
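The reported metrics follow directly from the 2x2 counts stated in the abstract (13 of 31 true cases coded, 140 of 142 non-cases uncoded, 39 of 52 coded visits confirmed). A quick sketch of the arithmetic:

```python
# Screening metrics for ICD-9 code "632", using the counts from the abstract.

def sensitivity(tp, fn):
    """True positive rate: coded visits among all true cases."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: uncoded visits among all non-cases."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value: true cases among coded visits."""
    return tp / (tp + fp)

# Chart-review sample: 13 of 31 true stable-EPF cases carried code "632";
# 140 of 142 records without EPF correctly lacked the code.
print(round(100 * sensitivity(13, 31 - 13), 1))    # 41.9
print(round(100 * specificity(140, 142 - 140), 1))  # 98.6
# Code-identified sample: 39 of 52 coded visits were true stable EPF.
print(round(100 * ppv(39, 52 - 39), 1))             # 75.0
```

Note that sensitivity and PPV come from two different samples here (a random chart review versus the set of coded visits), which is why both perspectives are needed to validate the code.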
2014-01-01
Background The genome is pervasively transcribed but most transcripts do not code for proteins, constituting non-protein-coding RNAs. Despite increasing numbers of functional reports of individual long non-coding RNAs (lncRNAs), assessing the extent of functionality among the non-coding transcriptional output of mammalian cells remains intricate. In the protein-coding world, transcripts differentially expressed in the context of processes essential for the survival of multicellular organisms have been instrumental in the discovery of functionally relevant proteins and their deregulation is frequently associated with diseases. We therefore systematically identified lncRNAs expressed differentially in response to oncologically relevant processes and cell-cycle, p53 and STAT3 pathways, using tiling arrays. Results We found that up to 80% of the pathway-triggered transcriptional responses are non-coding. Among these we identified very large macroRNAs with pathway-specific expression patterns and demonstrated that these are likely continuous transcripts. MacroRNAs contain elements conserved in mammals and sauropsids, which in part exhibit conserved RNA secondary structure. Comparing evolutionary rates of a macroRNA to adjacent protein-coding genes suggests a local action of the transcript. Finally, in different grades of astrocytoma, a tumor disease unrelated to the initially used cell lines, macroRNAs are differentially expressed. Conclusions It has been shown previously that the majority of expressed non-ribosomal transcripts are non-coding. We now conclude that differential expression triggered by signaling pathways gives rise to a similar abundance of non-coding content. It is thus unlikely that the prevalence of non-coding transcripts in the cell is a trivial consequence of leaky or random transcription events. PMID:24594072
76 FR 78814 - National Voluntary Laboratory Accreditation Program; Operating Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... requirements for accreditation bodies accrediting conformity assessment bodies. The change will allow NVLAP... the human environment. Therefore, an environmental assessment or Environmental Impact Statement is not..., Laboratories, Measurement standards, Testing. For the reasons set forth in the preamble, title 15 of the Code...
Seismic assessment of Technical Area V (TA-V).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medrano, Carlos S.
The Technical Area V (TA-V) Seismic Assessment Report was commissioned as part of Sandia National Laboratories (SNL) Self Assessment Requirement per DOE O 414.1, Quality Assurance, for seismic impact on existing facilities at Technical Area-V (TA-V). SNL TA-V facilities are located on an existing Uniform Building Code (UBC) Seismic Zone IIB Site within the physical boundary of the Kirtland Air Force Base (KAFB). The document delineates a summary of the existing facilities with their safety-significant structures, systems and components, identifies DOE Guidance, conceptual framework, past assessments and the present geological and seismic conditions. Building upon the past information and the evolution of the new seismic design criteria, the document discusses the potential impact of the new standards and provides recommendations based upon the current International Building Code (IBC) per DOE O 420.1B, Facility Safety and DOE G 420.1-2, Guide for the Mitigation of Natural Phenomena Hazards for DOE Nuclear Facilities and Non-Nuclear Facilities.
Assessment of the Draft AIAA S-119 Flight Dynamic Model Exchange Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Murri, Daniel G.; Hill, Melissa A.; Jessick, Matthew V.; Penn, John M.; Hasan, David A.; Crues, Edwin Z.; Falck, Robert D.; McCarthy, Thomas G.; Vuong, Nghia;
2011-01-01
An assessment of a draft AIAA standard for flight dynamics model exchange, ANSI/AIAA S-119-2011, was conducted on behalf of NASA by a team from the NASA Engineering and Safety Center. The assessment included adding the capability of importing standard models into real-time simulation facilities at several NASA Centers as well as into analysis simulation tools. All participants were successful at importing two example models into their respective simulation frameworks by using existing software libraries or by writing new import tools. Deficiencies in the libraries and format documentation were identified and fixed; suggestions for improvements to the standard were provided to the AIAA. An innovative tool to generate C code directly from such a model was developed. Performance of the software libraries compared favorably with compiled code. As a result of this assessment, several NASA Centers can now import standard models directly into their simulations. NASA is considering adopting the now-published S-119 standard as an internal recommended practice.
Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication
NASA Astrophysics Data System (ADS)
Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun
2018-07-01
A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white Gaussian noise (AWGN), frequency-selective Rayleigh, Rician and Nakagami-m fading channels has been evaluated. The considered Plotkin construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme to a coded cooperative scenario. As the relay's effective collaboration has always been pivotal in the design of cooperative communication, an efficient selection criterion for choosing the information bits has been incorporated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded the conventional polar-coded cooperative scheme in the context of OFDM as an appropriate benchmark. The Monte Carlo simulation results revealed that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5 to 0.6 dB over the AWGN channel. This prominent gain in BER performance is made possible by the bit-selection criterion and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under identical conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Comparing thin slices of verbal communication behavior of varying number and duration.
Carcone, April Idalski; Naar, Sylvie; Eggly, Susan; Foster, Tanina; Albrecht, Terrance L; Brogan, Kathryn E
2015-02-01
The aim of this study was to assess the accuracy of thin slices to characterize the verbal communication behavior of counselors and patients engaged in Motivational Interviewing sessions relative to fully coded sessions. Four thin slice samples that varied in number (four versus six slices) and duration (one- versus two-minutes) were extracted from a previously coded dataset. In the parent study, an observational code scheme was used to characterize specific counselor and patient verbal communication behaviors. For the current study, we compared the frequency of communication codes and the correlations among the full dataset and each thin slice sample. Both the proportion of communication codes and strength of the correlation demonstrated the highest degree of accuracy when a greater number (i.e., six versus four) and duration (i.e., two- versus one-minute) of slices were extracted. These results suggest that thin slice sampling may be a useful and accurate strategy to reduce coding burden when coding specific verbal communication behaviors within clinical encounters. We suggest researchers interested in using thin slice sampling in their own work conduct preliminary research to determine the number and duration of thin slices required to accurately characterize the behaviors of interest. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
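As a concrete illustration of the sampling strategy described above, the sketch below extracts evenly spaced thin slices from a fully coded session and compares code proportions. The data layout ((minute, code) pairs) and the even-spacing rule are illustrative assumptions, not the study's actual extraction procedure:

```python
def thin_slices(utterances, session_len, n_slices, slice_len):
    """Return the codes falling inside n_slices evenly spaced windows of
    slice_len minutes each, taken from a session of session_len minutes.
    utterances: list of (minute, code) pairs from a fully coded session."""
    seg = session_len / n_slices  # divide the session into equal segments
    sampled = []
    for i in range(n_slices):
        start = i * seg  # one simple choice: slice opens each segment
        sampled.extend(c for t, c in utterances
                       if start <= t < start + slice_len)
    return sampled

def code_proportions(codes):
    """Relative frequency of each communication code in a sample."""
    total = len(codes)
    return {c: codes.count(c) / total for c in set(codes)}

# Toy 20-minute session with two hypothetical codes 'A' and 'B':
session = [(0.5, 'A'), (1.0, 'B'), (3.0, 'A'), (6.0, 'A'),
           (11.0, 'B'), (16.0, 'A'), (18.0, 'B')]
sample = thin_slices(session, session_len=20, n_slices=4, slice_len=2)
print(code_proportions(sample))  # compare against the full session's mix
```

Comparing `code_proportions(sample)` against the proportions from the full session is the accuracy check the study performs, repeated across slice counts and durations.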
DRG coding practice: a nationwide hospital survey in Thailand
2011-01-01
Background Diagnosis Related Group (DRG) payment is preferred by healthcare reform in various countries but its implementation in resource-limited countries has not been fully explored. Objectives This study was aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice. Methods A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the questionnaire also included 390 non-SCAD hospitals. The questionnaire asked about general demographics of the hospitals, hospital coding structure and process, and also included a set of 63 opinion-oriented items on the current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis. Results SCAD and Non-SCAD hospitals were different in many aspects, especially the number of medical statisticians, experience of medical statisticians and physicians, as well as number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention. Conclusion Hospital providers should not be assumed capable of producing high quality DRG codes, especially in resource-limited settings. PMID:22040256
Ogunrin, Olubunmi A; Daniel, Folasade; Ansa, Victor
2016-12-01
Responsibility for protection of research participants from harm and exploitation rests on Research Ethics Committees and principal investigators. The Nigerian National Code of Health Research Ethics defines the responsibilities of stakeholders in research, so knowledge of it among researchers will likely aid the ethical conduct of research. The levels of awareness and knowledge of the Code among biomedical researchers in southern Nigerian research institutions were assessed. Four institutions were selected using a stratified random sampling technique. Research participants were selected by purposive sampling and completed a pre-tested structured questionnaire. A total of 102 biomedical researchers completed the questionnaires. Thirty percent of the participants were aware of the National Code, although 64% had attended at least one training seminar in research ethics. Twenty-five percent had fairly acceptable knowledge (scores 50%-74%) and 10% had excellent knowledge of the Code (score ≥75%). Ninety-five percent expressed intentions to learn more about the National Code and agreed that it is highly relevant to the ethical conduct of research. Awareness and knowledge of the Code were found to be very limited among biomedical researchers in southern Nigeria. There is a need to improve awareness and knowledge through ethics seminars and training. Use of existing Nigeria-specific online training resources is also encouraged.
Canham-Chervak, Michelle; Steelman, Ryan A; Schuh, Anna; Jones, Bruce H
2016-11-01
Injuries are a barrier to military medical readiness, and overexertion has historically been a leading mechanism of injury among active duty U.S. Army soldiers. Details are needed to inform prevention planning. The Defense Medical Surveillance System (DMSS) was queried for unique medical encounters among active duty Army soldiers consistent with the military injury definition and assigned an overexertion external cause code (ICD-9: E927.0-E927.9) in 2014 (n=21,891). Most (99.7%) were outpatient visits and 60% were attributed specifically to sudden strenuous movement. Among the 41% (n=9,061) of visits with an activity code (ICD-9: E001-E030), running was the most common activity (n=2,891, 32%); among the 19% (n=4,190) with a place of occurrence code (ICD-9: E849.0-E849.9), the leading location was recreation/sports facilities (n=1,332, 32%). External cause codes provide essential details, but the data represented less than 4% of all injury-related medical encounters among U.S. Army soldiers in 2014. Efforts to improve external cause coding are needed, and could be aligned with training on and enforcement of ICD-10 coding guidelines throughout the Military Health System.
Malnutrition: The Importance of Identification, Documentation, and Coding in the Acute Care Setting
Kyle, Greg; Itsiopoulos, Catherine; Naunton, Mark; Luff, Narelle
2016-01-01
Malnutrition is a significant issue in the hospital setting. This cross-sectional, observational study determined the prevalence of malnutrition amongst 189 adult inpatients in a teaching hospital using the Patient-Generated Subjective Global Assessment tool and compared data to control groups for coding of malnutrition to determine the estimated unclaimed financial reimbursement associated with this comorbidity. Fifty-three percent of inpatients were classified as malnourished. Significant associations were found between malnutrition and increasing age, decreasing body mass index, and increased length of stay. Ninety-eight percent of malnourished patients were coded as malnourished in medical records. The results of the medical history audit of patients in control groups showed that between 0.9% and 5.4% of patients were coded as malnourished, which is remarkably lower than the 52% of patients who were coded as malnourished from the point prevalence study data. This is most likely to be primarily due to lack of identification. The estimated unclaimed annual financial reimbursement due to undiagnosed or undocumented malnutrition based on the point prevalence study was AU$8,536,200. The study found that half the patients were malnourished, with older adults being particularly vulnerable. It is imperative that malnutrition be diagnosed and accurately documented and coded, so that appropriate coding, funding reimbursement, and treatment can occur. PMID:27774317
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinh, Nam; Athe, Paridhi; Jones, Christopher
The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each attribute, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.
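The eight-attribute, zero-to-three scoring scheme described above can be represented as a small table. The attribute names follow the abstract, but the helper function and any example scores are illustrative placeholders rather than CASL's actual assessment:

```python
# PCMM attribute list as given in the abstract; scores are per challenge
# problem, each on a 0-3 maturity scale.
PCMM_ATTRIBUTES = [
    "Representation and Geometric Fidelity",
    "Physics and Material Model Fidelity",
    "Software Quality Assurance and Engineering",
    "Code Verification",
    "Solution Verification",
    "Separate Effects Model Validation",
    "Integral Effects Model Validation",
    "Uncertainty Quantification",
]

def summarize(scores):
    """Validate a per-attribute score dict and return (min, mean).

    Hypothetical helper: the minimum flags the least mature attribute,
    the mean gives a rough overall maturity for one challenge problem.
    """
    assert set(scores) == set(PCMM_ATTRIBUTES)
    assert all(0 <= v <= 3 for v in scores.values())
    vals = list(scores.values())
    return min(vals), sum(vals) / len(vals)
```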
Lawrence, Renée H; Tomolo, Anne M
2011-01-01
Background Although practice-based learning and improvement (PBLI) is now recognized as a fundamental and necessary skill set, we are still in need of tools that yield specific information about gaps in knowledge and application to help nurture the development of quality improvement (QI) skills in physicians in a proficient and proactive manner. We developed a questionnaire and coding system as an assessment tool to evaluate and provide feedback regarding PBLI self-efficacy, knowledge, and application skills for residency programs and related professional requirements. Methods Five nationally recognized QI experts/leaders reviewed and completed our questionnaire. Through an iterative process, a coding system based on identifying key variables needed for ideal responses was developed to score project proposals. The coding system comprised 14 variables related to the QI projects, and an additional 30 variables related to the core knowledge concepts related to PBLI. A total of 86 residents completed the questionnaire, and 2 raters coded their open-ended responses. Interrater reliability was assessed by percentage agreement and Cohen κ for individual variables and Lin concordance correlation for total scores for knowledge and application. Discriminative validity (t test to compare known groups) and coefficient of reproducibility as an indicator of construct validity (item difficulty hierarchy) were also assessed. Results Interrater reliability estimates were good (percentage of agreements, above 90%; κ, above 0.4 for most variables; concordances for total scores were R = .88 for knowledge and R = .98 for application). Conclusion Despite the residents' limited range of experiences in the group with prior PBLI exposure, our tool met our goal of differentiating between the 2 groups in our preliminary analyses. Correcting for chance agreement identified some variables that are potentially problematic. 
Although additional evaluation is needed, our tool may prove helpful and provide detailed information about trainees' progress and the curriculum. PMID:22379522
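The chance-corrected agreement statistics reported above (percent agreement plus Cohen κ per variable) are straightforward to reproduce. The following is a generic sketch of Cohen's κ for two raters' nominal codes, not the authors' actual analysis code:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal code frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)  # undefined when p_exp == 1
```

A κ above 0.4, the threshold the abstract cites, is commonly read as at least moderate agreement.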
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, A.; Canepa, S.; Zerkak, O.
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
ERIC Educational Resources Information Center
Green, Crystal D.
2010-01-01
This action research study investigated the perceptions that student participants had on the development of a career exploration model and a career exploration project. The Holland code theory was the primary assessment used for this research study, in addition to the Multiple Intelligences theory and the identification of a role model for the…
Fast Scattering Code (FSC) User's Manual: Version 2
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Dun, M. H.; Pope, D. Stuart
2006-01-01
The Fast Scattering Code (version 2.0) is a computer program for predicting the three-dimensional scattered acoustic field produced by the interaction of known, time-harmonic, incident sound with aerostructures in the presence of potential background flow. The FSC has been developed for use as an aeroacoustic analysis tool for assessing global effects on noise radiation and scattering caused by changes in configuration (geometry, component placement) and operating conditions (background flow, excitation frequency).
Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I
NASA Technical Reports Server (NTRS)
Wey, Thomas Changju; Liu, Nan-suey
2011-01-01
This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so called laminar chemistry model, are then illustrated via the preliminary results from two combustion systems: a nine-element gas fueled combustor and a single-element liquid fueled combustor.
2011-09-01
tectonically active regions such as the Middle East. For example, we previously applied the code to determine the crust and upper mantle structure...Objective Optimization (MOO) for Multiple Datasets The primary goal of our current project is to develop a tool for estimating crustal structure that...be used to obtain crustal velocity structures by modeling broadband waveform, receiver function, and surface wave dispersion data. The code has been
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k{infinity} and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is expected to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
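To make the sampling and tolerance-limit machinery concrete, here is a generic sketch (not the DRAGONv4 workflow itself) of a Latin Hypercube design on the unit hypercube, plus the first-order one-sided Wilks sample-size rule that underlies 95%/95% tolerance limits:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """LHS on [0,1)^d: each dimension is split into n_samples equal strata
    and every stratum is sampled exactly once, unlike simple random
    sampling, which can leave strata empty."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)  # decorrelate strata across dimensions
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

def wilks_n(coverage=0.95, confidence=0.95):
    """Smallest n with 1 - coverage**n >= confidence (one-sided, 1st order)."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n
```

wilks_n() gives 59, so the 500 runs quoted in the abstract comfortably exceed the minimum for a one-sided 95%/95% tolerance limit; in practice the uniform LHS samples would then be mapped through each cross-section's normal distribution.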
Mizumachi, Hideyuki; Sakuma, Megumi; Ikezumi, Mayu; Saito, Kazutoshi; Takeyoshi, Midori; Imai, Noriyasu; Okutomi, Hiroko; Umetsu, Asami; Motohashi, Hiroko; Watanabe, Mika; Miyazawa, Masaaki
2018-05-03
The epidermal sensitization assay (EpiSensA) is an in vitro skin sensitization test method based on gene expression of four markers related to the induction of skin sensitization; the assay uses commercially available reconstructed human epidermis. EpiSensA has exhibited an accuracy of 90% for 72 chemicals, including lipophilic chemicals and pre-/pro-haptens, when compared with the results of the murine local lymph node assay. In this work, a ring study was performed by one lead and two naive laboratories to evaluate the transferability, as well as within- and between-laboratory reproducibilities, of EpiSensA. Three non-coded chemicals (two lipophilic sensitizers and one non-sensitizer) were tested for the assessment of transferability and 10 coded chemicals (seven sensitizers and three non-sensitizers, including four lipophilic chemicals) were tested for the assessment of reproducibility. In the transferability phase, the non-coded chemicals (two sensitizers and one non-sensitizer) were correctly classified at the two naive laboratories, indicating that the EpiSensA protocol was transferred successfully. For the within-laboratory reproducibility, the data generated with three coded chemicals tested in three independent experiments in each laboratory gave consistent predictions within laboratories. For the between-laboratory reproducibility, 9 of the 10 coded chemicals tested once in each laboratory provided consistent predictions among the three laboratories. These results suggested that EpiSensA has good transferability, as well as within- and between-laboratory reproducibility. Copyright © 2018 John Wiley & Sons, Ltd.
Student and staff opinion of electronic capture of data related to clinical activity.
Oliver, R G
1997-02-01
To seek the opinion of staff and students of a new electronic method for collection of data related to student clinical activity. Questionnaire survey. Staff and students in the Department of Child Dental Health, Dental School, Cardiff, and staff in the Community Dental Service who undertake clinical supervision. A questionnaire was circulated to all 2nd and 3rd clinical year dental undergraduate students seeking their opinion on a range of issues associated with the recently introduced bar code system of data gathering of their clinical activity and achievement. A similar questionnaire was circulated to staff who have responsibility for clinical supervision of these students. A total of 102 replies were received. With the exception of 2 aspects, there was no disagreement between staff and students. An overall majority preferred the use of bar codes to other methods of data collection; bar codes were perceived to be more accurate and reliable than other methods; students were satisfied with the method of quality assessment; staff were dissatisfied (P < 0.05). Staff were strongly in favour of extension of the use of bar codes to other clinics, whereas students were less strongly in favour (P < 0.001); there was little enthusiasm to extend bar codes for recording attendance at lectures, seminars and other such activity. The new system has been accepted by staff and students alike. It has proven to be satisfactory for its intended purpose. As a result of this survey, minor adjustments to procedures will take place, and the method of assessment of clinical work will be reconsidered.
Ramesh, S V
2013-09-01
Of late, non-coding RNA (ncRNA)-mediated gene silencing has become an influential tool deliberately deployed to negatively regulate the expression of targeted genes. In addition to the widely employed small interfering RNA (siRNA)-mediated gene silencing approach, other variants like artificial miRNA (amiRNA), miRNA mimics, and artificial trans-acting siRNAs (tasiRNAs) are being explored and successfully deployed in developing non-coding RNA-based genetically modified plants. The ncRNA-based gene manipulations are typified by the mobile nature of silencing signals, interference from viral genome-derived suppressor proteins, and an obligation for meticulous computational analysis to prevent any inadvertent effects. In a broad sense, risk assessment inquiries for genetically modified plants based on the expression of ncRNAs are competently addressed by the environmental risk assessment (ERA) models currently in vogue, which were designed for first-generation transgenic plants based on the expression of heterologous proteins. Nevertheless, transgenic plants functioning on the foundation of ncRNAs warrant due attention with respect to their unique attributes, such as off-target or non-target gene silencing effects, small RNA (sRNA) persistence, food and feed safety assessments, problems in detection and tracking of sRNAs in food, impact of ncRNAs in plant protection measures, effects of mutations, etc. The role of recent developments in sequencing techniques like next generation sequencing (NGS) and the ERA paradigms of the different countries in vogue are also discussed in the context of ncRNA-based gene manipulations.
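As a toy illustration of the meticulous computational analysis the abstract calls for, an off-target pre-screen might flag transcripts containing a match to the guide strand's seed region. Everything here (the function, the seed defined as guide positions 2-8, the data layout) is a simplified sketch, not a validated bioinformatics tool:

```python
def seed_offtargets(sirna_guide, transcripts):
    """Flag transcripts whose sequence contains the reverse complement
    of the guide strand's seed (positions 2-8). transcripts is a dict
    of name -> DNA sequence; RNA input (U) is accepted for the guide."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    seed = sirna_guide.upper().replace("U", "T")[1:8]
    site = "".join(comp[b] for b in reversed(seed))  # seed-match site
    return [name for name, seq in transcripts.items() if site in seq.upper()]
```

A real screen would of course use genome-scale alignment and mismatch tolerance rather than an exact substring test.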
Strengthening Morality and Ethics in Educational Assessment through "Ubuntu" in South Africa
ERIC Educational Resources Information Center
Beets, Peter A. D.
2012-01-01
While assessment is regarded as integral to enhancing the quality of teaching and learning, it is also a practice fraught with moral and ethical issues. An analysis is made of current assessment practices of teachers in South Africa which seem to straddle the domains of accountability and professional codes of conduct. In the process the position…
NASA Astrophysics Data System (ADS)
Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter
2008-07-01
Determination of the seismic performance of existing buildings has become one of the key concepts in structural analysis after recent earthquakes (i.e. the Izmit and Duzce Earthquakes in 1999, the Kobe Earthquake in 1995 and the Northridge Earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building, selected because it was exposed to the 1998 Adana-Ceyhan Earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.92 m², with five spans in the x and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake and a retrofitting process was suggested by the authorities, with shear walls added to the system. The computations show that performing the linear methods of analysis according to either Eurocode 8 or TEC'07 independently produces similar performance levels of collapse for the critical storey of the structure.
The computed base shear value according to Eurocode 8 is much higher than the requirement of the Turkish Earthquake Code, even though the selected ground conditions have the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum for Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are to be checked against residual moment capacities; however, the chord rotations of primary ductile elements must be checked for Eurocode safety verifications. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.
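The soil-factor effect noted above can be seen directly in the Eurocode 8 Type 1 horizontal elastic spectrum. The branch structure below follows the EN 1998-1 definition; the corner periods and soil factor used in any example should be treated as assumed ground-type parameters, not values taken from the paper:

```python
def ec8_elastic_spectrum(T, ag, S, TB, TC, TD, eta=1.0):
    """Horizontal elastic response spectrum Se(T) per EN 1998-1.

    ag: design ground acceleration; S: soil factor; TB, TC, TD: corner
    periods; eta: damping correction (1.0 for 5% damping). The soil
    factor S multiplies the whole ordinate, which is why the EC8 base
    shear can exceed a demand computed without such a factor.
    """
    if T <= TB:  # rising branch
        return ag * S * (1 + T / TB * (2.5 * eta - 1))
    if T <= TC:  # constant-acceleration plateau
        return ag * S * 2.5 * eta
    if T <= TD:  # constant-velocity branch
        return ag * S * 2.5 * eta * TC / T
    return ag * S * 2.5 * eta * TC * TD / T ** 2  # constant displacement
```

With assumed ground type C parameters (S = 1.15, TB = 0.2 s, TC = 0.6 s, TD = 2.0 s), the plateau ordinate is 15% higher than on rock (S = 1.0) for the same ag.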
Indulski, J A; Rolecki, R
1994-01-01
In view of the present and proposed amendments to the Labor Code, as well as bearing in mind the anticipated harmonization of regulations in this area with those of the EEC, the authors emphasize the need for a well-developed methodology for assessing chemical safety in the occupational environment, with special reference to health effects in people exposed to chemicals. Methods for assessing health risk induced by work under conditions of exposure to chemicals were divided into: methods for assessing technological/processing risk, and methods for assessing health risk related to the toxic effects of chemicals. The need for developing means of risk communication, in order to secure proper risk perception among people exposed to chemicals and among the risk managers responsible for prevention of chemical hazards, was also stressed. It is suggested to establish a centre for chemical substances in order to settle all issues pertaining to human exposure to chemicals. The centre would be responsible, under the provisions of the Chemical Substances Act, for the qualitative and quantitative analysis of the present situation and for the development of guidelines on the assessment of health risk among persons exposed to chemicals.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
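The division of labor in the concatenated scheme (an outer code recovering packet erasures, an inner code correcting payload bit errors) can be illustrated with toy stand-ins: XOR parity in place of Reed-Solomon, and triple repetition in place of RCPC. Real RS/RCPC codes are far stronger and rate-adaptive; this sketch only shows the layering, and equal-length packets are assumed:

```python
from functools import reduce

def add_parity(packets):
    """Toy outer code: append one XOR parity packet (stand-in for RS);
    any single lost packet can then be rebuilt from the survivors."""
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))
    return packets + [parity]

def recover(received):
    """Rebuild the single packet marked None, then drop the parity."""
    idx = received.index(None)
    survivors = [p for p in received if p is not None]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
    out = list(received)
    out[idx] = rebuilt
    return out[:-1]

def rep3_encode(bits):
    """Toy inner code: triple repetition (stand-in for RCPC) corrects a
    single bit flip within each 3-bit group by majority vote."""
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(bits):
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]
```

In the paper's architecture the inner code rate would additionally be tuned jointly with the source coder (the JSCC aspect) and differentiated at the edge proxy for the wired and wireless hops.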
Analysis of film cooling in rocket nozzles
NASA Technical Reports Server (NTRS)
Woodbury, Keith A.; Karr, Gerald R.
1992-01-01
Progress during the reporting period is summarized. Analysis of film cooling in rocket nozzles by computational fluid dynamics (CFD) computer codes is desirable for two reasons. First, it allows prediction of resulting flow fields within the rocket nozzle, in particular the interaction of the coolant boundary layer with the main flow. This facilitates evaluation of potential cooling configurations with regard to total thrust, etc., before construction and testing of any prototype. Secondly, CFD simulation of film cooling allows for assessment of the effectiveness of the proposed cooling in limiting nozzle wall temperature rises. This latter objective is the focus of the current work. The desired objective is to use the Finite Difference Navier Stokes (FDNS) code to predict wall heat fluxes or wall temperatures in rocket nozzles. As prior work has revealed that the FDNS code is deficient in the thermal modeling of boundary conditions, the first step is to correct these deficiencies in the FDNS code. Next, these changes must be tested against available data. Finally, the code will be used to model film cooling of a particular rocket nozzle. The third task of this research, using the modified code to compute the flow of hot gases through a nozzle, is described.
Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform
NASA Astrophysics Data System (ADS)
Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic
2015-11-01
The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The Fusion Plasma Synthetic Diagnostics Platform now has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVA-K eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.
NASA Technical Reports Server (NTRS)
Stoll, Frederick
1993-01-01
The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.
van der Mei, Sijrike F; Dijkers, Marcel P J M; Heerkens, Yvonne F
2011-12-01
To examine to what extent the concept and the domains of participation as defined in the International Classification of Functioning, Disability and Health (ICF) are represented in general cancer-specific health-related quality of life (HRQOL) instruments. Using the ICF linking rules, two coders independently extracted the meaningful concepts of ten instruments and linked these to ICF codes. The proportion of concepts that could be linked to ICF codes ranged from 68 to 95%. Although all instruments contained concepts linked to Participation (Chapters d7-d9 of the classification of 'Activities and Participation'), the instruments covered only a small part of all available ICF codes. The proportion of ICF codes in the instruments that were participation related ranged from 3 to 35%. 'Major life areas' (d8) was the most frequently used Participation Chapter, with d850 'remunerative employment' as the most used ICF code. The number of participation-related ICF codes covered in the instruments is limited. General cancer-specific HRQOL instruments only assess social life of cancer patients to a limited degree. This study's information on the content of these instruments may guide researchers in selecting the appropriate instrument for a specific research purpose.
Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko
2017-10-01
We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the Particle and Heavy Ion Transport code System (PHITS) for patient-specific dosimetry in targeted radionuclide therapy (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with published values obtained using other Monte Carlo codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2010-01-01
Codes for predicting supersonic jet mixing and broadband shock-associated noise were assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. Two types of codes were used to make predictions. Fast running codes containing empirical models were used to compute both the mixing noise component and the shock-associated noise component of the jet noise spectrum. One Reynolds-averaged, Navier-Stokes-based code was used to compute only the shock-associated noise. To enable the comparisons of the predicted component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise components. Comparisons were made for 1/3-octave spectra and some power spectral densities using data from jets operating at 24 conditions covering essentially 6 fully expanded Mach numbers with 4 total temperature ratios.
Experimental aerothermodynamic research of hypersonic aircraft
NASA Technical Reports Server (NTRS)
Cleary, Joseph W.
1987-01-01
The 2-D and 3-D advanced computer codes being developed for use in the design of such hypersonic aircraft as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide the hypersonic experimental database required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics necessary for these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5-Foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.
Scoring the Strengths and Weaknesses of Underage Drinking Laws in the United States
Fell, James C.; Thomas, Sue; Scherer, Michael; Fisher, Deborah A.; Romano, Eduardo
2015-01-01
Several studies have examined the impact of a number of minimum legal drinking age 21 (MLDA-21) laws on underage alcohol consumption and alcohol-related crashes in the United States. These studies have contributed to our understanding of how alcohol control laws affect drinking and driving among those who are under age 21. However, much of the extant literature examining underage drinking laws uses a “Law/No law” coding, which may obscure the variability inherent in each law. Previous literature has demonstrated that inclusion of law strength may affect outcomes and overall data fit when compared to “Law/No law” coding. In an effort to assess the relative strength of states’ underage drinking legislation, a coding system was developed in 2006 and applied to 16 MLDA-21 laws. The current article updates the previous endeavor and outlines a detailed strength-coding mechanism for the current 20 MLDA-21 laws. PMID:26097775
Application of the DART Code for the Assessment of Advanced Fuel Behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J.; Totev, T.
2007-07-01
The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically based fuel and reaction-product swelling models, a one-dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium-molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness; fuel, matrix, and reaction-product volume fractions; and plate thickness changes. The DART fission gas behavior model has been validated against UO2 fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in the U-Mo/Al interaction product. (authors)
NASA Astrophysics Data System (ADS)
Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.
2014-06-01
Knowledge of the decay heat quantity and the associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT, and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, a CEA uncertainty analysis platform. As preliminary applications, single thermal fission of uranium 235 and plutonium 239 and a PWR UOx spent fuel cell are investigated.
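As a minimal illustration of the probabilistic approach described above, the sketch below propagates assumed Gaussian uncertainties on decay constants through the summation decay heat formula P = Σ λᵢ·Nᵢ·Eᵢ by Monte Carlo sampling. The nuclide inventory, uncertainty values, and function names are hypothetical; this is not MENDEL or URANIE code, only a sketch of the sampling idea.

```python
import random
import statistics

# Hypothetical nuclide inventory: (atoms N, decay constant lambda [1/s],
# mean energy per decay [MeV], relative 1-sigma uncertainty on lambda).
# Values are illustrative only, not evaluated nuclear data.
NUCLIDES = [
    (1.0e20, 3.0e-7, 0.8, 0.02),
    (5.0e19, 1.2e-6, 1.5, 0.05),
    (2.0e19, 8.0e-8, 0.4, 0.10),
]

MEV_TO_J = 1.602e-13

def decay_heat(sampled_lambdas=None):
    """Decay heat P = sum(lambda_i * N_i * E_i), in watts."""
    total = 0.0
    for i, (n_atoms, lam, e_mev, _) in enumerate(NUCLIDES):
        lam_eff = sampled_lambdas[i] if sampled_lambdas else lam
        total += lam_eff * n_atoms * e_mev * MEV_TO_J
    return total

def propagate(n_trials=5000, seed=1):
    """Sample each decay constant from a normal distribution and return
    the mean and standard deviation of the resulting decay heat."""
    rng = random.Random(seed)
    heats = []
    for _ in range(n_trials):
        sampled = [rng.gauss(lam, lam * rel) for (_, lam, _, rel) in NUCLIDES]
        heats.append(decay_heat(sampled))
    return statistics.mean(heats), statistics.stdev(heats)

mean_heat, sigma_heat = propagate()
```

Because the decay heat is linear in the decay constants, the sampled mean stays close to the nominal value while the spread reflects the assumed decay data uncertainties.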
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
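The Feynman histogram observables mentioned above derive from the Feynman-Y statistic: the excess variance-to-mean ratio of counts collected in fixed time gates, which is zero for an uncorrelated (Poisson) source and positive when neutrons arrive in correlated bursts, as in fission chains. The sketch below demonstrates the statistic on synthetic data; the burst model, rates, and multiplicities are illustrative only and are not taken from the benchmark measurements.

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's method for a Poisson-distributed integer (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def feynman_y(counts):
    """Feynman-Y statistic: excess variance-to-mean ratio of gate counts.
    Approximately 0 for uncorrelated counts, > 0 for correlated bursts."""
    mean = statistics.mean(counts)
    return statistics.pvariance(counts) / mean - 1.0

rng = random.Random(0)
# Uncorrelated source: single neutrons arriving at random (Poisson gates).
y_poisson = feynman_y([poisson(5.0, rng) for _ in range(20000)])
# Toy correlated source: each event deposits a burst of 3 counts in one gate,
# so the variance is inflated relative to the mean and Y rises toward 2.
y_correlated = feynman_y([3 * poisson(5.0, rng) for _ in range(20000)])
```

In a real measurement the gate counts come from list-mode detector data at many gate widths, producing the Feynman histograms the benchmarks compare against.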
Accuracy of clinical coding from 1210 appendicectomies in a British district general hospital.
Bhangu, Aneel; Nepogodiev, Dmitri; Taylor, Caroline; Durkin, Natalie; Patel, Rajan
2012-01-01
The primary aim of this study was to assess the accuracy of clinical coding in identifying negative appendicectomies. The secondary aim was to analyse trends over time in rates of simple, complex (gangrenous or perforated) and negative appendicectomies. Retrospective review of 1210 patients undergoing emergency appendicectomy during a five-year period (2006-2010). Histopathology reports were taken as the gold standard for diagnosis and compared to clinical coding lists. Clinical coding is the process by which non-medical administrators apply standardised diagnostic codes to patients, based upon clinical notes at discharge. These codes then contribute to national databases. Statistical analysis included correlation studies and regression analyses. Clinical coding had only moderate correlation with histopathology, with an overall kappa of 0.421. Annual kappa values varied between 0.378 and 0.500. Overall, 14% of patients were incorrectly coded as having had appendicitis when in fact they had a histopathologically normal appendix (153/1107), whereas 4% were falsely coded as having received a negative appendicectomy when they had appendicitis (48/1107). There was an overall significant fall and then rise in the rate of simple appendicitis (B coefficient -0.239 (95% confidence interval -0.426, -0.051), p = 0.014) but no change in the rate of complex appendicitis (B coefficient 0.008 (-0.015, 0.031), p = 0.476). Clinical coding for negative appendicectomy was unreliable. Negative rates may be higher than suspected. This has implications for the validity of national database analyses. Using this form of data as a quality indicator for appendicitis should be reconsidered until its quality is improved. Copyright © 2012 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
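The kappa statistic reported above (overall κ of 0.421) measures agreement between coded diagnoses and histopathology beyond what chance alone would produce. A minimal sketch of the computation, using illustrative labels rather than the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two label sequences over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    p_exp = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative only (not the study's data): clinical codes vs histopathology.
coded = ["appendicitis", "appendicitis", "normal",
         "appendicitis", "normal", "normal"]
truth = ["appendicitis", "normal", "normal",
         "appendicitis", "appendicitis", "normal"]
kappa = cohens_kappa(coded, truth)  # 4/6 observed agreement, 0.5 by chance
```

Here observed agreement is 0.667 against 0.5 expected by chance, giving κ = 1/3, which, like the study's 0.421, would be read as only moderate agreement.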
Early Childhood Diarrhea Predicts Cognitive Delays in Later Childhood Independently of Malnutrition.
Pinkerton, Relana; Oriá, Reinaldo B; Lima, Aldo A M; Rogawski, Elizabeth T; Oriá, Mônica O B; Patrick, Peter D; Moore, Sean R; Wiseman, Benjamin L; Niehaus, Mark D; Guerrant, Richard L
2016-11-02
Understanding the complex relationship between early childhood infectious diseases, nutritional status, poverty, and cognitive development is significantly hindered by the lack of studies that adequately address confounding between these variables. This study assesses the independent contributions of early childhood diarrhea (ECD) and malnutrition to cognitive impairment in later childhood. A cohort of 131 children from a shantytown community in northeast Brazil was monitored from birth to 24 months for diarrhea and anthropometric status. Cognitive assessments including the Test of Nonverbal Intelligence (TONI), coding tasks (WISC-III), and verbal fluency (NEPSY) were completed when children were an average of 8.4 years of age (range = 5.6-12.7 years). Multivariate analysis of variance models were used to assess the individual as well as combined effects of ECD and stunting on later childhood cognitive performance. ECD, height-for-age (HAZ) at 24 months, and weight-for-age (WAZ) at 24 months were significant univariate predictors of the study's three cognitive outcomes: TONI, coding, and verbal performance (P < 0.05). Multivariate models showed that ECD remained a significant predictor, after adjusting for the effect of 24-month HAZ and WAZ, for both TONI (HAZ, P = 0.029 and WAZ, P = 0.006) and coding (HAZ, P = 0.025 and WAZ, P = 0.036) scores. WAZ and HAZ were also significant predictors after adjusting for ECD. ECD remained a significant predictor of coding (WISC-III) after household income was considered (P = 0.006). This study provides evidence that ECD and stunting may have independent effects on children's intellectual function well into later childhood. © The American Society of Tropical Medicine and Hygiene.
Development and Assessment of CTF for Pin-resolved BWR Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S
2017-01-01
CTF is the modernized and improved version of the subchannel code COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal-hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis in CASL on producing high-quality tools that follow a regimented software quality assurance plan. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3×3 facility and the BWR Full-Size Fine-Mesh Bundle Tests (BFBT). Agreement with both experimental databases is reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, the analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper summarizes these recent developments and some of the two-phase assessments that have been performed on CTF.
Designing and Assessing Learning
ERIC Educational Resources Information Center
Quan, Hong; Liu, Dandan; Cun, Xiangqin; Lu, Yingchun
2009-01-01
This paper analyses the design, implementation and assessment of a level 2 module for non-English major students in higher vocational and professional education. 1132001 is the code of a module that uses active methods to teach college English in China. The paper specifically reflects on the module's advantages and defects for developing and improving learning…
Guide for the Development of Safety Assessment Report (SAR)
1987-08-01
[Front matter garbled in scanning: distribution/availability codes and the table of contents of the Safety Assessment Report.] Potential hazards associated with the maintenance of the turbine engine (i.e., use of cleaning agents) are not addressed in the accompanying…
Performance Analysis of GAME: A Generic Automated Marking Environment
ERIC Educational Resources Information Center
Blumenstein, Michael; Green, Steve; Fogelman, Shoshana; Nguyen, Ann; Muthukkumarasamy, Vallipuram
2008-01-01
This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages based on the "structure" of the source code and the correctness…
Using Minute Papers to Determine Student Cognitive Development Levels
ERIC Educational Resources Information Center
Vella, Lia
2015-01-01
Can anonymous written feedback collected during classroom assessment activities be used to assess students' cognitive development levels? After library instruction in a first-year engineering design class, students submitted minute papers that included answers to "what they are left wondering." Responses were coded into low, medium and…
What Does the CBM-Maze Test Measure?
ERIC Educational Resources Information Center
Muijselaar, Marloes M. L.; Kendeou, Panayiota; de Jong, Peter F.; van den Broek, Paul W.
2017-01-01
In this study, we identified the code-related (decoding, fluency) and language comprehension (vocabulary, listening comprehension) demands of the CBM-Maze test, a formative assessment, and compared them to those of the Gates-MacGinitie test, a standardized summative assessment. The demands of these reading comprehension tests and their…
Theory-Based Assessment in Environmental Education: A Tool for Formative Evaluation
ERIC Educational Resources Information Center
Granit-Dgani, Dafna; Kaplan, Avi; Flum, Hanoch
2017-01-01
This article reports on the development of a theory-informed assessment instrument for use in evaluating environmental education programs. The instrument involves coding learners' brief reflective writing on five established educational and social psychological constructs that correspond to five important goals of environmental education:…
Assessment of visual communication by information theory
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.
1994-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
…health assessment program that evaluates quantitative and qualitative risk information on effects that may result from exposure to specific chemical…, National Center for Environmental Assessment (mail code: 8601P), Office of Research and Development, U.S.…
Assessing Question Quality Using NLP
ERIC Educational Resources Information Center
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S.
2017-01-01
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Nicholson, Amanda; Ford, Elizabeth; Davies, Kevin A.; Smith, Helen E.; Rait, Greta; Tate, A. Rosemary; Petersen, Irene; Cassell, Jackie
2013-01-01
Background Research using electronic health records (EHRs) relies heavily on coded clinical data. Due to variation in coding practices, it can be difficult to aggregate the codes for a condition in order to define cases. This paper describes a methodology to develop ‘indicator markers’ found in patients with early rheumatoid arthritis (RA); these are a broader range of codes which may allow a probabilistic case definition to use in cases where no diagnostic code is yet recorded. Methods We examined EHRs of 5,843 patients in the General Practice Research Database, aged ≥30y, with a first coded diagnosis of RA between 2005 and 2008. Lists of indicator markers for RA were developed initially by panels of clinicians drawing up code-lists and then modified based on scrutiny of available data. The prevalence of indicator markers, and their temporal relationship to RA codes, was examined in patients from 3y before to 14d after recorded RA diagnosis. Findings Indicator markers were common throughout EHRs of RA patients, with 83.5% having 2 or more markers. 34% of patients received a disease-specific prescription before RA was coded; 42% had a referral to rheumatology, and 63% had a test for rheumatoid factor. 65% had at least one joint symptom or sign recorded and in 44% this was at least 6-months before recorded RA diagnosis. Conclusion Indicator markers of RA may be valuable for case definition in cases which do not yet have a diagnostic code. The clinical diagnosis of RA is likely to occur some months before it is coded, shown by markers frequently occurring ≥6 months before recorded diagnosis. It is difficult to differentiate delay in diagnosis from delay in recording. Information concealed in free text may be required for the accurate identification of patients and to assess the quality of care in general practice. PMID:23451024
Wilhelms, Susanne B; Huss, Fredrik R; Granath, Göran; Sjöberg, Folke
2010-06-01
To compare three International Classification of Diseases code abstraction strategies that have previously been reported to mirror severe sepsis, by examining retrospective Swedish national data from 1987 to 2005 inclusive. Retrospective cohort study. Swedish hospital discharge database. All hospital admissions during the period 1987 to 2005 were extracted, and these patients were screened for severe sepsis using the three International Classification of Diseases code abstraction strategies, which were adapted for the Swedish version of the International Classification of Diseases. Two code abstraction strategies included both International Classification of Diseases, Ninth Revision and International Classification of Diseases, Tenth Revision codes, whereas one included International Classification of Diseases, Tenth Revision codes alone. None. The three International Classification of Diseases code abstraction strategies identified 37,990, 27,655, and 12,512 patients, respectively, with severe sepsis. The incidence increased over the years, reaching 0.35 per 1000, 0.43 per 1000, and 0.13 per 1000 inhabitants, respectively. During the International Classification of Diseases, Ninth Revision period, we found 17,096 unique patients and of these, only 2789 patients (16%) met two of the code abstraction strategy lists and 14,307 (84%) met one list. The International Classification of Diseases, Tenth Revision period included 46,979 unique patients, of whom 8% met the criteria of all three International Classification of Diseases code abstraction strategies, 7% met two, and 84% met one only. The three different International Classification of Diseases code abstraction strategies generated three almost separate cohorts of patients with severe sepsis. Thus, the International Classification of Diseases code abstraction strategies for recording severe sepsis in use today provide an unsatisfactory way of estimating the true incidence of severe sepsis. Further studies relating International Classification of Diseases code abstraction strategies to the American College of Chest Physicians/Society of Critical Care Medicine scores are needed.
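The overlap pattern reported above (the fraction of unique patients meeting one, two, or all three code abstraction strategies) can be computed directly once each cohort is represented as a set of patient IDs. A small sketch with made-up IDs, not the Swedish registry data:

```python
from collections import Counter

def overlap_profile(cohorts):
    """Given case-definition cohorts as sets of patient IDs, count how many
    unique patients are captured by exactly 1, exactly 2, ... definitions."""
    hits = Counter()
    for cohort in cohorts:
        for pid in cohort:
            hits[pid] += 1
    return dict(Counter(hits.values()))

# Toy cohorts from three hypothetical code lists (patient IDs illustrative).
profile = overlap_profile([{1, 2, 3, 4}, {3, 4, 5}, {4, 6}])
# profile maps "number of definitions met" -> "number of unique patients"
```

With these toy sets, four patients meet exactly one definition, one meets two, and one meets all three, mirroring the mostly disjoint cohorts the study describes.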
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
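OSCAR's decision-tree model, in which successive job-category selections lead to a hidden 4-digit SOC code at the leaf, can be sketched as a walk down a nested mapping. The fragment below is a toy classification for illustration, not the real UK SOC 2000 hierarchy or OSCAR's implementation:

```python
# Toy fragment of a hierarchical job classification (NOT the real SOC 2000
# structure); each leaf holds the hidden 4-digit occupation code.
TREE = {
    "Health professionals": {
        "Medical practitioners": "2211",
        "Pharmacists": "2213",
    },
    "Science and technology professionals": {
        "Chemists": "2111",
        "Software professionals": "2132",
    },
}

def code_for(selections):
    """Walk the decision tree with the participant's successive selections
    and return the hidden code stored at the leaf."""
    node = TREE
    for choice in selections:
        node = node[choice]
    return node

soc_code = code_for(["Health professionals", "Pharmacists"])  # "2213"
```

The participant never sees the codes; they only pick increasingly specific categories, and the final free-text job title can then be compared against the leaf code for validation, as the study did with κ.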
Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.
2016-01-01
BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713
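The positive predictive value used above is simply the fraction of ED visits flagged by an ICD-9 code list that truly were blood or body fluid exposures. A minimal sketch with hypothetical visits and codes (the visit data and the choice of codes are illustrative, not the study's):

```python
# Hypothetical ED visits: id -> (assigned ICD-9 codes, truly an exposure?).
VISITS = {
    1: ({"E920.5"}, True),
    2: ({"V15.85"}, True),
    3: ({"E920.5"}, False),
    4: ({"079.99"}, False),
    5: ({"V15.85", "E920.5"}, True),
}

def flagged_by(code_set):
    """Visit IDs whose assigned codes intersect the search code list."""
    return {vid for vid, (codes, _) in VISITS.items() if codes & code_set}

def positive_predictive_value(flagged, truth):
    """PPV = true positives / all flagged positives."""
    return len(flagged & truth) / len(flagged)

truth = {vid for vid, (_, is_exposure) in VISITS.items() if is_exposure}
ppv = positive_predictive_value(flagged_by({"E920.5", "V15.85"}), truth)
```

Here the code list flags four visits, three of which are true exposures, so the PPV is 0.75; recursive partitioning in the study amounts to searching for the code subset that maximizes this quantity while keeping misclassification low.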
Recent MELCOR and VICTORIA Fission Product Research at the NRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, N.E.; Cole, R.K.; Gauntt, R.O.
1999-01-21
The MELCOR and VICTORIA severe accident analysis codes, which were developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission, are designed to estimate fission product releases during nuclear reactor accidents in light water reactors. MELCOR is an integrated plant-assessment code that models the key phenomena in adequate detail for risk-assessment purposes. VICTORIA is a more specialized fission-product code that provides detailed modeling of chemical reactions and aerosol processes under the high-temperature conditions encountered in the reactor coolant system during a severe reactor accident. This paper focuses on recent enhancements and assessments of the two codes in the area of fission product chemistry modeling. Recently, a model for iodine chemistry in aqueous pools in the containment building was incorporated into the MELCOR code. The model calculates dissolution of iodine into the pool and releases of organic and inorganic iodine vapors from the pool into the containment atmosphere. The main purpose of this model is to evaluate the effect of long-term revolatilization of dissolved iodine. Inputs to the model include the dose rate in the pool, the amount of chloride-containing polymer, such as Hypalon, and the amount of buffering agents in the containment. Model predictions are compared against the Radioiodine Test Facility (RTF) experiments conducted by Atomic Energy of Canada Limited (AECL), specifically International Standard Problem 41. Improvements to VICTORIA's chemical reaction models were implemented as a result of recommendations from a peer review of VICTORIA that was completed last year. Specifically, an option is now included to model aerosols and deposited fission products as three condensed phases in addition to the original option of a single condensed phase. The three-condensed-phase model results in somewhat higher predicted fission product volatilities than does the single-condensed-phase model. Modeling of UO2 thermochemistry was also improved, resulting in better prediction of vaporization of uranium from fuel, which can react with released fission products to affect their volatility. This model also improves the prediction of fission product release rates from fuel. Finally, recent comparisons of MELCOR and VICTORIA with International Standard Problem 40 (STORM) data are presented. These comparisons focus on predicted thermophoretic deposition, which is the dominant deposition mechanism. Sensitivity studies were performed with the codes to examine experimental and modeling uncertainties.
Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee
2011-01-01
The integration of human space applications risk projection models of organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN is a baryon transport code and SUMDOSE is an output data processing code; the risk projection models of organ doses and ARR take the output from BRYNTRN as input to their calculations. Because BRYNTRN operation requires extensive input preparation, a graphical user interface (GUI) is needed to handle its input and output so that the response models can be connected to it easily and correctly. The GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides the seamless integration of input and output manipulations required to operate the ARRBOD modules (BRYNTRN, SUMDOSE, and the ARR probabilistic response model) in assessing the acute risk and organ doses from significant Solar Particle Events (SPEs). The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations staff in the Mission Operations Directorate (MOD), and space biophysics researchers. The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using gender-specific organ dose assessments, in order to avoid ARR symptoms and stay within the current NASA short-term dose limits.
The quantified evaluation of ARR severities for any given shielding configuration and a specified EVA or other mission scenario can be used to guide alternative solutions for attaining the objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, from which operational strategies and capabilities can be developed to protect astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.
Haylen, Bernard T; Lee, Joseph; Maher, Chris; Deprest, Jan; Freeman, Robert
2014-06-01
Results of interobserver reliability studies for the International Urogynecological Association-International Continence Society (IUGA-ICS) Complication Classification coding can be greatly influenced by study design factors such as participant instruction, motivation, and test-question clarity. We attempted to optimize these factors. After a 15-min instructional lecture with eight clinical case examples (including images) and with classification/coding charts available, those clinicians attending an IUGA Surgical Complications workshop were presented with eight similar-style test cases over 10 min and asked to code them using the Category, Time and Site classification. Answers were compared to predetermined correct codes obtained by five instigators of the IUGA-ICS prostheses and grafts complications classification. Prelecture and postquiz participant confidence levels using a five-step Likert scale were assessed. Complete sets of answers to the questions (24 codings) were provided by 34 respondents, only three of whom reported prior use of the charts. Average score [n (%)] out of eight, as well as median score (range) for each coding category were: (i) Category: 7.3 (91 %); 7 (4-8); (ii) Time: 7.8 (98 %); 7 (6-8); (iii) Site: 7.2 (90 %); 7 (5-8). Overall, the equivalent calculations (out of 24) were 22.3 (93 %) and 22 (18-24). Mean prelecture confidence was 1.37 (out of 5), rising to 3.85 postquiz. Urogynecologists had the highest correlation with correct coding, followed closely by fellows and general gynecologists. Optimizing training and study design can lead to excellent results for interobserver reliability of the IUGA-ICS Complication Classification coding, with increased participant confidence in complication-coding ability.
Ensemble coding of face identity is present but weaker in congenital prosopagnosia.
Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F
2018-03-01
Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when the availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding by varying whether test faces were the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than controls' in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits.
Onufrak, Stephen J; Park, Sohyun; Wilking, Cara
2014-04-17
Caloric intake among children could be reduced if sugar-sweetened beverages were replaced by plain water. School drinking water infrastructure is dictated in part by state plumbing codes, which generally require a minimum ratio of drinking fountains to students. Actual availability of drinking fountains in schools and how availability differs according to plumbing codes is unknown. We abstracted state plumbing code data and used the 2010 YouthStyles survey data from 1,196 youth aged 9 through 18 years from 47 states. We assessed youth-reported school drinking fountain or dispenser availability and differences in availability according to state plumbing codes, sociodemographic characteristics, and area-level characteristics. Overall, 57.3% of youth reported that drinking fountains or dispensers in their schools were widely available, 40.1% reported there were only a few, and 2.6% reported that there were no working fountains. Reported fountain availability differed significantly (P < .01) by race/ethnicity, census region, the fountain to student ratio specified in plumbing codes, and whether plumbing codes allowed substitution of nonplumbed water sources for plumbed fountains. "Widely available" fountain access ranged from 45.7% in the West to 65.4% in the Midwest and was less common where state plumbing codes required 1 fountain per more than 100 students (45.4%) compared with 1 fountain per 100 students (60.1%) or 1 fountain per fewer than 100 students (57.6%). Interventions designed to increase consumption of water may want to consider the role of plumbing codes in availability of school drinking fountains.
Cooper, P David; Smart, David R
2017-06-01
Recent Australian attempts to facilitate disinvestment in healthcare, by identifying instances of 'inappropriate' care from large Government datasets, are subject to significant methodological flaws. Amongst other criticisms is the fact that the Government datasets utilized for this purpose correlate poorly with datasets collected by relevant professional bodies. Government data derive from official hospital coding, collected retrospectively by clerical personnel, whilst professional body data derive from unit-specific databases, collected contemporaneously with care by clinical personnel. The aim was to assess the accuracy of official hospital coding data for hyperbaric services in a tertiary referral hospital. All official hyperbaric-relevant coding data submitted to the relevant Australian Government agencies by the Royal Hobart Hospital, Tasmania, Australia for the financial year 2010-2011 were reviewed and compared against actual hyperbaric unit activity as determined by reference to original source documents. Hospital coding data contained one or more errors in diagnoses and/or procedures in 70% of patients treated with hyperbaric oxygen that year. Multiple discrete error types were identified, including (but not limited to): missing patients; missing treatments; 'additional' treatments; 'additional' patients; incorrect procedure codes and incorrect diagnostic codes. Incidental observations of errors in surgical, anaesthetic and intensive care coding within this cohort suggest that the problems are not restricted to the specialty of hyperbaric medicine alone. Publications from other centres indicate that these problems are not unique to this institution or State. Current Government datasets are irretrievably compromised and not fit for purpose. Attempting to inform the healthcare policy debate by reference to these datasets is inappropriate. Urgent clinical engagement with hospital coding departments is warranted.
Nouraei, S A R; Hudovsky, A; Frampton, A E; Mufti, U; White, N B; Wathen, C G; Sandhu, G S; Darzi, A
2015-06-01
Clinical coding is the translation of clinical activity into a coded language. Coded data drive hospital reimbursement and are used for audit and research, and benchmarking and outcomes management purposes. We undertook a 2-center audit of coding accuracy across surgery. Clinician-auditor multidisciplinary teams reviewed the coding of 30,127 patients and assessed accuracy at primary and secondary diagnosis and procedure levels, morbidity level, complications assignment, and financial variance. Postaudit data of a randomly selected sample of 400 cases were reaudited by an independent team. At least 1 coding change occurred in 15,402 patients (51%). There were 3911 (13%) and 3620 (12%) changes to primary diagnoses and procedures, respectively. In 5183 (17%) patients, the Health Resource Grouping changed, resulting in income variance of £3,974,544 (+6.2%). The morbidity level changed in 2116 (7%) patients (P < 0.001). The number of assigned complications rose from 2597 (8.6%) to 2979 (9.9%) (P < 0.001). Reaudit resulted in further primary diagnosis and procedure changes in 8.7% and 4.8% of patients, respectively. The coded data are a key engine for knowledge-driven health care provision. They are used, increasingly at individual surgeon level, to benchmark performance. Surgical clinical coding is prone to subjectivity, variability, and error (SVE). Having a specialty-by-specialty understanding of the nature and clinical significance of informatics variability and adopting strategies to reduce it, are necessary to allow accurate assumptions and informed decisions to be made concerning the scope and clinical applicability of administrative data in surgical outcomes improvement.
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND, and assesses the performance of the parallel method implemented in MCBEND.
The Environment-Power System Analysis Tool development program. [for spacecraft power supplies]
NASA Technical Reports Server (NTRS)
Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Wilcox, Katherine G.; Stevens, N. John; Putnam, Rand M.; Roche, James C.
1989-01-01
The Environment Power System Analysis Tool (EPSAT) is being developed to provide engineers with the ability to assess the effects of a broad range of environmental interactions on space power systems. A unique user-interface-data-dictionary code architecture oversees a collection of existing and future environmental modeling codes (e.g., neutral density) and physical interaction models (e.g., sheath ionization). The user interface presents the engineer with tables, graphs, and plots which, under supervision of the data dictionary, are automatically updated in response to parameter changes. EPSAT thus provides the engineer with a comprehensive and responsive environmental assessment tool, and the scientist with a framework into which new environmental or physical models can be easily incorporated.